Search Results

Search found 11640 results on 466 pages for 'figure'.


  • Ops Center 12c - Provisioning Solaris Using a Card-Based NIC

    - by scottdickson
    It's been a long time since last I added something here, but having some conversations this last week, I got inspired to update things. I've been spending a lot of time with Ops Center for managing and installing systems these days.  So, I suspect a number of my upcoming posts will be in that area. Today, I want to look at how to provision Solaris using Ops Center when your network is not connected to one of the built-in NICs.  We'll talk about how this can work for both Solaris 10 and Solaris 11, since they are pretty similar.  In both cases, WANboot is a key piece of the story. Here's what I want to do:  I have a Sun Fire T2000 server with a Quad-GbE nxge card installed.  The only network is connected to port 2 on that card rather than the built-in network interfaces.  I want to install Solaris on it across the network, either Solaris 10 or Solaris 11.  I have met with a lot of customers lately who have a similar architecture.  Usually, they have T4-4 servers with the network connected via 10GbE connections. Add to this mix the fact that I use Ops Center to manage the systems in my lab, so I really would like to add this to Ops Center.  If possible, I would like this to be completely hands free.  I can't quite do that yet. Close, but not quite. WANBoot or Old-Style NetBoot? When a system is installed from the network, it needs some help getting the process rolling.  It has to figure out what its network configuration (IP address, gateway, etc.) ought to be.  It needs to figure out what server is going to help it boot and install, and it needs the instructions for the installation.  There are two different ways to bootstrap an installation of Solaris on SPARC across the network.   The old way uses a broadcast of RARP or more recently DHCP to obtain the IP configuration and the rest of the information needed.  The second is to explicitly configure this information in the OBP and use WANBoot for installation WANBoot has a number of benefits over broadcast-based installation: it is not restricted to a single subnet; it does not require special DHCP configuration or DHCP helpers; it uses standard HTTP and HTTPS protocols which traverse firewalls much more easily than NFS-based package installation.  But, WANBoot is not available on really old hardware and WANBoot requires the use o Flash Archives in Solaris 10.  Still, for many people, this is a great approach. As it turns out, WANBoot is necessary if you plan to install using a NIC on a card rather than a built-in NIC. Identifying Which Network Interface to Use One of the trickiest aspects to this process, and the one that actually requires manual intervention to set up, is identifying how the OBP and Solaris refer to the NIC that we want to use to boot.  The OBP already has device aliases configured for the built-in NICs called net, net0, net1, net2, net3.  The device alias net typically points to net0 so that when you issue the command  "boot net -v install", it uses net0 for the boot.  Our task is to figure out the network instance for the NIC we want to use.  We will need to get to the OBP console of the system we want to install in order to figure out what the network should be called.  I will presume you know how to get to the ok prompt.  Once there, we have to see what networks the OBP sees and identify which one is associated with our NIC using the OBP command show-nets. SunOS Release 5.11 Version 11.0 64-bit Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved. 
{4} ok banner Sun Fire T200, No Keyboard Copyright (c) 1998, 2010, Oracle and/or its affiliates. All rights reserved. OpenBoot 4.30.4.b, 32640 MB memory available, Serial #69057548. Ethernet address 0:14:4f:1d:bc:c, Host ID: 841dbc0c. {4} ok show-nets a) /pci@7c0/pci@0/pci@2/network@0,1 b) /pci@7c0/pci@0/pci@2/network@0 c) /pci@780/pci@0/pci@8/network@0,3 d) /pci@780/pci@0/pci@8/network@0,2 e) /pci@780/pci@0/pci@8/network@0,1 f) /pci@780/pci@0/pci@8/network@0 g) /pci@780/pci@0/pci@1/network@0,1 h) /pci@780/pci@0/pci@1/network@0 q) NO SELECTION Enter Selection, q to quit: d /pci@780/pci@0/pci@8/network@0,2 has been selected. Type ^Y ( Control-Y ) to insert it in the command line. e.g. ok nvalias mydev ^Y for creating devalias mydev for /pci@780/pci@0/pci@8/network@0,2 {4} ok devalias ... net3 /pci@7c0/pci@0/pci@2/network@0,1 net2 /pci@7c0/pci@0/pci@2/network@0 net1 /pci@780/pci@0/pci@1/network@0,1 net0 /pci@780/pci@0/pci@1/network@0 net /pci@780/pci@0/pci@1/network@0 ... name aliases By looking at the devalias and the show-nets output, we can see that our Quad-GbE card must be the device nodes starting with  /pci@780/pci@0/pci@8/network@0.  The cable for our network is plugged into the 3rd slot, so the device address for our network must be /pci@780/pci@0/pci@8/network@0,2. With that, we can create a device alias for our network interface.  Naming the device alias may take a little bit of trial and error, especially in Solaris 11 where the device alias seems to matter more with the new virtualized network stack. So far in my testing, since this is the "next" network interface to be used, I have found success in naming it net4, even though it's a NIC in the middle of a card that might, by rights, be called net6 (assuming the 0th interface on the card is the next interface identified by Solaris and this is the 3rd interface on the card).  So, we will call it net4.  We need to assign a device alias to it: {4} ok nvalias net4 /pci@780/pci@0/pci@8/network@0,2 {4} ok devalias net4 /pci@780/pci@0/pci@8/network@0,2 ... We also may need to have the MAC for this particular interface, so let's get it, too.  To do this, we go to the device and interrogate its properties. {4} ok cd /pci@780/pci@0/pci@8/network@0,2 {4} ok .properties assigned-addresses 82060210 00000000 03000000 00000000 01000000 82060218 00000000 00320000 00000000 00008000 82060220 00000000 00328000 00000000 00008000 82060230 00000000 00600000 00000000 00100000 local-mac-address 00 21 28 20 42 92 phy-type mif ... From this, we can see that the MAC for this interface is  00:21:28:20:42:92.  We will need this later. This is all we need to do at the OBP.  Now, we can configure Ops Center to use this interface. Network Boot in Solaris 10 Solaris 10 turns out to be a little simpler than Solaris 11 for this sort of a network boot.  Since WANBoot in Solaris 10 fetches a specified In order to install the system using Ops Center, it is necessary to create a OS Provisioning profile and its corresponding plan.  I am going to presume that you already know how to do this within Ops Center 12c and I will just cover the differences between a regular profile and a profile that can use an alternate interface. Create a OS Provisioning profile for Solaris 10 as usual.  However, when you specify the network resources for the primary network, click on the name of the NIC, probably GB_0, and rename it to GB_N/netN, where N is the instance number you used previously in creating the device alias.  This is where the trial and error may come into play.  
You may need to try a few instance numbers before you, the OBP, and Solaris all agree on the instance number.  Mark this as the boot network. For Solaris 10, you ought to be able to then apply the OS Provisioning profile to the server and it should install using that interface.  And if you put your cards in the same slots and plug the networks into the same NICs, this profile is reusable across multiple servers. Why This Works If you watch the console as Solaris boots during the OSP process, Ops Center is going to look for the device alias netN.  Since WANBoot requires a device alias called just net, Ops Center uses the value of your netN device alias and assigns that device to the net alias.  That means that boot net will automatically use this device.  Very cool!  Here's a trace from the console as Ops Center provisions a server: Sun Sun Fire T200, No KeyboardCopyright (c) 1998, 2010, Oracle and/or its affiliates. All rights reserved.OpenBoot 4.30.4.b, 32640 MB memory available, Serial #69057548.Ethernet address 0:14:4f:1d:bc:c, Host ID: 841dbc0c.auto-boot? =            false{0} ok  {0} ok printenv network-boot-argumentsnetwork-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi{0} ok {0} ok devalias net net                      /pci@780/pci@0/pci@1/network@0{0} ok devalias net4 net4                     /pci@780/pci@0/pci@8/network@0,2{0} ok devalias net /pci@780/pci@0/pci@8/network@0,2{0} ok setenv network-boot-arguments host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:8004/cgi-bin/wanboot-cginetwork-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:8004/cgi-bin/wanboot-cgi{0} ok {0} ok boot net - installBoot device: /pci@780/pci@0/pci@8/network@0,2  File and args: - install/pci@780/pci@0/pci@8/network@0,2: 1000 Mbps link up<time unavailable> wanboot info: WAN boot messages->console<time unavailable> wanboot info: configuring /pci@780/pci@0/pci@8/network@0,2 See what happened?  Ops Center looked for the network device alias called net4 that we specified in the profile, took the value from it, and made it the net device alias for the boot.  Pretty cool! WANBoot and Solaris 11 Solaris 11 requires an additional step since the Automated Installer in Solaris 11 uses the MAC address of the network to figure out which manifest to use for system installation.  In order to make sure this is available, we have to take an extra step to associate the MAC of the NIC on the card with the host.  So, in addition to creating the device alias like we did above, we also have to declare to Ops Center that the host has this new MAC. Declaring the NIC Start out by discovering the hardware as usual.  Once you have discovered it, take a look under the Connectivity tab to see what networks it has discovered.  In the case of this system, it shows the 4 built-in networks, but not the networks on the additional cards.  These are not directly visible to the system controller.  In order to add the additional network interface to the hardware asset, it is necessary to Declare it.  We will declare that we have a server with this additional NIC, but we will also  specify the existing GB_0 network so that Ops Center can associate the right resources together.  
The GB_0 acts as sort of a key to tie our new declaration to the old system already discovered.  Go to the Assets tab, select All Assets, and then in the Actions tab, select Add Asset.  Rather than going through a discovery this time, we will manually declare a new asset. When we declare it, we will give the hostname, IP address, system model that match those that have already been discovered.  Then, we will declare both GB_0 with its existing MAC and the new GB_4 with its MAC.  Remember that we collected the MAC for GB_4 when we created its device alias. After you declare the asset, you will see the new NIC in the connectivity tab for the asset.  You will notice that only the NICs you listed when you declared it are seen now.  If you want Ops Center to see all of the existing NICs as well as the additional one, declare them as well.  Add the other GB_1, GB_2, GB_3 links and their MACs just as you did GB_0 and GB_4.  Installing the OS  Once you have declared the asset, you can create an OS Provisioning profile for Solaris 11 in the same way that you did for Solaris 10.  The only difference from any other provisioning profile you might have created already is the network to use for installation.  Again, use GB_N/netN where N is the interface number you used for your device alias and in your declaration.  And away you go.  When the system boots from the network, the automated installer (AI) is able to see which system manifest to use, based on the new MAC that was associated, and the system gets installed. {0} ok {0} ok printenv network-boot-argumentsnetwork-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi{0} ok {0} ok devalias net net                      /pci@780/pci@0/pci@1/network@0{0} ok devalias net4 net4                     /pci@780/pci@0/pci@8/network@0,2{0} ok devalias net /pci@780/pci@0/pci@8/network@0,2{0} ok setenv network-boot-arguments host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cginetwork-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi{0} ok {0} ok boot net - installBoot device: /pci@780/pci@0/pci@8/network@0,2  File and args: - install/pci@780/pci@0/pci@8/network@0,2: 1000 Mbps link up<time unavailable> wanboot info: WAN boot messages->console<time unavailable> wanboot info: configuring /pci@780/pci@0/pci@8/network@0,2...SunOS Release 5.11 Version 11.0 64-bitCopyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.Remounting root read/writeProbing for device nodes ...Preparing network image for useDownloading solaris.zlib--2012-02-17 15:10:17--  http://10.140.204.22:5555/var/js/AI/sparc//solaris.zlibConnecting to 10.140.204.22:5555... connected.HTTP request sent, awaiting response... 200 OKLength: 126752256 (121M) [text/plain]Saving to: `/tmp/solaris.zlib'100%[======================================>] 126,752,256 28.6M/s   in 4.4s    2012-02-17 15:10:21 (27.3 MB/s) - `/tmp/solaris.zlib' saved [126752256/126752256] Conclusion So, why go to all of this trouble?  More and more, I find that customers are wiring their data center to only use higher speed networks - 10GbE only to the hosts.  
Some customers are moving aggressively toward consolidated networks, combining storage and network traffic on CNA NICs.  All of this means that network-based provisioning cannot rely exclusively on the built-in network interfaces.  So, it's important to be able to provision a system using interfaces other than the built-in ones.  It turns out that this is pretty straightforward for both Solaris 10 and Solaris 11, and it fits into the Ops Center deployment process quite nicely. Hopefully, you will be able to use this as you build out your own private cloud solutions with Ops Center.
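    For quick reference, the OBP steps above boil down to the following sequence (the device path, alias name, and values shown are the example values from this post; substitute whatever show-nets and .properties report on your own system):

        {0} ok show-nets                                        \ list the network device nodes the OBP can see
        {0} ok nvalias net4 /pci@780/pci@0/pci@8/network@0,2    \ persistent alias for the port the cable is plugged into
        {0} ok devalias net4                                    \ confirm the alias was created
        {0} ok cd /pci@780/pci@0/pci@8/network@0,2
        {0} ok .properties                                      \ note local-mac-address (needed later for Solaris 11 / AI)

    With the alias in place, the GB_N/netN naming in the OS Provisioning profile is the only other piece that has to line up.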

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to: · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data) · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog) The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel, rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include: · Global Transaction Identifiers coupled with MySQL utilities for automatic failover / switchover and slave promotion · Crash Safe Slaves and Binlog · Optimized Row Based Replication · Replication Event Checksums · Time Delayed Replication These and many more are discussed in the “MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services” Developer Zone article.  Back to the benchmark - details are as follows. Environment The test environment consisted of two Linux servers: · one running the replication master · one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration: - Hardware: Oracle Sun Fire X4170 M2 Server - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz. - OS: 64-bit Oracle Enterprise Linux 6.1 - Memory: 48 GB Test Procedure Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table: CREATE TABLE `sbtest` (    `id` int(10) unsigned NOT NULL AUTO_INCREMENT,    `k` int(10) unsigned NOT NULL DEFAULT '0',    `c` char(120) NOT NULL DEFAULT '',    `pad` char(60) NOT NULL DEFAULT '',    PRIMARY KEY (`id`),    KEY `k` (`k`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master. 
Each sysbench client executed 10,000 "update key" statements: UPDATE sbtest set k=k+1 WHERE id = <random row> In total, this generated 100,000 update statements to later replicate during the test itself. Test Methodology: The number of slave workers to test with was configured using: SET GLOBAL slave_parallel_workers=<workers> Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS). Test Reset: The test-reset cycle was implemented as follows: · the slave was stopped · the slave data directory replaced with the previous backup · the slave restarted with the slave threads replication pointer repositioned to the point before the update queries in the binlog. The test could then be repeated with identical set of queries but a different number of slave worker threads, enabling a fair comparison. The Test-Reset cycle was repeated 3 times for 0-24 number of workers and the QPS metric calculated and averaged for each worker count. MySQL Configuration The relevant configuration settings used for MySQL are as follows: binlog-format=STATEMENT relay-log-info-repository=TABLE master-info-repository=TABLE As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is: 0 worker threads:    - current (i.e. single threaded) sequential mode    - 1 x IO thread and 1 x SQL thread    - SQL thread both reads and executes the events 1 worker thread:    - sequential mode    - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread    - coordinator reads the event and hands it to the worker who executes 2+ worker threads:    - parallel execution    - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads    - coordinator reads events and hands them to the workers who execute them Results Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 x schemas. This result is compared to the current replication implementation which is based on a single SQL thread only (i.e. zero worker threads). Figure 1: 5x Higher Performance with Multi-Threaded Slaves The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post. Figure 2: Detailed Results As the results above show, the configuration does not scale noticably from 5 to 9 worker threads. When configured with 10 worker threads however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results: · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution. · As expected, having more workers than schemas adds no visible benefit. 
Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance: relay-log-info-repository=TABLE master-info-repository=TABLE For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE. Conclusion As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6, are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don't hesitate to comment on this or other replication blogs with feedback and questions. Appendix – Detailed Results
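    To make the measurement cycle above concrete, the per-run SQL on the slave looks roughly like the sketch below. The statements are standard MySQL 5.6 syntax; the binlog file name and position are placeholders for the values recorded when the slave was stopped after the initial load:

        -- run once per worker-count setting, after restoring the slave data directory
        STOP SLAVE;
        SET GLOBAL slave_parallel_workers = 10;    -- swept from 0 to 24 in the tests above

        START SLAVE IO_THREAD;                     -- copy the 100k updates into the relay log, then wait

        -- benchmark clock starts here
        START SLAVE SQL_THREAD;
        SELECT MASTER_POS_WAIT('mysql-bin.000003', 120000000);   -- placeholder file/position from the master
        -- clock stops when MASTER_POS_WAIT() returns; QPS = 100,000 / elapsed seconds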

    Read the article

  • Algorithm to Find the Aggregate Mass of "Granola Bar"-Like Structures?

    - by Stuart Robbins
    I'm a planetary science researcher and one project I'm working on is N-body simulations of Saturn's rings. The goal of this particular study is to watch as particles clump together under their own self-gravity and measure the aggregate mass of the clumps versus the mean velocity of all particles in the cell. We're trying to figure out if this can explain some observations made by the Cassini spacecraft during the Saturnian summer solstice when large structures were seen casting shadows on the nearly edge-on rings. Below is a screenshot of what any given timestep looks like. (Each particle is 2 m in diameter and the simulation cell itself is around 700 m across.) The code I'm using already spits out the mean velocity at every timestep. What I need to do is figure out a way to determine the mass of particles in the clumps and NOT the stray particles between them. I know every particle's position, mass, size, etc., but I don't know easily that, say, particles 30,000-40,000 along with 102,000-105,000 make up one strand that to the human eye is obvious. So, the algorithm I need to write would need to be a code with as few user-entered parameters as possible (for replicability and objectivity) that would go through all the particle positions, figure out what particles belong to clumps, and then calculate the mass. It would be great if it could do it for "each" clump/strand as opposed to everything over the cell, but I don't think I actually need it to separate them out. The only thing I was thinking of was doing some sort of N2 distance calculation where I'd calculate the distance between every particle and if, say, the closest 100 particles were within a certain distance, then that particle would be considered part of a cluster. But that seems pretty sloppy and I was hoping that you CS folks and programmers might know of a more elegant solution? Edited with My Solution: What I did was to take a sort of nearest-neighbor / cluster approach and do the quick-n-dirty N2 implementation first. So, take every particle, calculate distance to all other particles, and the threshold for in a cluster or not was whether there were N particles within d distance (two parameters that have to be set a priori, unfortunately, but as was said by some responses/comments, I wasn't going to get away with not having some of those). I then sped it up by not sorting distances but simply doing an order N search and increment a counter for the particles within d, and that sped stuff up by a factor of 6. Then I added a "stupid programmer's tree" (because I know next to nothing about tree codes). I divide up the simulation cell into a set number of grids (best results when grid size ˜7 d) where the main grid lines up with the cell, one grid is offset by half in x and y, and the other two are offset by 1/4 in ±x and ±y. The code then divides particles into the grids, then each particle N only has to have distances calculated to the other particles in that cell. Theoretically, if this were a real tree, I should get order N*log(N) as opposed to N2 speeds. I got somewhere between the two, where for a 50,000-particle sub-set I got a 17x increase in speed, and for a 150,000-particle cell, I got a 38x increase in speed. 12 seconds for the first, 53 seconds for the second, 460 seconds for a 500,000-particle cell. Those are comparable speeds to how long the code takes to run the simulation 1 timestep forward, so that's reasonable at this point. Oh -- and it's fully threaded, so it'll take as many processors as I can throw at it.
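    For readers who want to see the shape of that neighbour test, here is a rough Python sketch of the idea. It is not the author's code: it uses a single cell list with cells of size d (rather than the offset grids described above), and it simply flags a particle as clumped when at least n_min other particles lie within distance d of it:

        import numpy as np
        from collections import defaultdict

        def clumped_mass(pos, mass, d, n_min):
            """pos: (N, 2) array of positions; mass: (N,) array.
            Returns the total mass of particles with >= n_min neighbours within d."""
            # bucket every particle into a grid cell of side d
            cells = defaultdict(list)
            for i, p in enumerate(pos):
                cells[(int(p[0] // d), int(p[1] // d))].append(i)

            total = 0.0
            for i, p in enumerate(pos):
                cx, cy = int(p[0] // d), int(p[1] // d)
                count = 0
                # neighbours within d can only live in the 3x3 block of cells around the particle
                for gx in range(cx - 1, cx + 2):
                    for gy in range(cy - 1, cy + 2):
                        for j in cells.get((gx, gy), ()):
                            if j != i and np.hypot(*(pos[j] - p)) <= d:
                                count += 1
                if count >= n_min:
                    total += mass[i]
            return total

    This is essentially a fixed-radius friends-of-friends pass; a k-d tree (for example scipy.spatial.cKDTree with query_ball_point) gives the N log N behaviour mentioned above with less bookkeeping, and still only needs the two parameters d and n_min.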

    Read the article

  • Cox Communications' Strategic Approach to Enterprise User Experience: How Change Management and Usab

    - by Applications User Experience
    Author: Anna Wichansky, Senior Director, Applications User Experience, and Chair, Oracle Usability Advisory Board As part of our work in the User Experience group, our teams often go to Customer events such as the Higher Education User Group (HEUG) conference, Alliance 2010. This year's event was held in San Antonio, Texas, and was attended by hundreds of higher education, government, and public sector users of Oracle applications. The User Assistance team used this opportunity to reach out to customers in the Educational and Government sectors to better understand how their organizations are currently approaching help, messages, and other forms of user assistance. What is User Assistance? For us, user assistance is more than the old books of users' manuals and documentation. User assistance is anything that helps users get their jobs done quickly and efficiently. Instead of expecting users to stop and look through a guide or manual, we have been developing solutions that are embedded within the interface. We know that when people are having difficulty with a task, they want to be able to search efficiently for solutions and collaborate with coworkers. We know that they want to find their answers right there, right then, so that they can get on with their work. In our interviews at Alliance, we wanted to learn what the participants could tell us about what was happening on their campuses and in their institutions. Figure 1. For Oracle User Assistance, it's not just about books any more. So what did we do? Off to Texas, we recruited 10 people from nine different government and education organizations to come to our Oracle User Experience Onsite Usability Labs. We conducted one-hour interviews with these folks and asked them all about User Assistance--what people are doing, what they would like to do, what technologies they are using, what they would like to use, and ultimately what should we as a company be planning for our future products. We used this as an opportunity also to show them some of our design concepts for Fusion User Assistance, our next generation of user assistance based on the best of our user assistance in other products. Figure 2. Interviewing a technical user at Alliance. What we learned... People are not using paper or online manuals anymore. They don't want to see a manual that is written for technical users and that doesn't make sense to the ordinary end user. They really don't want to have to flip through a manual trying to find an answer to their question. Even when the answer might be tailored to their organization, they don't want to dig through documentation. When they need an answer now, they don't have the patience to dig for something that might or might not be clearly written. What does it mean to an organization when users don't want to deal with documentation? In many cases, it means that frustrated users make phone calls to try to find the answers that they need immediately. Phone calls are expensive to an organization and frustrating to the technical support staff who have provided documentation that no one wants to read anymore. If they don't call, they email for help often, and many users are asking for the same information. The bottom line is that if they could get that help immediately in the interface, they wouldn't have to make those calls or send those emails -- and that saves time and money. 
Our Fusion User Assistance options to customize help and get help for the task immediately were seen as an opportunity by these technical users to build the solutions that their users need and want. Figure 3. Joyce Ohgi and Laurie Pattison of Applications UX. Chicken Fried Steak. That was huge. But then, this was Texas, where we discovered a lot of things come very big. Drinks are served in quart-size glasses and dishes like Chicken Fried Steaks are served on platters not plates. We saw three-pound cinnamon rolls that you down with tea sweet enough to curl your hair. Deep in the heart of Texas, we learned a lot, and we ate even more.

    Read the article

  • Tip on Reusing Classes in Different .NET Project Types

    - by psheriff
    All of us have class libraries that we developed for use in our projects. When you create a .NET Class Library project with many classes, you can use that DLL in ASP.NET, Windows Forms and WPF applications. However, for Silverlight and Windows Phone, these .NET Class Libraries cannot be used. The reason is Silverlight and Windows Phone both use a scaled down version of .NET and thus do not have access to the full .NET framework class library. However, there are many classes and functionality that will work in the full .NET and in the scaled down versions that Silverlight and Windows Phone use.Let’s take an example of a class that you might want to use in all of the above mentioned projects. The code listing shown below might be something that you have in a Windows Form or an ASP.NET application. public class StringCommon{  public static bool IsAllLowerCase(string value)  {    return new Regex(@"^([^A-Z])+$").IsMatch(value);  }   public static bool IsAllUpperCase(string value)  {    return new Regex(@"^([^a-z])+$").IsMatch(value);  }} The StringCommon class is very simple with just two methods, but you know that the System.Text.RegularExpressions namespace is available in Silverlight and Windows Phone. Thus, you know that you may reuse this class in your Silverlight and Windows Phone projects. Here is the problem: if you create a Silverlight Class Library project and you right-click on that project in Solution Explorer and choose Add | Add Existing Item… from the menu, the class file StringCommon.cs will be copied from the original location and placed into the Silverlight Class Library project. You now have two files with the same code. If you want to change the code you will now need to change it in two places! This is a maintenance nightmare that you have just created. If you then add this to a Windows Phone Class Library project, you now have three places you need to modify the code! Add As LinkInstead of creating three separate copies of the same class file, you want to leave the original class file in its original location and just create a link to that file from the Silverlight and Windows Phone class libraries. Visual Studio will allow you to do this, but you need to do one additional step in the Add Existing Item dialog (see Figure 1). You will still right mouse click on the project and choose Add | Add Existing Item… from the menu. You will still highlight the file you want to add to your project, but DO NOT click on the Add button. Instead click on the drop down portion of the Add button and choose the “Add As Link” menu item. This will now create a link to the file on disk and will not copy the file into your new project. Figure 1: Add as Link will create a link, not copy the file over. When this linked file is added to your project, there will be a different icon next to that file in the Solution Explorer window. This icon signifies that this is a link to a file in another folder on your hard drive.   Figure 2: The Linked file will have a different icon to show it is a link. Of course, if you have code that will not work in Silverlight or Windows Phone -- because the code has dependencies on features of .NET that are not supported on those platforms – you  can always wrap conditional compilation code around the offending code so it will be removed when compiled in those class libraries. SummaryIn this short blog entry you learned how to reuse one of your class libraries from ASP.NET, Windows Forms or WPF applications in your Silverlight or Windows Phone class libraries. 
You can do this without creating a maintenance nightmare by using the “Add As Link” feature of the Add Existing Item dialog. Good Luck with your Coding, Paul Sheriff ** SPECIAL OFFER FOR MY BLOG READERS ** Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled Silverlight XAML for the Complete Novice - Part 1.
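    For anyone who prefers editing the project file directly, Add As Link just records a relative path plus a Link metadata element in the target .csproj; a hand-written equivalent looks roughly like this (the relative path here is illustrative):

        <ItemGroup>
          <!-- compiles the original file in place; only the Link name belongs to this project -->
          <Compile Include="..\MyClassLibrary\StringCommon.cs">
            <Link>StringCommon.cs</Link>
          </Compile>
        </ItemGroup>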

    Read the article

  • Problem with creating a deterministic finite automata (DFA) - Mercury

    - by Jabba The hut
    I would like to simulate a deterministic finite automaton (DFA) in Mercury, but I'm stuck in several places. Formally, a DFA is described by the following components: a set of states S, an input alphabet Σ, a transition function δ : S × Σ → S, a start state s ∈ S, and a set of accepting final states F ⊆ S. A DFA always starts in the start state. Then the DFA reads the characters on the input, one by one. Based on the current input character and the current state, a transition is made to a new state. These transitions are defined in the transition function. If the DFA is in one of its accepting final states after reading the last character, the input is accepted; if not, the input is rejected. The figure shows a DFA accepting the strings in which the number of zeros is a multiple of three. State 1 is the initial state, and also the only accepting state. For each input character the corresponding arc is followed to the next state. Link to Figure What must be done A type "mystate" which represents a state. Each state has a number which is used for identification. A type "transition" that represents a possible transition between states. Each transition has a source_state, an input_character, and a final_state. A type "statemachine" that represents the entire DFA. In the solution, the DFA must have the following properties: the set of all states, the input alphabet, a transition function represented as a set of possible transitions, a set of accepting final states, and the current state of the DFA. A predicate "init_machine(statemachine :: out)" which unifies its argument with the DFA, as shown in the figure. The current state of the DFA is set to its initial state, namely 1. The input alphabet of the DFA is composed of the characters '0' and '1'. A user can enter text, which will be checked by the DFA. The program continues until the user types Ctrl-D, which simulates an EOF. If the user enters characters that are not in the input alphabet of the DFA, an error message is printed and the program closes. (pred require) Example Enter a sentence: 0110 String is not ok! Enter a sentence: 011101 String is not ok! Enter a sentence: 110100 String is ok! Enter a sentence: 000110010 String is ok! Enter a sentence: 011102 Uncaught exception Mercury: Software Error: Character does not belong to the input alphabet! What I have so far: :- module dfa. :- interface. :- import_module io. :- pred main(io.state::di, io.state::uo) is det. :- implementation. :- import_module int,string,list,bool. 1 :- type mystate ---> state(int). 2 :- type transition ---> trans(source_state::mystate, input_character::bool, final_state::mystate). 3 (error, final_state and current_state and input_character) :- type statemachine ---> dfa(list(mystate),list(input_character),list(transition),list(final_state),current_state(mystate)) 4 missing a lot :- pred init_machine(statemachine :: out) is det. %init_machine(statemachine(L_Mystate,0,L_transition,L_final_state,1)) :- <- probably wrong 5 not perfect main(!IO) :- io.write_string("\nEnter a sentence: ", !IO), io.read_line_as_string(Input, !IO), ( Input = ok(StringVar), S1 = string.strip(StringVar), (if S1 = "mustbeabool" then io.write_string("Sentence is Ok! ", !IO) else io.write_string("Sentence is not Ok!.", !IO)), main(!IO) ; Input = eof ; Input = error(ErrorCode), io.format("%s\n", [s(io.error_message(ErrorCode))], !IO) ). Hope you can help me. Kind regards

    Read the article

  • I.T. Chargeback : Core to Cloud Computing

    - by Anand Akela
    Contributed by Mark McGill Consolidation and Virtualization have been widely adopted over the years to help deliver benefits such as increased server utilization, greater agility and lower cost to the I.T. organization. These are key enablers of cloud, but in themselves they do not provide a complete cloud solution. Building a true enterprise private cloud involves moving from an admin driven world, where the I.T. department is ultimately responsible for the provisioning of servers, databases, middleware and applications, to a world where the consumers of I.T. resources can provision their infrastructure, platforms and even complete application stacks on demand. Switching from an admin-driven provisioning model to a user-driven model creates some challenges. How do you ensure that users provisioning resources will not provision more than they need? How do you encourage users to return resources when they have finished with them so that others can use them? While chargeback has existed as a concept for many years (especially in mainframe environments), it is the move to this self-service model that has created a need for a new breed of chargeback applications for cloud. Enabling self-service without some form of chargeback is like opening a shop where all of the goods are free. A successful chargeback solution will be able to allocate the costs of shared I.T. infrastructure based on the relative consumption by the users. Doing this creates transparency between the I.T. department and the consumers of I.T. When users are able to understand how their consumption translates to cost they are much more likely to be prudent when it comes to their use of I.T. resources. This also gives them control of their I.T. costs, as moderate usage will translate to a lower charge at the end of the month. Implementing Chargeback successfully create a win-win situation for I.T. and the consumers. Chargeback can help to ensure that I.T. resources are used for activities that deliver business value. It also improves the overall utilization of I.T. infrastructure as I.T. resources that are not needed are not left running idle. Enterprise Manager 12c provides an integrated metering and chargeback solution for Enterprise Manager Targets. This solution is built on top of the rich configuration and utilization information already available in Enterprise Manager. It provides metering not just for virtual machines, but also for physical hosts, databases and middleware. Enterprise Manager 12c provides metering based on the utilization and configuration of the following types of Enterprise Manager Target: Oracle VM Host Oracle Database Oracle WebLogic Server Using Enterprise Manager Chargeback, administrators are able to create a set of Charge Plans that are used to attach prices to the various metered resources. These plans can contain fixed costs (eg. $10/month/database), configuration based costs (eg. $10/month if OS is Windows) and utilization based costs (eg. $0.05/GB of Memory/hour) The self-service user provisioning these resources is then able to view a report that details their usage and helps them understand how this usage translates into cost. Armed with this information, the user is able to determine if the resources are delivering adequate business value based on what is being charged. Figure 1: Chargeback in Self-Service Portal Enterprise Manager 12c provides a variety of additional interfaces into this data. The administrator can access summary and trending reports. 
Summary reports allow the administrator to drill-down through the cost center hierarchy to identify, for example, the top resource consumers across the organization. Figure 2: Charge Summary Report Trending reports can be used for I.T. planning and budgeting as they show utilization and charge trends over a period of time. Figure 3: CPU Trend Report We also provide chargeback reports through BI Publisher. This provides a way for users who do not have an Enterprise Manager login (such as Line of Business managers) to view charge and usage information. For situations where a bill needs to be produced, chargeback can be integrated with billing applications such as Oracle Billing and Revenue Management (BRM). Further information on Enterprise Manager 12c’s integrated metering and chargeback: White Paper Screenwatch Cloud Management on OTN

    Read the article

  • Pass object to dynamically loaded ListView ItemTemplate

    - by mickyjtwin
    I have been following the following post on using multiple ItemTemplates in a ListView control. While following this example does produce output, I am trying to figure out how to pass an object through to the ItemTemplate's user control, which I do not seem to be able to do. protected void lvwComments_OnItemCreated(object sender, ListViewItemEventArgs e) { ListViewDataItem currentItem = (e.Item as ListViewDataItem); Comment comment = (Comment)currentItem.DataItem; if (comment == null) return; string controlPath = string.Empty; switch (comment.Type) { case CommentType.User: controlPath = "~/layouts/controls/General Comment.ascx"; break; case CommentType.Official: controlPath = "~/layouts/controls/Official Comment.ascx"; break; } lvwComments.ItemTemplate = LoadTemplate(controlPath); } The User Control is as follows: public partial class OfficialComment : UserControl { protected void Page_Load(object sender, EventArgs e) { } } In the example, the values are being output in the ascx page: <%# Eval("ItemName") %>; however, I need to access the ListItem in this control to do other logic. I cannot figure out how to send my Comment item through. The sender object and EventArgs do not contain the info.
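    One way to get at the data item from inside the dynamically loaded control is to read it off the containing ListViewDataItem while the item is being data-bound. The sketch below is not from the referenced post; it assumes the .ascx is instantiated directly inside the item by LoadTemplate, so the control's NamingContainer is the ListViewDataItem itself:

        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public partial class OfficialComment : UserControl
        {
            protected override void OnDataBinding(EventArgs e)
            {
                base.OnDataBinding(e);

                // the naming container of a template-instantiated control is the item being bound
                ListViewDataItem item = NamingContainer as ListViewDataItem;
                Comment comment = (item == null) ? null : item.DataItem as Comment;
                if (comment != null)
                {
                    // push values from comment into the control's child controls here
                }
            }
        }

    A databinding expression in the .ascx markup can reach the same item via Page.GetDataItem(), which avoids touching the page's code-behind at all.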

    Read the article

  • .htaccess - RewriteRules working, but browser address bar displaying full (unfriendly) URL

    - by axol
    Hey there, Haven't been able to find a solution to this around the net or these forums - apologise if I've missed something! My .htaccess RewriteRules are working well - have search-engine and user -friendly links in my web pages, and unfriendly database URLs running hidden in the background. Except when I added a RewriteRule to add "www." to the front of URLs if the user didn't enter it - to ensure only one appears in search engines. Here's what now happening, and I can't figure out why! My friendly URL structure for content is like this, and the database query string uses the first "importantword": www.example.com/importantword-nonimportantword/ .htaccess snippet: Options +FollowSymLinks Options -Indexes RewriteEngine on RewriteOptions MaxRedirects=10 RewriteBase / RewriteRule ^/$ index.php [L] RewriteRule ^(.*)-(.*)/overview/$ detail.php?categoryID=$1 [L] RewriteCond %{HTTP_HOST} !^www.example.com$ RewriteRule ^(.*)$ http://www.example.com/$1 [L] What's happening since I added the last 2 lines: CASE 1: user types (or clicks) www.example.com/honda-vehicle/overview/ - Works correctly - They are taken to the correct page and the browser URL bar says: www.example.com/honda-vehicles/overview/ CASE 2: user types example.com - Works correctly - They are taken to www.example.com and the browser URL bar says: www.example.com CASE 3: user types (or clicks) example.com/honda-vehicles/overview/ i.e. without the prefix "www" - Does NOT work correctly - They are taken to the right page, but the browser URL bar displays the unfriendly URL: www.example.com/detail.php?categoryID=honda I figure there's some issue with the order of the RewriteRules, but it's doing my head in trying to logically step through it and figure it out! Any assistance at all or pointers would be most appreciated!
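    The usual explanation for case 3 is rule order: the friendly-URL rule matches first and internally rewrites the request to detail.php, and only afterwards does the host check fire, so the redirect it triggers exposes the rewritten URL. One possible fix, sketched from the rules quoted above, is to do the canonical-host redirect first and make it an explicit 301:

        Options +FollowSymLinks
        Options -Indexes
        RewriteEngine on
        RewriteBase /

        # 1. canonical host first, as an external 301 redirect
        RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

        # 2. then the internal friendly-URL rewrites
        RewriteRule ^$ index.php [L]
        RewriteRule ^(.*)-(.*)/overview/$ detail.php?categoryID=$1 [L]

    (The home-page rule uses ^$ rather than ^/$ because per-directory rewrites in .htaccess see the path with the leading slash already stripped.)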

    Read the article

  • MATLAB Builder NE crash apppool on IIS 7.5

    - by Alkersan
    I'm developing a web user interface for MATLAB functions with ASP.NET. I've started by studying the demos and am stuck on the following problem. I created a MyComponent.dll assembly with deploytool from MATLAB 2010a, target framework 3.5. This component has one function, GetKnot(), which returns a figure. function df = getKnot() f = figure('Visible', 'off'); knot; df = webfigure(f); close(f); end Then I made a simple web app in Visual Studio 2008 SP1, with only one page, Default.aspx. I added references to MWArray.dll, WebFiguresService.dll and MyComponent.dll. The code-behind is: using System; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using MyComponent; using MathWorks.MATLAB.NET.WebFigures; namespace MATLAB_WebApplication { public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { var myComponentClass = new MyComponentClass(); var x = myComponentClass.getKnot(); //WebFigureControl1.WebFigure = new WebFigure(); } } } When I run this page on Visual Studio's Development web server, everything is fine and the figure works. But when I try to deploy the webfigure on my local IIS 7.5, which runs on Win7 x32, the IIS app pool crashes. There is an entry in the System Event Log: "A process serving application pool 'Classic .NET AppPool' suffered a fatal communication error with the Windows Process Activation Service. The process id was '3676'. The data field contains the error number 6D000780". This happens when MyComponent is being instantiated. What could I have forgotten when moving to IIS? Other examples, like the magic square console application, run perfectly, and every MATLAB component instantiates fine, just not in the IIS environment.

    Read the article

  • javascript: detect if XP or Classic windows theme is enabled

    - by mkoryak
    Is there any way to detect which windows XP theme is in use? I suspect that there is no specific api call you can make, but you may be able to figure it out by checking something on some DOM element, ie feature detection. Another question: does the classic theme even exist on windows vista or windows 7? edit - this is my solution: function isXpTheme() { var rgb; var map = { "rgb(212,208,200)" : false, "rgb(236,233,216)" : true }; var $elem = $("<button>"); $elem.css("backgroundColor", "ButtonFace"); $("body").append($elem); var elem = $elem.get(0); if (document.defaultView && document.defaultView.getComputedStyle) { s = document.defaultView.getComputedStyle(elem, ""); rgb = s && s.getPropertyValue("background-color"); } else if (elem.currentStyle) { rgb = (function (el) { // get a rgb based color on IE var oRG =document.body.createTextRange(); oRG.moveToElementText(el); var iClr=oRG.queryCommandValue("BackColor"); return "rgb("+(iClr & 0xFF)+","+((iClr & 0xFF00)>>8)+","+ ((iClr & 0xFF0000)>>16)+")"; })(elem); } else if (elem.style["backgroundColor"]) { rgb = elem.style["backgroundColor"]; } else { rgb = null; } $elem.remove(); rgb = rgb.replace(/[ ]+/g,"") if(rgb){; return map[rgb]; } } Next step is to figure out what this function returns on non-xp machines and/or figure out how to detect windows boxes. I have tested this in windows XP only, so vista and windows 7 might give different color values, it should be easy to add though. Here is a demo page of this in action: http://programmingdrunk.com/current-projects/isXpTheme/

    Read the article

  • How to have Android Service communicate with Activity

    - by Scott Saunders
    I'm writing my first Android application and trying to get my head around communication between services and activities. I have a Service that will run in the background and do some gps and time based logging. I will have an Activity that will be used to start and stop the Service. So first, I need to be able to figure out if the Service is running when the Activity is started. There are some other questions here about that, so I think I can figure that out (but feel free to offer advice). My real problem: if the Activity is running and the Service is started, I need a way for the Service to send messages to the Activity. Simple Strings and integers at this point - status messages mostly. The messages will not happen regularly, so I don't think polling the service is a good way to go if there is another way. I only want this communication when the Activity has been started by the user - I don't want to start the Activity from the Service. In other words, if you start the Activity and the Service is running, you will see some status messages in the Activity UI when something interesting happens. If you don't start the Activity, you will not see these messages (they're not that interesting). It seems like I should be able to determine if the Service is running, and if so, add the Activity as a listener. Then remove the Activity as a listener when the Activity pauses or stops. Is that actually possible? The only way I can figure out to do it is to have the Activity implement Parcelable and build an AIDL file so I can pass it through the Service's remote interface. That seems like overkill though, and I have no idea how the Activity should implement writeToParcel() / readFromParcel(). Is there an easier or better way? Thanks for any help.
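    Since everything runs in the same process, one common pattern that fits the requirements above is a locally bound service exposing a plain listener interface; no AIDL or Parcelable is needed. The names below are invented for the sketch and the GPS/logging details are omitted:

        // TrackingService.java
        import android.app.Service;
        import android.content.Intent;
        import android.os.Binder;
        import android.os.IBinder;

        public class TrackingService extends Service {
            public interface StatusListener { void onStatus(String message); }

            public class LocalBinder extends Binder {
                public TrackingService getService() { return TrackingService.this; }
            }

            private final IBinder binder = new LocalBinder();
            private StatusListener listener;

            @Override public IBinder onBind(Intent intent) { return binder; }

            public void setStatusListener(StatusListener l) { listener = l; }

            private void report(String message) {
                if (listener != null) listener.onStatus(message);   // silently dropped when no Activity is attached
            }
        }

        // StatusActivity.java
        import android.app.Activity;
        import android.content.ComponentName;
        import android.content.Intent;
        import android.content.ServiceConnection;
        import android.os.IBinder;

        public class StatusActivity extends Activity implements TrackingService.StatusListener {
            private TrackingService service;

            private final ServiceConnection conn = new ServiceConnection() {
                public void onServiceConnected(ComponentName name, IBinder binder) {
                    service = ((TrackingService.LocalBinder) binder).getService();
                    service.setStatusListener(StatusActivity.this);
                }
                public void onServiceDisconnected(ComponentName name) { service = null; }
            };

            @Override protected void onStart() {
                super.onStart();
                bindService(new Intent(this, TrackingService.class), conn, 0);  // flags 0: don't auto-create the service
            }

            @Override protected void onStop() {
                if (service != null) service.setStatusListener(null);
                unbindService(conn);
                super.onStop();
            }

            public void onStatus(String message) {
                // update the UI; use runOnUiThread(...) if the service reports from a background thread
            }
        }

    Binding with flags of 0 means onServiceConnected only fires while the service is actually running, which matches the "status only while the Activity is open" behaviour; detecting at startup whether the logging service is running can still be handled separately as planned.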

    Read the article

  • Matplotlib canvas drawing

    - by Morgoth
    Let's say I define a few functions to do certain matplotlib actions, such as def dostuff(ax): ax.scatter([0.],[0.]) Now if I launch ipython, I can load these functions and start a new figure: In [1]: import matplotlib.pyplot as mpl In [2]: fig = mpl.figure() In [3]: ax = fig.add_subplot(1,1,1) In [4]: run functions # run the file with the above defined function If I now call dostuff, then the figure does not refresh: In [6]: dostuff(ax) I have to then explicitly run: In [7]: fig.canvas.draw() To get the canvas to draw. Now I can modify dostuff to be def dostuff(ax): ax.scatter([0.],[0.]) ax.get_figure().canvas.draw() This re-draws the canvas automatically. But now, say that I have the following code: def dostuff1(ax): ax.scatter([0.],[0.]) ax.get_figure().canvas.draw() def dostuff2(ax): ax.scatter([1.],[1.]) ax.get_figure().canvas.draw() def doboth(ax): dostuff1(ax) dostuff2(ax) ax.get_figure().canvas.draw() I can call each of these functions, and the canvas will be redrawn, but in the case of doboth(), it will get redrawn multiple times. My question is: how could I code this, such that the canvas.draw() only gets called once? In the above example it won't change much, but in more complex cases with tens of functions that can be called individually or grouped, the repeated drawing is much more obvious, and it would be nice to be able to avoid it. I thought of using decorators, but it doesn't look as though it would be simple. Any ideas?
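    One way to keep the helper functions draw-free and still get exactly one redraw per group is to hoist the canvas.draw() into a small wrapper, for example a context manager; a minimal sketch:

        from contextlib import contextmanager
        import matplotlib.pyplot as plt

        @contextmanager
        def drawing(ax):
            """Group plotting calls and redraw the canvas exactly once at the end."""
            yield ax
            ax.get_figure().canvas.draw()

        def dostuff1(ax):            # the helpers never call draw() themselves
            ax.scatter([0.], [0.])

        def dostuff2(ax):
            ax.scatter([1.], [1.])

        fig = plt.figure()
        ax = fig.add_subplot(1, 1, 1)

        with drawing(ax):            # one draw, however many helpers run inside
            dostuff1(ax)
            dostuff2(ax)

    Interactive one-off calls stay just as convenient (with drawing(ax): dostuff1(ax)), and a decorator plus a module-level "suppress redraws" flag achieves the same thing if the with-blocks feel too heavy.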

    Read the article

  • Suggestions on how build an HTML Diff tool?

    - by Danimal
    In this post I asked if there were any tools that compare the structure (not actual content) of 2 HTML pages. I ask because I receive HTML templates from our designers, and frequently miss minor formatting changes in my implementation. I then waste a few hours of designer time sifting through my pages to find my mistakes. The thread offered some good suggestions, but there was nothing that fit the bill. "Fine, then", thought I, "I'll just crank one out myself. I'm a halfway-decent developer, right?". Well, once I started to think about it, I couldn't quite figure out how to go about it. I can crank out a data-driven website easily enough, or do a CMS implementation, or throw documents in and out of BizTalk all day. Can't begin to figure out how to compare HTML docs. Well, sure, I have to read the DOM, and iterate through the nodes. I have to map the structure to some data structure (how??), and then compare them (how??). It's a development task like none I've ever attempted. So now that I've identified a weakness in my knowledge, I'm even more challenged to figure this out. Any suggestions on how to get started? clarification: the actual content isn't what I want to compare -- the creative guys fill their pages with lorem ipsum, and I use real content. Instead, I want to compare structure: <div class="foo">lorem ipsum<div> is different that <div class="foo"><p>lorem ipsum<p><div>
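    One reasonably simple way to attack it, sketched in Python here purely for brevity: reduce each page to a structural skeleton (tag names plus id/class, indented by nesting depth, all text dropped) and run an ordinary line diff over the two skeletons. The class names and the choice of which attributes to keep are arbitrary choices for this sketch, not an established tool:

        from html.parser import HTMLParser
        import difflib

        class Skeleton(HTMLParser):
            """Collects one line per element: indentation = nesting depth, id/class only."""
            def __init__(self):
                super().__init__()
                self.lines = []
                self.depth = 0

            def handle_starttag(self, tag, attrs):
                keep = sorted((k, v) for k, v in attrs if k in ("id", "class"))
                self.lines.append("  " * self.depth + tag + (" " + str(keep) if keep else ""))
                self.depth += 1

            def handle_endtag(self, tag):
                self.depth = max(0, self.depth - 1)

        def skeleton(html):
            parser = Skeleton()
            parser.feed(html)
            return parser.lines

        design = '<div class="foo">lorem ipsum</div>'
        implementation = '<div class="foo"><p>lorem ipsum</p></div>'
        print("\n".join(difflib.unified_diff(skeleton(design), skeleton(implementation),
                                             "design", "implementation", lineterm="")))

    The diff output then points straight at the element that was added, removed, or had its class changed, which is exactly the drift between the designer's template and the implementation described above.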

    Read the article

  • How to access CSS generated content with JavaScript

    - by Boldewyn
    I generate the numbering of my headers and figures with CSS's counter and content properties: img.figure:after { counter-increment: figure; content: "Fig. " counter(section) "." counter(figure); } This (appropriate browser assumed) gives a nice labelling "Fig. 1.1", "Fig. 1.2" and so on following any image. Question: How can I access this from Javascript? The question is twofold in that I'd like to access either the current value of a certain counter (at a certain DOM node) or the value of the CSS generated content (at a certain DOM node) or, obviously, both information. Background: I'd like to append to links back-referencing to figures the appropriate number, like this: <a href="#fig1">see here</h> ------------------------^ " (Fig 1.1)" inserted via JS As far as I can see, it boils down to this problem: I could access content or counter via getComputedStyle: var fig_content = window.getComputedStyle( document.getElementById('fig-a'), ':after').content; However, this is not the live value, but the one declared in the stylesheet. I cannot find any interface to access the real live value.
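    Since getComputedStyle only hands back the declared expression, one workaround is to recompute the same numbers in script by walking the document in order. The sketch below assumes the section counter is incremented by h2 headings (with the figure counter reset at each one); adjust the selector to whatever the stylesheet actually counts:

        function figureLabels() {
          var labels = {};                       // maps figure id -> "Fig. s.f"
          var section = 0, figure = 0;
          var nodes = document.querySelectorAll('h2, img.figure');
          for (var i = 0; i < nodes.length; i++) {
            if (nodes[i].tagName === 'H2') {
              section += 1;
              figure = 0;                        // mirrors a counter-reset on each section
            } else {
              figure += 1;
              if (nodes[i].id) {
                labels[nodes[i].id] = 'Fig. ' + section + '.' + figure;
              }
            }
          }
          return labels;
        }

        // append " (Fig. 1.1)" to every back-reference link such as <a href="#fig1">
        var labels = figureLabels();
        var links = document.querySelectorAll('a[href^="#fig"]');
        for (var i = 0; i < links.length; i++) {
          var target = links[i].getAttribute('href').slice(1);
          if (labels[target]) {
            links[i].appendChild(document.createTextNode(' (' + labels[target] + ')'));
          }
        }

    It duplicates the counting logic, but it stays in sync as long as the script walks the same elements the CSS counters count.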

    Read the article

  • text not wrapping around some floated images, wraps in IE & FF but not chrome, safari

    - by Hartley
    This is unlike anything I've read about and I've been totally scratching my head for the last few hours trying to figure out what's going on. I have a hand-coded site @ hartbro.com Part of the site is a blog, in which I include pictures. Here's the HTML code around one of the images that's causing trouble. <a href="blogcontent/090811.jpg" class="img"> <img src="blogcontent/090811.jpg" alt="Downed trees" width="25%" class="floatright" /></a> The storm left as quickly as it came. The sky cleared up and we were glad that the oppressive heat had let up. What I've noticed is that, on some of the blog entries that include more than one image, the 2nd image isn't really floating like its supposed to be, with the text wrapping around it. I figure its got to be some sort of conflict with some CSS that I have that's causing the problem but I just can't figure out what it is. I don't understand how it works in FF & IE but not Chrome or Safari?? Here's all of the relevant CSS, let me know if you need anything else. Thanks in advance. img{ margin:10px; } img.floatleft{ float:left; } img.floatright{ float:right; } edit: here's an screen-shot of what's happening.

    Read the article

  • Common lisp, CFFI, and instantiating c structs

    - by andrew
    Hi, I've been on Google for about, oh, 3 hours looking for a solution to this "problem." I'm trying to figure out how to instantiate a C structure in Lisp using CFFI. I have a struct in C: struct cpVect{cpFloat x,y;} Simple, right? I have auto-generated CFFI bindings (swig, I think) to this struct: (cffi:defcstruct #.(chipmunk-lispify "cpVect" 'classname) (#.(chipmunk-lispify "x" 'slotname) :double) (#.(chipmunk-lispify "y" 'slotname) :double)) This generates a struct "VECT" with slots :X and :Y, which foreign-slot-names confirms (please note that I neither generated the bindings nor programmed the C library (chipmunk physics), but the actual functions are being called from Lisp just fine). I've searched far and wide, and maybe I've seen it 100 times and glossed over it, but I cannot figure out how to create an instance of cpVect in Lisp to use in other functions. Note the function: cpShape *cpPolyShapeNew(cpBody *body, int numVerts, cpVect *verts, cpVect offset) Takes not only a cpVect, but also a pointer to a set of cpVects, which brings me to my second question: how do I create a pointer to a set of structs? I've been to http://common-lisp.net/project/cffi/manual/html_node/defcstruct.html and tried the code, but get "Error: Unbound variable: PTR" (I'm in Clozure CL), not to mention that it looks to only return a pointer, not an instance. I'm new to Lisp, been going pretty strong so far, but this is the first real problem I've hit that I can't figure out. Thanks!
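    Not a CFFI answer, but a loosely analogous sketch in Python's ctypes, included only to illustrate the two operations being asked about: allocating a single C struct, and building a pointer to a contiguous array of them (on the CFFI side the corresponding tools are foreign-alloc / with-foreign-object plus foreign-slot-value). Everything here is illustrative.

    import ctypes

    class cpVect(ctypes.Structure):
        _fields_ = [("x", ctypes.c_double), ("y", ctypes.c_double)]

    # A single instance, analogous to allocating one cpVect.
    v = cpVect(1.0, 2.0)

    # "Pointer to a set of structs": a contiguous array of four cpVects,
    # then a typed pointer to its first element.
    verts = (cpVect * 4)(cpVect(0, 0), cpVect(0, 1), cpVect(1, 1), cpVect(1, 0))
    verts_ptr = ctypes.cast(verts, ctypes.POINTER(cpVect))

    print(v.x, verts_ptr[2].y)   # 1.0 1.0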

    Read the article

  • I need some help cropping an image in PHP (GD)

    - by evan
    http://i.imgur.com/foT9u.jpg Using that image as an example, here's what I need to do: Crop the blue square to have the same proportional ratio as that of the black square. From doing that, I should then be able to resize the blue square to fit into the black square without stretching it - it'll retain its proportions. Note: The blue square must be cropped 'from the center'. The original center should remain the center after the crop (it can't be cropped from the top left, for example). Here's what I'm thinking needs to be done (using the landscape blue square as the example): Figure out the difference between the black square's width and height. Figure out the difference between the blue square's width and height. This should tell me how much to crop the blue square by, and with how much of a 'top offset'. Once it's cropped to fit the black square's proportions, it can then be resized. I've been messing around with code similar to: if (BLACK_WIDTH > BLACK_HEIGHT) { $diffHeight = BLACK_WIDTH - BLACK_HEIGHT; $newHeight = $blue_Height - $blue_Height; echo $newHeight; } And using Photoshop to try and get a feel for how this should be done, but it continues to fail. How should I go about doing this? How can I figure out how much to crop by (depending on whether the blue square is landscape or portrait)? How do I then get the offset to retain the blue square's center?
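    Here is a minimal sketch, in Python for readability, of the crop-to-aspect-ratio math described above; a PHP/GD version would feed the resulting box into imagecopyresampled(). The function and variable names are illustrative.

    def center_crop_box(src_w, src_h, target_w, target_h):
        """Return (x, y, w, h): the largest centered region of the source
        image that has the same aspect ratio as the target frame."""
        target_ratio = target_w / target_h
        src_ratio = src_w / src_h

        if src_ratio > target_ratio:
            # Source is too wide for the target: keep full height, trim the sides.
            crop_h = src_h
            crop_w = int(src_h * target_ratio)
        else:
            # Source is too tall: keep full width, trim top and bottom equally.
            crop_w = src_w
            crop_h = int(src_w / target_ratio)

        # Center the crop so the original midpoint stays the midpoint.
        x = (src_w - crop_w) // 2
        y = (src_h - crop_h) // 2
        return x, y, crop_w, crop_h

    # Example: a 1200x500 "blue" image destined for an 800x600 "black" frame
    # yields a 666x500 box at offset (267, 0), which can then be resized to 800x600.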

    Read the article

  • Task predecessor/dependencies logic for task management application

    - by Serge
    Hey guys, I'm trying to figure out the logic for creating tasks that have dependencies. In short, I'm building a dynamic task management system, and each task has several options; one of them is to have the task start after a predecessor. Users can add/remove/re-order (by drag & drop) tasks, so I'm wondering how I can make the predecessors dynamic. Here's an example of what I mean: Task 1 Task 2 Task 3 - dependent on task 2 Task 4 - dependent on task 2 Tasks get renamed on delete and/or re-order. If task 1 gets deleted, then 3 and 4 should become dependent on task 1 (which is the old task 2). I've been banging my head for the past few hours trying to figure out how to do that. I'm using jQuery right now and each task is contained in a div with an incremental id (i.e. id="task1") that gets renamed whenever a task is removed or re-ordered, and I'm using a dynamically populated drop-down for selecting a predecessor. What would be the easiest way to get this done? By the way, I'm not necessarily asking for code, just trying to figure out the best way to tackle this.
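    One way to tackle it, sketched here in Python purely to show the data model (the same idea ports directly to the jQuery side): give every task a permanent internal uid, store dependencies against that uid, and derive the displayed "Task N" label from the current position. All names are illustrative.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Task:
        uid: int                           # never changes; dependencies point at this
        title: str = ""
        predecessor: Optional[int] = None  # uid of the task this one starts after

    class TaskList:
        def __init__(self):
            self._next_uid = 1
            self.tasks = []

        def add(self, title, predecessor=None):
            task = Task(self._next_uid, title, predecessor)
            self._next_uid += 1
            self.tasks.append(task)
            return task

        def delete(self, uid):
            removed = next(t for t in self.tasks if t.uid == uid)
            self.tasks = [t for t in self.tasks if t.uid != uid]
            # Anything that depended on the deleted task inherits its predecessor.
            for t in self.tasks:
                if t.predecessor == uid:
                    t.predecessor = removed.predecessor

        def label(self, task):
            # "Task 2" is simply whatever currently sits second in the list,
            # so re-ordering or deleting never breaks a stored dependency.
            return "Task %d" % (self.tasks.index(task) + 1)

    On the page, the div ids would carry the uid (e.g. id="task-uid-7") while the visible numbering and the predecessor drop-down are recomputed after every drag or delete.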

    Read the article

  • django: How to make one form from multiple models containing foreignkeys

    - by Tim
    I am trying to make a form on one page that uses multiple models. The models reference each other. I am having trouble getting the form to validate because I can't figure out how to get the id of two of the models used in the form into the form to validate it. I used a hidden key in the template but I can't figure out how to make it work in the views. My code is below: views: def the_view(request, a_id,): if request.method == 'POST': b_form= BForm(request.POST) c_form =CForm(request.POST) print "post" if b_form.is_valid() and c_form.is_valid(): print "valid" b_form.save() c_form.save() return HttpResponseRedirect(reverse('myproj.pro.views.this_page')) else: b_form= BForm() c_form = CForm() b_ide = B.objects.get(pk=request.b_id) id_of_a = A.objects.get(pk=a_id) return render_to_response('myproj/a/c.html', {'b_form':b_form, 'c_form':c_form, 'id_of_a':id_of_a, 'b_id':b_ide }) models class A(models.Model): name = models.CharField(max_length=256, null=True, blank=True) classe = models.CharField(max_length=256, null=True, blank=True) def __str__(self): return self.name class B(models.Model): aid = models.ForeignKey(A, null=True, blank=True) number = models.IntegerField(max_length=1000) other_number = models.IntegerField(max_length=1000) class C(models.Model): bid = models.ForeignKey(B, null=False, blank=False) field_name = models.CharField(max_length=15) field_value = models.CharField(max_length=256, null=True, blank=True) forms from mappamundi.mappa.models import A, B, C class BForm(forms.ModelForm): class Meta: model = B exclude = ('aid',) class CForm(forms.ModelForm): class Meta: model = C exclude = ('bid',) B has a foreign key reference to A, C has a foreign key reference to B. Since the models are related, I want to have the forms for them on one page, with one submit button. Since I need to fill out fields for the forms for B and C and I don't want to select the id of B from a drop-down list, I need to somehow get the id of the B form into the form so it will validate. I have a hidden field in the template; I just need to figure out how to do it in the views.
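    A hedged sketch of one common pattern for this view: validate both forms, save B with commit=False so its parent A can be attached, then attach the freshly saved B to C before saving it -- no hidden field carrying B's id is needed. Model and form names follow the question; the template path and URL name are illustrative.

    from django.http import HttpResponseRedirect
    from django.shortcuts import get_object_or_404, render_to_response
    from django.core.urlresolvers import reverse

    def the_view(request, a_id):
        a = get_object_or_404(A, pk=a_id)
        if request.method == 'POST':
            b_form = BForm(request.POST)
            c_form = CForm(request.POST)
            if b_form.is_valid() and c_form.is_valid():
                b = b_form.save(commit=False)
                b.aid = a                 # wire B to its parent A
                b.save()
                c = c_form.save(commit=False)
                c.bid = b                 # wire C to the B just created
                c.save()
                return HttpResponseRedirect(reverse('this_page'))
        else:
            b_form = BForm()
            c_form = CForm()
        return render_to_response('myproj/a/c.html',
                                  {'b_form': b_form, 'c_form': c_form, 'id_of_a': a})

    Because both foreign keys are excluded from the forms and assigned in the view, neither form needs the other's id in order to validate.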

    Read the article

  • Calling SDL/OpenGL from Assembly code on Linux

    - by Lie Ryan
    I'm writing a simple graphics-based program in assembly for learning purposes; for this, I intend to use either OpenGL or SDL. I'm trying to call OpenGL/SDL functions from assembly. The problem is, unlike in many assembly and OpenGL/SDL tutorials I found on the internet, the OpenGL/SDL on my machine apparently doesn't use the C calling convention. I wrote a simple program in C, compiled it to assembly (using the -S switch), and apparently the assembly code generated by GCC calls the OpenGL/SDL functions by passing parameters in registers instead of pushing them onto the stack. Now, the question is, how do I determine how to pass arguments to these OpenGL/SDL functions? That is, how do I figure out which argument corresponds to which register? Obviously, since GCC can compile C code that calls OpenGL/SDL, there must be a way to figure out the correspondence between function arguments and registers. In the C calling convention, the rule is easy: push parameters in reverse order and return the value in eax/rax, so I can simply read the C documentation and easily figure out how to pass the parameters. But how about these? Is there a way to call OpenGL/SDL using the C calling convention? By the way, I'm using yasm, with gcc/ld as the linker, on Gentoo Linux amd64.

    Read the article

  • MySQL Ratings From Two Tables

    - by DirtyBirdNJ
    I am using MySQL and PHP to build a data layer for a Flash game. Retrieving lists of levels is pretty easy, but I've hit a roadblock in trying to fetch the level's average rating along with its pointer information. Here is an example data set: levels Table: level_id | level_name 1 | Some Level 2 | Second Level 3 | Third Level ratings Table: rating_id | level_id | rating_value 1 | 1 | 3 2 | 1 | 4 3 | 1 | 1 4 | 2 | 3 5 | 2 | 4 6 | 2 | 1 7 | 3 | 3 8 | 3 | 4 9 | 3 | 1 I know this requires a join, but I cannot figure out how to get the average rating value based on the level_id when I request a list of levels. This is what I'm trying to do: SELECT levels.level_id, AVG(ratings.level_rating WHERE levels.level_id = ratings.level_id) FROM levels I know my SQL is flawed there, but I can't figure out how to get this concept across. The only thing I can get to work is returning a single average from the entire ratings table, which is not very useful. Ideal output from the above conceptually valid but syntactically awry query would be: level_id | level_rating 1| 3.34 2| 1.00 3| 4.54 My main issue is I can't figure out how to use the level_id of each response row before the query has been returned. It's like I want to use a placeholder... or an alias... I really don't know and it's very frustrating. The solution I have in place now is an EPIC band-aid and will only cause me problems long-term... please help!
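    A minimal sketch of the kind of grouped query the question is reaching for -- join the two tables and let GROUP BY produce one averaged row per level -- shown here driven from Python with MySQLdb purely for illustration; the same SELECT works unchanged from PHP. The connection details are placeholders.

    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="game")
    cursor = conn.cursor()
    cursor.execute("""
        SELECT levels.level_id,
               levels.level_name,
               AVG(ratings.rating_value) AS level_rating
        FROM levels
        LEFT JOIN ratings ON ratings.level_id = levels.level_id
        GROUP BY levels.level_id, levels.level_name
    """)
    for level_id, level_name, level_rating in cursor.fetchall():
        # level_rating comes back as NULL (None) for levels with no ratings yet.
        print(level_id, level_name, level_rating)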

    Read the article

  • Changing middle mouse behavior in Adobe AIR HTML Control?

    - by Qz
    I'm using the Flex 4/Adobe AIR mx:HTML control in a project and I'm trying to figure out how to change the middle mouse behavior. For some reason the control treats middle mouse clicks and drags exactly the same as left mouse clicks and drags, navigating through links and selecting text -- everything. For my project I need to use the middle mouse button for a different function. I've figured out how to hook into the events (although mouseUp and mouseDown for Left/Middle don't work over HTML whitespace, perhaps because of the default behavior). However, I can't figure out how to stop the default behavior for middle mouse events. Calling preventDefault() on the HTML control mouse events doesn't work, presumably because the behavior is handled by the HTMLLoader within the HTML control, which I imagine hooks mouse events to the actual HTML content displayed on screen. Unfortunately I can't view the source for HTMLLoader to figure out what's going on, and browsing the properties and events doesn't shed any light on the situation either. Any help would be greatly appreciated! Oh, if anyone can suggest a different embeddable HTML renderer that doesn't have this problem, then that might work too; I'm not too tied to WebKit (although I do need to use AIR).

    Read the article

  • Query distinct list of choices for Django form with App Engine Datastore

    - by Brian
    I've been trying to figure this out for hours across a couple of days, and cannot get it to work. I've been everywhere. I'll continue trying to figure it out, but was hoping for a quicker solution. I'm using App Engine datastore + Django. Using a query in a view and custom forms, I was able to get a list to the form, but then I was not able to post. I have been trying to figure out how to dynamically add the choices as part of the Django form... I've tried various ways with no success. Help! Below are the two models. I'd like to get a distinct list of address_id values to show in the location field in InfoForm. These fields could (and maybe should) be named the same, but I thought it'd be easier if they were named differently. class Info(db.Model): user = db.UserProperty() location = db.StringProperty() info = db.StringProperty() created = db.DateTimeProperty(auto_now_add=True) modified = db.DateTimeProperty(auto_now=True) class Locations(db.Model): user = db.UserProperty() address_id = db.StringProperty() address = db.StringProperty() class InfoForm(djangoforms.ModelForm): info = forms.ChoiceField(choices=INFO_CHOICES) location = forms.ChoiceField() class Meta: model = Info exclude = ['user','created','modified']
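    A hedged sketch of one way to populate the location choices at request time: query Locations, de-duplicate address_id in Python (the old datastore has no DISTINCT), and assign the result in the form's __init__ so the same choices exist when the POST is validated. Model and field names follow the question; everything else is illustrative.

    from google.appengine.ext.db import djangoforms
    from django import forms

    class InfoForm(djangoforms.ModelForm):
        info = forms.ChoiceField(choices=INFO_CHOICES)
        location = forms.ChoiceField(choices=[])

        def __init__(self, *args, **kwargs):
            super(InfoForm, self).__init__(*args, **kwargs)
            # Build a distinct (address_id, address) list for the drop-down.
            seen = set()
            choices = []
            for loc in Locations.all():
                if loc.address_id not in seen:
                    seen.add(loc.address_id)
                    choices.append((loc.address_id, loc.address))
            self.fields['location'].choices = choices

        class Meta:
            model = Info
            exclude = ['user', 'created', 'modified']

    Because the choices are rebuilt every time the form is instantiated, the POSTed value is validated against the same list the user saw.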

    Read the article

  • How to replace auto-implemented c# get body at runtime or compile time?

    - by qstarin
    I've been trying to figure this out all night, but I guess my knowledge of the .NET Framework just isn't that deep, and the problem doesn't exactly Google well, but if I can get a nod in the right direction I'm sure I can implement it, one way or another. I'd like to be able to declare a property decorated with a custom attribute as such: public class MyClass { [ReplaceWithExpressionFrom(typeof(SomeOtherClass))] public virtual bool MyProperty { get; } } public class SomeOtherClass : IExpressionHolder<MyClass, bool> { ... } public interface IExpressionHolder<TArg, TResult> { Expression<Func<TArg, TResult>> Expression { get; } } And then somehow - this is the part I'm having trouble figuring out - replace the automatically generated implementation of that getter with a piece of custom code, something like: Type expressionHolderType = LookupAttributeCtorArgTypeInDeclarationOfPropertyWereReplacing(); return ReplaceWithExpressionFromAttribute.GetCompiledExpressionFrom(expressionHolderType)(this); The main thing I'm not sure how to do is replace the automatic implementation of the get. The first thing that came to mind was PostSharp, but that's a more complicated dependency than I care for. I'd much prefer a way to code it without using post-processing attached to the build (I think that's the gist of how PostSharp sinks its hooks in anyway). The other part of this I'm not so sure about is how to retrieve the type parameter passed to the particular instantiation of the ReplaceWithExpressionFrom attribute (where it decorates the property whose body I want to replace; in other words, how do I get typeof(SomeOtherClass) where I'm coding the get body replacement). I plan to cache compiled expressions from concrete instances of IExpressionHolder, as I don't want to do that every time the property gets retrieved. I figure this has just got to be possible. At the very least I figure I should be able to search an assembly for any method decorated with the attribute and somehow proxy the class or just replace the IL or... something? And I'd like to make the integration as smooth as possible, so if this can be done without explicitly calling a registration or initialization method somewhere, that'd be super great. Thanks!

    Read the article
