Search Results


  • How to backup virtual machines on a standalone ESXi host?

    - by Massimo
    Standalone ESXi (4.1) host without any vCenter Server. How can I back up virtual machines as quickly, and in as storage-friendly a way, as possible? I know I can access the ESXi console and use the standard Unix cp command, but this has the downside of copying the whole VMDK files, not just their actually used space; so for a 30-GB VMDK of which only 1 GB is used, the backup would take a full 30 GB of space, and time accordingly. And yes, I know about thin-provisioned virtual disks, but they tend to behave very badly when physically copied and/or to blow up to their full provisioned size; they are also not recommended for VM performance. It is OK for me to shut down the VMs before backing them up (i.e. I don't need "live" backups), but I do need a way to copy them around efficiently; and yes, a way to automate shutdown/startup when taking a backup would also help. I only have ESXi; no Service Console, no vCenter Server. What's the best way to handle this task? And what about restores?
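
    One possible approach, sketched below under the assumption that the ESXi shell is reachable over SSH: vim-cmd shuts the guest down cleanly and vmkfstools -i clones the disk into a thin-provisioned copy, so only the blocks actually in use are written. The VM ID, datastore paths, and file names are placeholders, not values from the question.

        # Hypothetical VM ID and paths -- adjust to the actual host layout
        VMID=42
        SRC=/vmfs/volumes/datastore1/myvm
        DST=/vmfs/volumes/backup/myvm

        # Ask the guest to shut down cleanly (needs VMware Tools), then wait for it
        vim-cmd vmsvc/power.shutdown $VMID
        while vim-cmd vmsvc/power.getstate $VMID | grep -q "Powered on"; do sleep 10; done

        # Clone the VMDK as thin-provisioned: only used blocks consume space and copy time
        mkdir -p "$DST"
        vmkfstools -i "$SRC/myvm.vmdk" "$DST/myvm.vmdk" -d thin

        # The remaining VM files (.vmx, .nvram) are small enough to copy verbatim
        cp "$SRC"/*.vmx "$SRC"/*.nvram "$DST/" 2>/dev/null

        # Power the VM back on
        vim-cmd vmsvc/power.on $VMID

    Restores would follow the reverse path: clone or copy the backup VMDK back onto a datastore, copy the .vmx alongside it, and register it with vim-cmd solo/registervm.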

    Read the article

  • How to setup a virtual host in Ubuntu running on Amazon EC2 instance?

    - by Rade
    I have an app that's accessible via 1.2.3.4/myapp. The app is installed in /var/www/myapp. I've set up a subdomain (apps.mydomain.com) that points to 1.2.3.4. I want the server to serve /var/www/myapp when I type apps.mydomain.com/myapp; how do I do that? I have experience creating virtual hosts (lots of them) locally, but I'm lost because this is now in production and it's a little different. Here's my virtual host config:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName apps.mydomain.com/myapp
            DocumentRoot /var/www/myapp/public
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride All
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    Any idea why I still see the raw files instead of being served from the document root? Just in case someone asks, the app is based on the Laravel 4 framework. It's really bad right now because anyone can access the files from the browser.
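
    A hedged sketch of how this is commonly fixed, assuming the goal is for apps.mydomain.com/myapp to serve the Laravel public directory: ServerName takes a host name only (no path), an Alias maps the /myapp prefix onto the public directory, and -Indexes stops directory listings. The paths come from the question; everything else is an assumption to adapt.

        <VirtualHost *:80>
            ServerName apps.mydomain.com            # host name only; no path allowed here
            DocumentRoot /var/www/myapp/public

            # Map the /myapp URL prefix onto Laravel's public directory
            Alias /myapp /var/www/myapp/public

            <Directory /var/www/myapp/public>
                Options -Indexes +FollowSymLinks    # no directory listings
                AllowOverride All                   # let Laravel's .htaccess rewrites work
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>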

    Read the article

  • ORA-4030 Troubleshooting

    - by [email protected]
    QUICKLINK: Note 399497.1 FAQ ORA-4030; Note 1088087.1 ORA-4030 Diagnostic Tools [Video]

    Have you observed an ORA-4030 error reported in your alert log? ORA-4030 errors are raised when memory or resources are requested from the operating system and the operating system is unable to provide them. The arguments included with the ORA-4030 are often important for narrowing down the problem. For more specifics on the ORA-4030 error and the scenarios that lead to it, see Note 399497.1 FAQ ORA-4030.

    Looking for the best way to diagnose? There are several available diagnostic tools (error tracing, 11g Diagnosability, OCM, process memory guides, RDA, OS Watcher, diagnostic scripts) that collectively can prove powerful for identifying the cause of the ORA-4030.

    Error Tracing
    The ORA-4030 error usually occurs on the client workstation, so a trace file and alert log entry may not have been generated on the server side. It may be necessary to add additional tracing events to get initial diagnostics on the problem. To set up tracing to trap the ORA-4030, run the following on the server in SQL*Plus:

        alter system set events '4030 trace name heapdump level 536870917;name errorstack level 3';

    Once the error reoccurs with the event set, you can turn tracing off with:

        alter system set events '4030 trace name context off; name context off';

    NOTE: See Note 399497.1 for more diagnostics information to collect.

    11g Diagnosability
    Starting with Oracle Database 11g Release 1, the Diagnosability infrastructure places traces and core files into a location controlled by the DIAGNOSTIC_DEST initialization parameter when an incident, such as an ORA-4030, occurs. For earlier versions, the trace file is written to either USER_DUMP_DEST (if the error was caught in a user process) or BACKGROUND_DUMP_DEST (if the error was caught in a background process like PMON or SMON). The trace file may contain vital information about what led to the error condition. See Note 443529.1, 11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support [Video].

    Oracle Configuration Manager (OCM)
    Oracle Configuration Manager works with My Oracle Support to enable a proactive support capability that helps you organize, collect and manage your Oracle configurations. See the Oracle Configuration Manager Quick Start Guide, Note 548815.1 (My Oracle Support Configuration Management FAQ) and Note 250434.1 (BULLETIN: Learn More About My Oracle Support Configuration Manager).

    General Process Memory Guides
    An ORA-4030 indicates a limit has been reached with respect to the Oracle process private memory allocation. Each operating system handles memory allocations with Oracle slightly differently:
        Solaris: Note 163763.1
        Linux: Note 341782.1
        IBM AIX: Notes 166491.1 and 123754.1
        HP: Note 166490.1
        Windows: Note 225349.1, Note 373602.1, Note 231159.1, Note 269495.1, Note 762031.1
        Generic: Note 169706.1

    RDA
    The RDA report shows more detailed information about the database and server configuration. See Note 414966.1 RDA Documentation Index; to download RDA, refer to Note 314422.1 Remote Diagnostic Agent (RDA) 4 - Getting Started.

    OS Watcher (OSW)
    This tool is designed to gather operating-system-side statistics to compare with the findings from the database. It is a key tool in cases where memory usage is higher than expected on the server while ORA-4030 errors are not currently occurring. For details on setup and usage, see Note 301137.1 OS Watcher User Guide.

    Diagnostic Scripts
    Refer to Note 1088087.1: ORA-4030 Diagnostic Tools [Video].

    Common Causes/Solutions
    The ORA-4030 can occur for a variety of reasons. Some common causes are:
        * OS memory limit reached, such as physical memory and/or swap/virtual paging. For instance, IBM AIX can experience ORA-4030 issues related to swap scenarios; see Note 740603.1, 10.2.0.4 not using large pages on AIX, for more on that problem. Also reference Note 188149.1 for pointers on 10g and stack size issues.
        * OS limits reached (kernel or user shell limits) that cap overall, user-level or process-level memory.
        * OS limit on PGA memory size due to the SGA attach address. Reference: Note 1028623.6 SOLARIS How to Relocate the SGA.
        * Oracle internal limits on functionality like PL/SQL varrays or bulk collections; these ORA-4030 errors include arguments like "pl/sql vc2", "pmucalm coll", "pmuccst: adt/re". See Coding Pointers for pointers on application design to get around these issues.
        * Application design causing limits to be reached.
        * Bugs: space leaks, heap leaks.

    For reference to the content in this blog, refer to Note 1088267.1 Master Note for Diagnosing ORA-4030.
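
    Not part of the note above, but a common companion check when chasing process memory growth: a quick query against the standard v$process and v$session views shows which server processes hold the most PGA. This is a generic sketch; adjust as needed.

        -- Top 10 server processes by allocated PGA (sizes in MB)
        SELECT *
          FROM (SELECT p.spid, s.sid, s.username, s.program,
                       ROUND(p.pga_used_mem  / 1024 / 1024) AS pga_used_mb,
                       ROUND(p.pga_alloc_mem / 1024 / 1024) AS pga_alloc_mb,
                       ROUND(p.pga_max_mem   / 1024 / 1024) AS pga_max_mb
                  FROM v$process p
                  LEFT JOIN v$session s ON s.paddr = p.addr
                 ORDER BY p.pga_alloc_mem DESC)
         WHERE ROWNUM <= 10;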

    Read the article

  • Find only physical network adapters with WMI Win32_NetworkAdapter class

    - by Mladen Prajdic
    WMI is the Windows Management Instrumentation infrastructure for managing data and machines. We can access it by using WQL (the WMI query language, or SQL for WMI). One thing to remember from the WQL link is that it doesn't support ORDER BY. This means that when you do SELECT * FROM wmiObject, the returned order of the objects is not guaranteed. It can return adapters in a different order based on the logged-in user, the permissions of that user, etc. This is not documented anywhere that I've looked and is derived just from my observations.

    To get network adapters we have to query the Win32_NetworkAdapter class. This returns all network adapters that Windows detects, real and virtual ones; however, it only supplies IPv4 data. I've tried various methods of combining properties that are common on all systems since Windows XP.

    The first thing to do is remove all virtual adapters (like tunneling, WAN miniports, etc.) created by Microsoft. We do this by adding WHERE Manufacturer != 'Microsoft' to our WMI query. This greatly narrows the number of adapters we have to work with; just on my machine it went from 20 adapters to 5. What was left was one real physical Realtek LAN adapter, 2 virtual adapters installed by VMware and 2 virtual adapters installed by VirtualBox.

    If you read the Win32_NetworkAdapter help page you'd notice that there's an AdapterType property that enumerates various adapter types like LAN or Wireless, and an AdapterTypeID property that gives you the same information as AdapterType, only in integer form. The dirty little secret is that these 2 properties don't work. They are both hardcoded: AdapterTypeID to "0" and AdapterType to "Ethernet 802.3". The only exceptions I've seen so far are adapters that have no values at all for the two properties, the "RAS Async Adapter" that has AdapterType = "Wide Area Network" and AdapterTypeID = "3", and various tunneling adapters that have AdapterType = "Tunnel" and AdapterTypeID = "15". In the help docs there isn't even a value for 15. So this property was of no help.

    The next property to give hope is NetConnectionId. This is the name of the network connection as it appears in Control Panel -> Network Connections. The problem is that this value is localized into various languages and can have a different name for each connection, so this property was also of no help, and we haven't even started talking about eliminating virtual adapters.

    The next two properties I checked were ConfigManagerErrorCode and NetConnectionStatus, in hopes of finding disabled and disconnected adapters. If an adapter is enabled but disconnected, ConfigManagerErrorCode = 0 with a varying NetConnectionStatus. If the adapter is disabled, it reports ConfigManagerErrorCode = 22. This looked like a win by using (ConfigManagerErrorCode = 0 OR ConfigManagerErrorCode = 22) in our condition; this way we get enabled (connected and disconnected) adapters. The problem with all of the above properties is that none of them filter out the virtual adapters installed by virtualization software like VMware and VirtualBox.

    The last property to give hope is PNPDeviceID. There's an interesting observation about physical and virtual adapters with this property: every virtual adapter's PNPDeviceID starts with "ROOT\", even the VMware and VirtualBox ones. There were some really, really old physical adapters that had a PNPDeviceID starting with "ROOT\", but those were from the pre-Windows-XP era, AFAIK. Since my minimum system to check was Windows XP SP2, I didn't have to worry about those. The only virtual adapter I've seen that does not have a PNPDeviceID starting with "ROOT\" is the RAS Async Adapter for Wide Area Network, but because it is made by Microsoft we've already eliminated it with the first condition on the manufacturer.

    Using the PNPDeviceID has so far proven to be really effective, and I've tested it on over 20 different computers of various configurations, from Windows XP laptops with wireless and Bluetooth cards to virtualized Windows 2008 R2 servers. So far it has always worked as expected. I will appreciate you letting me know if you find a configuration where it doesn't work.

    Let's see some C# code that does this:

        ManagementObjectSearcher mos = null;

        // WHERE Manufacturer != 'Microsoft' removes all of the Microsoft-provided virtual
        // adapters like tunneling, miniports, and Wide Area Network adapters.
        mos = new ManagementObjectSearcher(@"SELECT * FROM Win32_NetworkAdapter WHERE Manufacturer != 'Microsoft'");

        // Trying the ConfigManagerErrorCode and NetConnectionStatus variations proved to still not be
        // enough: it returns adapters installed by virtualization software like VMware and VirtualBox.
        // ConfigManagerErrorCode = 0 -> device is working properly; this covers enabled and/or disconnected devices.
        // ConfigManagerErrorCode = 22 AND NetConnectionStatus = 0 -> device is disabled and disconnected.
        // Some virtual devices report ConfigManagerErrorCode = 22 (disabled) and some other NetConnectionStatus than 0.
        mos = new ManagementObjectSearcher(@"SELECT * FROM Win32_NetworkAdapter WHERE Manufacturer != 'Microsoft' AND (ConfigManagerErrorCode = 0 OR (ConfigManagerErrorCode = 22 AND NetConnectionStatus = 0))");

        // Final solution: filter on the Manufacturer and on PNPDeviceID not starting with "ROOT\".
        // Physical devices have a PNPDeviceID starting with "PCI\" or something else besides "ROOT\".
        mos = new ManagementObjectSearcher(@"SELECT * FROM Win32_NetworkAdapter WHERE Manufacturer != 'Microsoft' AND NOT PNPDeviceID LIKE 'ROOT\\%'");

        // Get the physical adapters and sort them by their index.
        // This is needed because they're not sorted by default.
        IList<ManagementObject> managementObjectList = mos.Get()
                                                          .Cast<ManagementObject>()
                                                          .OrderBy(p => Convert.ToUInt32(p.Properties["Index"].Value))
                                                          .ToList();

        // Let's just show all the properties for all physical adapters.
        foreach (ManagementObject mo in managementObjectList)
        {
            foreach (PropertyData pd in mo.Properties)
                Console.WriteLine(pd.Name + ": " + (pd.Value ?? "N/A"));
        }

    That's it. Hope this helps you in some way.

    Read the article

  • SPARC M7 Chip - 32 cores - Mind Blowing performance

    - by Angelo-Oracle
    The M7 Chip
    Oracle just announced its next-generation processor, the M7, at the Hot Chips HC26 conference. As the tech lead in our Systems Division's partner group, I had a front-row seat to the extraordinary price/performance advantage of Oracle's current T5 and M6 based systems. Partner after partner tested these systems and was impressed with their performance; just read some of the quotes to see what our partners have been saying about our hardware. The M7 has 32 cores (up from 16 cores in the T5 and 12 cores in the M6) and, built with 20 nm technology, it is our most advanced processor; it has more cores than anything else in the industry today. Since the Sun acquisition, Oracle has released 5 processors in 4 years, and this is the 6th.

    The S4 Core
    The M7 is built on the foundation of the S4 core, the next-generation core technology. Like its predecessor, the S4 has 8 dynamic threads. It increases the frequency while maintaining the pipeline depth. Each core has its own fine-grained power estimator that keeps the core within its power envelope at 250-nanosecond granularity. Each core also includes Software in Silicon features for application acceleration support, including features to improve Application Data Integrity with almost no performance loss, and the ability to use part of the virtual address to store metadata. User-level synchronization instructions are also part of the S4 core. Each core has a 16 KB instruction and a 16 KB data L1 cache.

    The Core Clusters
    The cores on the M7 chip are organized in sets of 4-core clusters that share L2 cache. All four cores in the cluster share 256 KB of 4-way set-associative L2 instruction cache, with over 1/2 TB/s of throughput. Each pair of cores shares 256 KB of 8-way set-associative L2 data cache, also with over 1/2 TB/s of throughput. With this core cluster architecture, the M7 doubles core execution bandwidth to maximize per-thread performance.

    The Chip
    Each M7 chip has 8 of these core clusters. The chip has 64 MB of on-chip L3 cache, shared among all the cores and partitioned into 8 x 8 MB chunks, each an 8-way set-associative cache. The aggregate bandwidth of the on-chip L3 cache is over 1.6 TB/s. Each chip has 4 DDR4 memory controllers and can support up to 16 DDR4 DIMMs, allowing for 2 TB of RAM per chip. The chip also includes 4 internal links of PCIe Gen3 I/O controllers. Each chip has 7 coherence links, allowing 8 of these chips to be connected together gluelessly; 32 of these chips can also be connected in an SMP configuration. A potential system with 32 chips would have 1024 cores, 8192 threads and 64 TB of RAM.

    Software in Silicon
    The M7 chip has many application accelerators built into silicon. These features will be exposed to our software partners through the SPARC Accelerator Program. The M7 has built-in logic to decompress data at the speed of memory access, which means applications can work directly on compressed data in memory, increasing data access rates. The VA Masking feature allows part of the virtual address to be used to store metadata.

    Real-time Application Data Integrity
    The Real-time Application Data Integrity feature helps applications safeguard against invalid or stale memory references and buffer overflows. The first 4 bits of a pointer can be used to store a version number, and this version number is also maintained in the memory and cache lines. When a pointer accesses memory, the hardware checks that the two versions match; a SEGV signal is raised when there is a mismatch. This feature can be used by the database, by applications and by the OS.

    M7 Database In-Memory Query Accelerator
    The M7 chip also includes in-silicon query engines. These accelerate tasks that work on in-memory columnar vectors. The Oracle Database In-Memory option stores data in column format, and the M7 query engine can speed up in-memory format conversion, value and range comparisons, and set-membership lookups. The engine can work on compressed data, which means we are not only accelerating query performance but also increasing the effective memory bandwidth for queries.

    SPARC Accelerated Program
    At the Hot Chips conference we also introduced the SPARC Accelerated Program to give our partners and third-party developers access to all the goodness of the M7's SPARC application acceleration features. Please get in touch with us if you are interested in knowing more about this program.

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving, so I'm republishing it here.

    How I Got 15x Improvement Without Really Trying
    John Feo, Sun Microsystems

    Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques.

    Introduction
    Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and ensure that monies supporting scientific research are used as effectively as possible.

    Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor to student or student to student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes.

    Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile.

    Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties.
    Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize.

    Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliations are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized.

    The study's results show that personal scientific codes are running many times slower than they should, and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices, but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster; in fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research.

        #    cache performance / redundant operations / loop structures    performance improvement
        1    x  x      15.5
        2    x         2.8
        3    x  x      2.5
        4    x         2.1
        5    x  x      2.0
        6    x         5.0
        7    x         5.8
        8    x         6.3
        9              2.2
        10   x  x      3.3
        Table 1 — Area of improvement and performance gains of 10 codes

    The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiency in the codes studied: cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised.

    Optimizing cache performance
    Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse.
    Six of the ten codes studied here benefited from such high-level optimizations.

    Array Accesses
    The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes (1, 2, 4, and 10) got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing:

        do I = 0, 1010, delta_x
          IM = I - delta_x
          IP = I + delta_x
          do J = 5, 995, delta_x
            JM = J - delta_x
            JP = J + delta_x
            T1 = CA1(IP, J) + CA1(I, JP)
            T2 = CA1(IM, J) + CA1(I, JM)
            S1 = T1 + T2 - 4 * CA1(I, J)
            CA(I, J) = CA1(I, J) + D * S1
          end do
        end do

    In code 2, the culprit is conditionals:

        do I = 1, N
          do J = 1, N
            if (IFLAG(I,J) .EQ. 0) then
              T1 = Value(I, J-1)
              T2 = Value(I-1, J)
              T3 = Value(I, J)
              T4 = Value(I+1, J)
              T5 = Value(I, J+1)
              Value(I,J) = 0.25 * (T1 + T2 + T5 + T4)
              Delta = ABS(T3 - Value(I,J))
              if (Delta .GT. MaxDelta) MaxDelta = Delta
            endif
          enddo
        enddo

    I fixed both programs by inverting the loops by hand.

    Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10.

    Code 5 has four-dimensional arrays and loops nested four deep. For all of the reasons cited above, the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is

        L1: for i          L2: for i          L3: for i
              for l              for l              for j
                for k              for j              for k
                  for j              for k              for l

    So only L3 accesses array elements in cache order. L1 is a very complex loop, much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops, aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses, but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists.

    Array Strides
    When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed with stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes.
    Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly, providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into contiguous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit-stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes

        do j = 1, GZ
          do i = 1, GZ
            T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
            T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
            S1 = T1 + T4 - 4 * CA1(i+0, j+0)
            CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
          enddo
        enddo

    where CA and CA1 are compressed arrays of size GZ.

    Code 7 traverses a list of objects, selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection.

    Data reuse
    In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure: as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3).

    In code 1, unrolling the outer and inner loop by one iteration increases the number of result values computed by the loop body from 1 to 4:

        do J = 1, GZ-2, 2
          do I = 1, GZ-2, 2
            T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
            T2 = CA1(i+1, j-1) + CA1(i+0, j+0)
            T3 = CA1(i+0, j+0) + CA1(i-1, j+1)
            T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
            T5 = CA1(i+2, j+0) + CA1(i+1, j+1)
            T6 = CA1(i+1, j+1) + CA1(i+0, j+2)
            T7 = CA1(i+2, j+1) + CA1(i+1, j+2)
            S1 = T1 + T4 - 4 * CA1(i+0, j+0)
            S2 = T2 + T5 - 4 * CA1(i+1, j+0)
            S3 = T3 + T6 - 4 * CA1(i+0, j+1)
            S4 = T4 + T7 - 4 * CA1(i+1, j+1)
            CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
            CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2
            CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3
            CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4
          enddo
        enddo

    The loop body executes 12 reads, whereas the rolled loop shown in the previous section executes 20 reads to compute the same four values.
    In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before:

        for (k = 0; k < NK[u]; k++) {
            sum = 0.0;
            for (y = 0; y < NY; y++) {
                sum += W[y][u][k] * delta[y];
            }
            backprop[i++] = sum;
        }

    and here is the after, for one of the loops unrolled 8 times:

        for (k = 0; k < KK - 8; k += 8) {
            sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
            sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
            for (y = 0; y < NY; y++) {
                sum0 += W[y][0][k+0] * delta[y];
                sum1 += W[y][0][k+1] * delta[y];
                sum2 += W[y][0][k+2] * delta[y];
                sum3 += W[y][0][k+3] * delta[y];
                sum4 += W[y][0][k+4] * delta[y];
                sum5 += W[y][0][k+5] * delta[y];
                sum6 += W[y][0][k+6] * delta[y];
                sum7 += W[y][0][k+7] * delta[y];
            }
            backprop[k+0] = sum0;
            backprop[k+1] = sum1;
            backprop[k+2] = sum2;
            backprop[k+3] = sum3;
            backprop[k+4] = sum4;
            backprop[k+5] = sum5;
            backprop[k+6] = sum6;
            backprop[k+7] = sum7;
        }

    Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends.

    Reducing instruction count
    Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent.

    Memory operations
    The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because, in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory.

    Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3:

        for (y = 0; y < NY; y++) {
            i = 0;
            for (u = 0; u < NU; u++) {
                for (k = 0; k < NK[u]; k++) {
                    dW[y][u][k] += delta[y] * I1[i++];
                }
            }
        }

    Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y].
    In reality, dW and delta do not overlap in memory, so I rewrote the loop as

        for (y = 0; y < NY; y++) {
            i = 0;
            Dy = delta[y];
            for (u = 0; u < NU; u++) {
                for (k = 0; k < NK[u]; k++) {
                    dW[y][u][k] += Dy * I1[i++];
                }
            }
        }

    Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler cannot determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays:

        #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1])

    The macro is too complex for the compiler to understand, and so it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define a0 = MAT4D(a,q,0,j,k) before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i].

    A similar problem appears in code 6, a Fortran program. The key loop in this program is

        do n1 = 1, nh
          nx1 = (n1 - 1) / nz + 1
          nz1 = n1 - nz * (nx1 - 1)
          do n2 = 1, nh
            nx2 = (n2 - 1) / nz + 1
            nz2 = n2 - nz * (nx2 - 1)
            ndx = nx2 - nx1
            ndy = nz2 - nz1
            gxx = grn(1,ndx,ndy)
            gyy = grn(2,ndx,ndy)
            gxy = grn(3,ndx,ndy)
            balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1
            balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy) * h1
          end do
        end do

    The programmer has written this loop well; there are no loop-invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time, and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays.

    Data operations
    Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 <= i < N, 0 <= j < M, for most values of i and j. Since the program computes most of the N x M possible inner products, it is more efficient to compute all the inner products in one triply nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling.

        for (i = 0; i < N; i += 8) {
            for (j = 0; j < M; j++) {
                sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
                sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
                for (k = 0; k < K; k++) {
                    sum0 += A[i+0][k] * B[j][k];
                    sum1 += A[i+1][k] * B[j][k];
                    sum2 += A[i+2][k] * B[j][k];
                    sum3 += A[i+3][k] * B[j][k];
                    sum4 += A[i+4][k] * B[j][k];
                    sum5 += A[i+5][k] * B[j][k];
                    sum6 += A[i+6][k] * B[j][k];
                    sum7 += A[i+7][k] * B[j][k];
                }
                C[i+0][j] = sum0;
                C[i+1][j] = sum1;
                C[i+2][j] = sum2;
                C[i+3][j] = sum3;
                C[i+4][j] = sum4;
                C[i+5][j] = sum5;
                C[i+6][j] = sum6;
                C[i+7][j] = sum7;
            }
        }

    This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer.

    In code 5, we have the data version of the index optimization in code 6.
    Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time.

    The main loop in code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index:

        for (j = 0; j < N; j++) {
            for (i = 0; i < M; i++) {
                r = i * hrmax;
                R = A[j];
                temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
                high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
                low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0));
                kap = (R > PRM[6]) ? high * R * R / (1.0 + pow(PRM[4]*r, 2.0))
                                   : low * pow(R/PRM[6], PRM[5]);
                /* rest of loop omitted */
            }
        }

    Note that the value of temp is invariant to j. Thus, we can hoist the computation of temp out of the loop and save its values in an array:

        for (i = 0; i < M; i++) {
            r = i * hrmax;
            TEMP[i] = pow(r, PRM[3]);
        }

    [N.B. the case for PRM[3] = 0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is

        for (j = 0; j < N; j++) {
            R = rig[j] / 1000.;
            tmp1 = kcoeff * par[2] * beta[j] * par[4];
            tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
            tmp3 = 1.0 + (par[4] * par[4] * R * R);
            tmp4 = par[6] * par[6] / tmp2;
            tmp5 = R * R / tmp3;
            tmp6 = pow(R / par[6], par[5]);
            if ((par[3] == 0.0) && (R > par[6])) {
                for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp5;
            } else if ((par[3] == 0.0) && (R <= par[6])) {
                for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp4 * tmp6;
            } else if ((par[3] != 0.0) && (R > par[6])) {
                for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp5;
            } else if ((par[3] != 0.0) && (R <= par[6])) {
                for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
            }
            for (i = 0; i < M; i++) {
                kap = KAP[i];
                r = i * hrmax;
                /* rest of loop omitted */
            }
        }

    Maybe not the prettiest piece of code, but certainly much more efficient than the original loop.

    Copy operations
    Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages.

    Code 1 declares two arrays, one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying, but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2).
    Then store the problem's initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays.

    The old/new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur was in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.

    Optimizing loop structures
    Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction-level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate conditionals, or isolate them in their own loops, as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet:

        MaxDelta = 0.0
        do J = 1, N
          do I = 1, M
            < code omitted >
            Delta = abs(OldValue - NewValue)
            if (Delta > MaxDelta) MaxDelta = Delta
          enddo
        enddo
        if (MaxDelta .gt. 0.001) goto 200

    Since the only use of MaxDelta is to control the jump to 200, and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as

        MaxDelta = .false.
        do J = 1, N
          do I = 1, M
            < code omitted >
            Delta = abs(OldValue - NewValue)
            MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
          enddo
        enddo
        if (MaxDelta) goto 200

    thereby eliminating the conditional expression from the inner loop.

    A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction-level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays.
    I found such an occasion in code 10, where I split the loop

        do i = 1, n
          do j = 1, m
            A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
            B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
            A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
            B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
            C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
            D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
            C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
            D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
          end do
        end do

    into two disjoint loops

        do i = 1, n
          do j = 1, m
            A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
            B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
            A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
            B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
          end do
        end do

        do i = 1, n
          do j = 1, m
            C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
            D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
            C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
            D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
          end do
        end do

    Conclusions
    Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single-processor performance of all codes. Improvements range from 2x to 15.5x with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel, despite the availability of parallel systems to all developers.

    Clearly, we have a problem: personal scientific research codes are highly inefficient and not running parallel. The developers are unaware of simple optimization techniques that would make their programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three or four day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their own codes. Practice is the key to becoming proficient at optimization.

    I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes.
    These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.

    About the Author
    John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory, where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company, where he was project manager for the MTA and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • Binary search in a sorted (memory-mapped ?) file in Java

    - by sds
    I am struggling to port a Perl program to Java, and learning Java as I go. A central component of the original program is a Perl module that does string prefix lookups in a 500+ GB sorted text file using binary search (essentially, "seek" to a byte offset in the middle of the file, backtrack to the nearest newline, compare the line prefix with the search string, "seek" to half/double that byte offset, repeat until found...). I have experimented with several database solutions but found that nothing beats this in sheer lookup speed with data sets of this size. Do you know of any existing Java library that implements such functionality? Failing that, could you point me to some idiomatic example code that does random-access reads in text files? Alternatively, I am not familiar with the new (?) Java I/O libraries, but would it be an option to memory-map the 500 GB text file (I'm on a 64-bit machine with memory to spare) and do binary search on the memory-mapped byte array? I would be very interested to hear any experiences you have to share about this and similar problems.
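
    Not an existing library, but a rough sketch of the seek/backtrack/compare loop in plain java.io, assuming single-byte-encoded data, Unix newlines, and a file sorted in byte order; the class and method names are made up for illustration.

        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class SortedFileSearch {
            // Returns a line starting with prefix, or null if none is found.
            static String findLineWithPrefix(RandomAccessFile f, String prefix) throws IOException {
                long lo = 0, hi = f.length();
                while (lo < hi) {
                    long mid = lo + (hi - lo) / 2;
                    long lineStart = backtrackToLineStart(f, mid);
                    f.seek(lineStart);
                    String line = f.readLine();            // readLine() decodes bytes as ISO-8859-1-like
                    if (line == null) break;
                    if (line.startsWith(prefix)) return line;
                    if (line.compareTo(prefix) < 0) lo = f.getFilePointer();  // search the upper half
                    else hi = lineStart;                                      // search the lower half
                }
                return null;
            }

            // Walks backwards from pos to the byte just after the previous '\n'.
            static long backtrackToLineStart(RandomAccessFile f, long pos) throws IOException {
                while (pos > 0) {
                    f.seek(pos - 1);
                    if (f.read() == '\n') break;
                    pos--;
                }
                return pos;
            }
        }

    On the memory-mapping question: a single MappedByteBuffer is capped at 2 GB, so a 500 GB file would need a window of many mappings; the plain RandomAccessFile approach above sidesteps that at the cost of a system call per read.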

    Read the article

  • How to have a policy class implement a virtual function?

    - by dehmann
    I'm trying to design a policy-based class, where a certain interface is implemented by the policy itself, so the class derives from the policy, which itself is a template (I got this kind of thinking from Alexandrescu's book):

        #include <iostream>
        #include <vector>

        class TestInterface {
        public:
            virtual void test() = 0;
        };

        class TestImpl1 {
        public:
            void test() { std::cerr << "Impl1" << std::endl; }
        };

        template<class TestPolicy>
        class Foo : public TestInterface, TestPolicy {
        };

    Then, in the main() function, I call test() on (potentially) various different objects that all implement the same interface:

        int main() {
            std::vector<TestInterface*> foos;
            foos.push_back(new Foo<TestImpl1>());
            foos[0]->test();
            delete foos[0];
            return 0;
        }

    It doesn't compile, though, because "the following virtual functions are pure within 'Foo<TestImpl1>': virtual void TestInterface::test()". I thought TestInterface::test() was implemented because we derive from TestImpl1?
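
    For what it's worth, a minimal sketch of one common resolution (not from the question itself): TestImpl1::test() does not override TestInterface::test() because the two classes are unrelated, so Foo can provide the override and forward to the policy.

        // Sketch: Foo satisfies the pure virtual by forwarding to the policy.
        template<class TestPolicy>
        class Foo : public TestInterface, TestPolicy {
        public:
            virtual void test() { TestPolicy::test(); }  // forwards to TestImpl1::test()
        };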

    Read the article

  • Why do pure virtual base classes get direct access to static data members while derived instances do

    - by Shamster
    I've created a simple pair of classes. One is pure virtual with a static data member, and the other is derived from the base, as follows:

        #include <iostream>

        template <class T>
        class Base {
        public:
            Base (const T _member) { member = _member; }
            static T member;
            virtual void Print () const = 0;
        };

        template <class T> T Base<T>::member;

        template <class T>
        void Base<T>::Print () const {
            std::cout << "Base: " << member << std::endl;
        }

        template <class T>
        class Derived : public Base<T> {
        public:
            Derived (const T _member) : Base<T>(_member) { }
            virtual void Print () const {
                std::cout << "Derived: " << this->member << std::endl;
            }
        };

    I've found from this relationship that when I need access to the static data member in the base class, I can refer to it directly, as if it were a regular, non-static class member; i.e. the Base::Print() method does not require a this-> qualifier. However, the derived class does require the indirect this->member access syntax. I don't understand why this is. Both class methods are accessing the same static data, so why does the derived class need further specification? A simple call to test it is:

        int main () {
            Derived<double> dd (7.0);
            dd.Print();
            return 0;
        }

    which prints the expected "Derived: 7".
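
    A short aside, not part of the original question: inside a class template, unqualified names are not looked up in a dependent base class (two-phase name lookup), which is why Derived must write this->member while Base can use member directly. A sketch of the usual alternatives to this->:

        template <class T>
        class Derived : public Base<T> {
        public:
            using Base<T>::member;                 // bring the dependent name into scope
            Derived (const T _member) : Base<T>(_member) { }
            virtual void Print () const {
                std::cout << "Derived: " << member << std::endl;           // OK with the using-declaration
                std::cout << "Derived: " << Base<T>::member << std::endl;  // or qualify explicitly
            }
        };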

    Read the article

  • Using game of life or other virtual environment for artificial (intelligence) life simulation? [clos

    - by Berlin Brown
    One of my interests in AI focuses not so much on data but more on biologic computing. This includes neural networks, mapping the brain, cellular automata, virtual life and environments. Described below is an exciting project that involves developing a virtual environment for bots to evolve in. "Polyworld is a cross-platform (Linux, Mac OS X) program written by Larry Yaeger to evolve Artificial Intelligence through natural selection and evolutionary algorithms." http://en.wikipedia.org/wiki/Polyworld Polyworld is a promising project for studying virtual life, but it is still far from creating an "intelligent autonomous" agent. Here is my question: in theory, what parameters would you use to create an AI environment? Possibly a brain environment? Possibly multiple self-contained life organisms that have their own "brain" or life structures? I would like to create a spin on the Game of Life simulation. What if you have a 64x64 Game of Life grid, but instead of one grid, you have N grids? The N grids are your "life force": if all of the Game of Life entities die in a particular grid, then that entire grid dies, and a group of grids makes up a life form. I don't have an immediate goal. First, I want to simulate an environment, visualize what is going on in it with OpenGL, and see if there are any interesting properties to the environment. I then want to add "scarce resources" and see if the AI environment can manage resources adequately.
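
    A rough sketch of the multi-grid idea described above, with illustrative names and an assumed survival rule: each organism owns N independent Game of Life grids, a grid is dead when none of its cells are alive, and the organism dies when all of its grids are dead.

        import java.util.Random;

        class LifeGrid {
            static final int SIZE = 64;
            boolean[][] cells = new boolean[SIZE][SIZE];

            boolean anyAlive() {
                for (boolean[] row : cells)
                    for (boolean c : row)
                        if (c) return true;
                return false;
            }

            void step() {
                boolean[][] next = new boolean[SIZE][SIZE];
                for (int x = 0; x < SIZE; x++)
                    for (int y = 0; y < SIZE; y++) {
                        int n = liveNeighbours(x, y);
                        // Conway's rules: a live cell survives with 2 or 3 neighbours,
                        // a dead cell becomes alive with exactly 3.
                        next[x][y] = cells[x][y] ? (n == 2 || n == 3) : (n == 3);
                    }
                cells = next;
            }

            int liveNeighbours(int x, int y) {
                int n = 0;
                for (int dx = -1; dx <= 1; dx++)
                    for (int dy = -1; dy <= 1; dy++) {
                        if (dx == 0 && dy == 0) continue;
                        int nx = (x + dx + SIZE) % SIZE, ny = (y + dy + SIZE) % SIZE; // toroidal wrap
                        if (cells[nx][ny]) n++;
                    }
                return n;
            }
        }

        class Organism {
            LifeGrid[] grids;   // the organism's "life force"

            Organism(int n, Random rnd) {
                grids = new LifeGrid[n];
                for (int i = 0; i < n; i++) {
                    grids[i] = new LifeGrid();
                    for (int x = 0; x < LifeGrid.SIZE; x++)
                        for (int y = 0; y < LifeGrid.SIZE; y++)
                            grids[i].cells[x][y] = rnd.nextBoolean();
                }
            }

            void step()     { for (LifeGrid g : grids) if (g.anyAlive()) g.step(); }
            boolean alive() { for (LifeGrid g : grids) if (g.anyAlive()) return true; return false; }
        }

    Visualization (OpenGL) and the "scarce resources" rules would sit on top of this; the sketch only captures the grid-of-grids life-force structure.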

    Read the article

  • How to create a "Shell IDList Array" to support drag-and-drop of virtual files from C# to Windows Ex

    - by JustABill
    I started trying to implement drag-and-drop of virtual files (from a C# 4/WPF app) with this codeplex tutorial. After spending some time trying to figure out a DV_E_FORMATETC error, I figured out I need to support the "Shell IDList Array" data format. But there seems to be next to zero documentation on what this format actually does. After some searching, I found this page on advanced data transfer which said that a Shell IDList Array is a pointer to a CIDA structure. This CIDA structure then contains the number of PIDLs and a list of offsets to them. So what the hell is a PIDL? After some more searching, this page sort of implies it's a pointer to an ITEMIDLIST, which itself contains a single member that is a list of SHITEMIDs. My next idea was to try dragging a file from another application with virtual files. I just got a MemoryStream back for this format. At least I know what class to provide for the thing, but that doesn't help at all in explaining what to put in it. So now that that's explained, I still have no idea how to create one of these things so that it's valid. There are two real questions here: What is a valid "abID" member for a SHITEMID? These virtual files only exist within my program; will the thing they are dragged to pass the "abID" back later when it executes GetData? Do they have to be system-unique? And why are there two levels of lists, a list of PIDLs where each PIDL has a list of SHITEMIDs? I'm assuming one of them is one per file, but what's the other one for? Multiple IDs for the same file? Any help or even links that explain what I should be doing would be greatly appreciated.

    Read the article

  • How can I render an in-memory UIViewController's view Landscape?

    - by Aaron
    I'm trying to render an in-memory (but not in hierarchy, yet) UIViewController's view into an in-memory image buffer so I can do some interesting transition animations. However, when I render the UIViewController's view into that buffer, it is always rendering as though the controller is in Portrait orientation, no matter the orientation of the rest of the app. How do I clue this controller in? My code in RootViewController looks like this: MyUIViewController* controller = [[MyUIViewController alloc] init]; int width = self.view.frame.size.width; int height = self.view.frame.size.height; int bitmapBytesPerRow = width * 4; unsigned char *offscreenData = calloc(bitmapBytesPerRow * height, sizeof(unsigned char)); CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); CGContextRef offscreenContext = CGBitmapContextCreate(offscreenData, width, height, 8, bitmapBytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast); CGContextTranslateCTM(offscreenContext, 0.0f, height); CGContextScaleCTM(offscreenContext, 1.0f, -1.0f); [(CALayer*)[controller.view layer] renderInContext:offscreenContext]; At that point, the offscreen memory buffer's contents are portrait-oriented, even when the window is in landscape orientation. Ideas?

    Read the article

  • How can I store a large amount of data from a database to XML (memory problem)?

    - by Andrija
    First, I had a problem getting the data from the database: it took too much memory and failed. I've set -Xmx1500M and I'm using a scrolling ResultSet, so that was taken care of. Now I need to make an XML file from the data, but I can't put it all in one file. At the moment, I'm doing it like this: while(rs.next()){ i++; xmlStringBuilder.append("\n\t<row>"); xmlStringBuilder.append("\n\t\t<ID>" + Util.transformToHTML(rs.getInt("id")) + "</ID>"); xmlStringBuilder.append("\n\t\t<JED_ID>" + Util.transformToHTML(rs.getInt("jed_id")) + "</JED_ID>"); xmlStringBuilder.append("\n\t\t<IME_PJ>" + Util.transformToHTML(rs.getString("ime_pj")) + "</IME_PJ>"); /* etc. */ xmlStringBuilder.append("\n\t</row>"); if (i%100000 == 0){ /* stores the data to a file with the name i.xml */ storeKBR(xmlStringBuilder.toString(), i); xmlStringBuilder = null; xmlStringBuilder = new StringBuilder(); } } and it works; I get twelve 100 MB files. Now, what I'd like to do is have all that data in one file (which I would then compress), but if I just remove the if part, I run out of memory. I thought about writing to a file, closing it, then reopening it, but that wouldn't gain me much since I'd have to load the file into memory when I open it. P.S. If there's a better way to release the Builder, do let me know :)

    Read the article

  • Can I use Eclipse JDT to create new 'working copies' of source files in memory only?

    - by RYates
    I'm using Eclipse JDT to build a Java refactoring platform, for exploring different refactorings in memory before choosing one and saving it. I can create collections of working copies of the source files, edit them in memory, and commit the changes to disk using the JDT framework. However, I also want to generate new 'working copy' source files in memory as part of refactorings, and only create the corresponding real source file if I commit the working copy. I have seen various hints that this is possible, e.g. http://www.jarvana.com/jarvana/view/org/eclipse/jdt/doc/isv/3.3.0-v20070613/isv-3.3.0-v20070613.jar!/guide/jdt%5Fapi%5Fmanip.htm says "Note that the compilation unit does not need to exist in the Java model in order for a working copy to be created". So far I have only been able to create a new real file, i.e. ICompilationUnit newICompilationUnit = myPackage.createCompilationUnit(newName, "package piffle; public class Baz{private int i=0;}", false, null); This is not what I want. Does anyone know how to create a new 'working copy' source file that does not appear in my file system until I commit it? Or any other mechanism to achieve the same thing?

    Read the article

  • Mocking non-virtual methods in C++ without editing production code?

    - by wk1989
    Hello, I am a fairly new software developer currently working on adding unit tests to an existing C++ project that started years ago. Due to a non-technical reason, I'm not allowed to modify any existing code. The base class of all my modules has a bunch of methods for Setting/Getting data and communicating with other modules. Since I just want to unit test each individual module, I want to be able to use canned values for all my inter-module communication methods. For example, for a method Ping() which checks if another module is active, I want to have it return true or false based on what kind of test I'm doing. I've been looking into Google Test and Google Mock, and it does support mocking non-virtual methods. However, the approach described (http://code.google.com/p/googlemock/wiki/CookBook#Mocking_Nonvirtual_Methods) requires me to "templatize" the original methods to take in either real or mock objects. I can't go and templatize my methods in the base class due to the requirement mentioned earlier, so I need some other way of mocking these non-virtual methods. Basically, the methods I want to mock are in some base class; the modules I want to unit test and create mocks of are derived classes of that base class. There are intermediate modules in between my base Module class and the modules that I want to test. I would appreciate any advice! Thanks, JW EDIT: A more concrete example: My base class is, let's say, rootModule, and the module I want to test is leafModule. There is an intermediate module which inherits from rootModule, and leafModule inherits from this intermediate module. In my leafModule, I want to test the doStuff() method, which calls the non-virtual GetStatus(moduleName) defined in the rootModule class. I need to somehow make GetStatus() return a chosen canned value. Mocking is new to me, so is using mock objects even the right approach?
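
    For reference, a rough sketch of the "templatize the code under test" idea from the Google Mock cookbook, reusing the names from the question (rootModule, GetStatus, doStuff); the fake class and doStuffImpl are illustrative test-side constructs, and whether the real doStuff() may be turned into a template is exactly the constraint being discussed.

        // Sketch only: a hand-rolled fake plus a template seam living entirely in
        // test code, so the production class itself stays untouched.
        #include <cassert>
        #include <string>

        // Stand-in for the real, unmodifiable production class.
        class rootModule {
        public:
            bool GetStatus(const std::string& moduleName);   // non-virtual, defined elsewhere
        };

        // A fake with the same call signature but canned behaviour.
        class FakeRootModule {
        public:
            bool cannedStatus = false;
            bool GetStatus(const std::string&) { return cannedStatus; }
        };

        // The logic under test, parameterized on the type providing GetStatus, so
        // either the real module or the fake can be dropped in.
        template <typename Module>
        bool doStuffImpl(Module& m) {
            return m.GetStatus("otherModule");   // behave according to the reported status
        }

        int main() {
            FakeRootModule fake;
            fake.cannedStatus = true;
            assert(doStuffImpl(fake) == true);   // test against the canned value

            fake.cannedStatus = false;
            assert(doStuffImpl(fake) == false);
            return 0;
        }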

    Read the article

  • Image.Save(..) throws a GDI+ exception because the memory stream is closed.

    - by Pure.Krome
    Hi folks, I've got some binary data which I want to save as an image. When I try to save the image, it throws an exception if the memory stream used to create the image was closed before the save. The reason I do this is because I'm dynamically creating images, and as such I need to use a memory stream. This is the code: [TestMethod] public void TestMethod1() { /* Grab the binary data. */ byte[] data = File.ReadAllBytes("Chick.jpg"); /* Read in the data but do not close, before using the stream. */ Stream originalBinaryDataStream = new MemoryStream(data); Bitmap image = new Bitmap(originalBinaryDataStream); image.Save(@"c:\test.jpg"); originalBinaryDataStream.Dispose(); /* Now let's use a nice dispose, etc... */ Bitmap image2; using (Stream originalBinaryDataStream2 = new MemoryStream(data)) { image2 = new Bitmap(originalBinaryDataStream2); } image2.Save(@"C:\temp\pewpew.jpg"); /* This throws the GDI+ exception. */ } Does anyone have any suggestions as to how I could save an image with the stream closed? I cannot rely on the developers to remember to close the stream after the image is saved. In fact, the developer would have NO IDEA that the image was generated using a memory stream (because it happens in some other code, elsewhere). I'm really confused :(

    Read the article

  • 'The RPC server is unavailable' when converting a physical ISA/Forefront TMG machine to virtual (P2V) in SCVMM

    - by Goran B.
    When I try to convert a physical ISA/TMG machine to virtual using SCVMM, I keep getting an error in the Collect machine configuration step (the 'Scan Now' button): VMM is unable to complete the request. The connection to the agent MACHINE_NAME was lost. Ensure that the computer MACHINE_NAME exists on the network, WMI service and the agent are installed and running and that a firewall is not blocking HTTP and WMI traffic. ID: 3157 Details: The RPC server is unavailable (0x800706BA) Firewall rules allow RPC traffic from the SCVMM machine to the ISA/TMG machine.

    Read the article

  • Introducing Oracle VM Server for SPARC

    - by Honglin Su
    As you are watching Oracle's Virtualization Strategy Webcast and exploring the great virtualization offerings of Oracle VM product line, I'd like to introduce Oracle VM Server for SPARC --  highly efficient, enterprise-class virtualization solution for Sun SPARC Enterprise Systems with Chip Multithreading (CMT) technology. Oracle VM Server for SPARC, previously called Sun Logical Domains, leverages the built-in SPARC hypervisor to subdivide supported platforms' resources (CPUs, memory, network, and storage) by creating partitions called logical (or virtual) domains. Each logical domain can run an independent operating system. Oracle VM Server for SPARC provides the flexibility to deploy multiple Oracle Solaris operating systems simultaneously on a single platform. Oracle VM Server also allows you to create up to 128 virtual servers on one system to take advantage of the massive thread scale offered by the CMT architecture. Oracle VM Server for SPARC integrates both the industry-leading CMT capability of the UltraSPARC T1, T2 and T2 Plus processors and the Oracle Solaris operating system. This combination helps to increase flexibility, isolate workload processing, and improve the potential for maximum server utilization. Oracle VM Server for SPARC delivers the following: Leading Price/Performance - The low-overhead architecture provides scalable performance under increasing workloads without additional license cost. This enables you to meet the most aggressive price/performance requirement Advanced RAS - Each logical domain is an entirely independent virtual machine with its own OS. It supports virtual disk mutipathing and failover as well as faster network failover with link-based IP multipathing (IPMP) support. Moreover, it's fully integrated with Solaris FMA (Fault Management Architecture), which enables predictive self healing. CPU Dynamic Resource Management (DRM) - Enable your resource management policy and domain workload to trigger the automatic addition and removal of CPUs. This ability helps you to better align with your IT and business priorities. Enhanced Domain Migrations - Perform domain migrations interactively and non-interactively to bring more flexibility to the management of your virtualized environment. Improve active domain migration performance by compressing memory transfers and taking advantage of cryptographic acceleration hardware. These methods provide faster migration for load balancing, power saving, and planned maintenance. Dynamic Crypto Control - Dynamically add and remove cryptographic units (aka MAU) to and from active domains. Also, migrate active domains that have cryptographic units. Physical-to-virtual (P2V) Conversion - Quickly convert an existing SPARC server running the Oracle Solaris 8, 9 or 10 OS into a virtualized Oracle Solaris 10 image. Use this image to facilitate OS migration into the virtualized environment. Virtual I/O Dynamic Reconfiguration (DR) - Add and remove virtual I/O services and devices without needing to reboot the system. CPU Power Management - Implement power saving by disabling each core on a Sun UltraSPARC T2 or T2 Plus processor that has all of its CPU threads idle. Advanced Network Configuration - Configure the following network features to obtain more flexible network configurations, higher performance, and scalability: Jumbo frames, VLANs, virtual switches for link aggregations, and network interface unit (NIU) hybrid I/O. 
Official Certification Based On Real-World Testing - Use Oracle VM Server for SPARC with the most sophisticated enterprise workloads under real-world conditions, including Oracle Real Application Clusters (RAC). Affordable, Full-Stack Enterprise Class Support - Obtain worldwide support from Oracle for the entire virtualization environment and workloads together. The support covers hardware, firmware, OS, virtualization, and the software stack. SPARC Server Virtualization Oracle offers a full portfolio of virtualization solutions to address your needs. SPARC is the leading platform to have the hard partitioning capability that provides the physical isolation needed to run independent operating systems. Many customers have already used Oracle Solaris Containers for application isolation. Oracle VM Server for SPARC provides another important feature with OS isolation. This gives you the flexibility to deploy multiple operating systems simultaneously on a single Sun SPARC T-Series server with finer granularity for computing resources.  For SPARC CMT processors, the natural level of granularity is an execution thread, not a time-sliced microsecond of execution resources. Each CPU thread can be treated as an independent virtual processor. The scheduler is naturally built into the CPU for lower overhead and higher performance. Your organizations can couple Oracle Solaris Containers and Oracle VM Server for SPARC with the breakthrough space and energy savings afforded by Sun SPARC Enterprise systems with CMT technology to deliver a more agile, responsive, and low-cost environment. Management with Oracle Enterprise Manager Ops Center The Oracle Enterprise Manager Ops Center Virtualization Management Pack provides full lifecycle management of virtual guests, including Oracle VM Server for SPARC and Oracle Solaris Containers. It helps you streamline operations and reduce downtime. Together, the Virtualization Management Pack and the Ops Center Provisioning and Patch Automation Pack provide an end-to-end management solution for physical and virtual systems through a single web-based console. This solution automates the lifecycle management of physical and virtual systems and is the most effective systems management solution for Oracle's Sun infrastructure. Ease of Deployment with Configuration Assistant The Oracle VM Server for SPARC Configuration Assistant can help you easily create logical domains. After gathering the configuration data, the Configuration Assistant determines the best way to create a deployment to suit your requirements. The Configuration Assistant is available as both a graphical user interface (GUI) and terminal-based tool. Oracle Solaris Cluster HA Support The Oracle Solaris Cluster HA for Oracle VM Server for SPARC data service provides a mechanism for orderly startup and shutdown, fault monitoring and automatic failover of the Oracle VM Server guest domain service. In addition, applications that run on a logical domain, as well as its resources and dependencies can be controlled and managed independently. These are managed as if they were running in a classical Solaris Cluster hardware node. Supported Systems Oracle VM Server for SPARC is supported on all Sun SPARC Enterprise Systems with CMT technology. 
UltraSPARC T2 Plus Systems ·   Sun SPARC Enterprise T5140 Server ·   Sun SPARC Enterprise T5240 Server ·   Sun SPARC Enterprise T5440 Server ·   Sun Netra T5440 Server ·   Sun Blade T6340 Server Module ·   Sun Netra T6340 Server Module UltraSPARC T2 Systems ·   Sun SPARC Enterprise T5120 Server ·   Sun SPARC Enterprise T5220 Server ·   Sun Netra T5220 Server ·   Sun Blade T6320 Server Module ·   Sun Netra CP3260 ATCA Blade Server Note that UltraSPARC T1 systems are supported on earlier versions of the software.Sun SPARC Enterprise Systems with CMT technology come with the right to use (RTU) of Oracle VM Server, and the software is pre-installed. If you have the systems under warranty or with support, you can download the software and system firmware as well as their updates. Oracle Premier Support for Systems provides fully-integrated support for your server hardware, firmware, OS, and virtualization software. Visit oracle.com/support for information about Oracle's support offerings for Sun systems. For more information about Oracle's virtualization offerings, visit oracle.com/virtualization.

    Read the article

  • Windows Azure VMs - New "Stopped" VM Options Provide Cost-effective Flexibility for On-Demand Workloads

    - by KeithMayer
    Originally posted on: http://geekswithblogs.net/KeithMayer/archive/2013/06/22/windows-azure-vms---new-stopped-vm-options-provide-cost-effective.aspxDidn’t make it to TechEd this year? Don’t worry!  This month, we’ll be releasing a new article series that highlights the Best of TechEd announcements and technical information for IT Pros.  Today’s article focuses on a new, much-heralded enhancement to Windows Azure Infrastructure Services to make it more cost-effective for spinning VMs up and down on-demand on the Windows Azure cloud platform. NEW! VMs that are shutdown from the Windows Azure Management Portal will no longer continue to accumulate compute charges while stopped! Previous to this enhancement being available, the Azure platform maintained fabric resource reservations for VMs, even in a shutdown state, to ensure consistent resource availability when starting those VMs in the future.  And, this meant that VMs had to be exported and completely deprovisioned when not in use to avoid compute charges. In this article, I'll provide more details on the scenarios that this enhancement best fits, and I'll also review the new options and considerations that we now have for performing safe shutdowns of Windows Azure VMs. Which scenarios does the new enhancement best fit? Being able to easily shutdown VMs from the Windows Azure Management Portal without continued compute charges is a great enhancement for certain cloud use cases, such as: On-demand dev/test/lab environments - Freely start and stop lab VMs so that they are only accumulating compute charges when being actively used.  "Bursting" load-balanced web applications - Provision a number of load-balanced VMs, but keep the minimum number of VMs running to support "normal" loads. Easily start-up the remaining VMs only when needed to support peak loads. Disaster Recovery - Start-up "cold" VMs when needed to recover from disaster scenarios. BUT ... there is a consideration to keep in mind when using the Windows Azure Management Portal to shutdown VMs: although performing a VM shutdown via the Windows Azure Management Portal causes that VM to no longer accumulate compute charges, it also deallocates the VM from fabric resources to which it was previously assigned.  These fabric resources include compute resources such as virtual CPU cores and memory, as well as network resources, such as IP addresses.  This means that when the VM is later started after being shutdown from the portal, the VM could be assigned a different IP address or placed on a different compute node within the fabric. In some cases, you may want to shutdown VMs using the old approach, where fabric resource assignments are maintained while the VM is in a shutdown state.  Specifically, you may wish to do this when temporarily shutting down or restarting a "7x24" VM as part of a maintenance activity.  Good news - you can still revert back to the old VM shutdown behavior when necessary by using the alternate VM shutdown approaches listed below.  Let's walk through each approach for performing a VM Shutdown action on Windows Azure so that we can understand the benefits and considerations of each... How many ways can I shutdown a VM? 
In Windows Azure Infrastructure Services, there's three general ways that can be used to safely shutdown VMs: Shutdown VM via Windows Azure Management Portal Shutdown Guest Operating System inside the VM Stop VM via Windows PowerShell using Windows Azure PowerShell Module Although each of these options performs a safe shutdown of the guest operation system and the VM itself, each option handles the VM shutdown end state differently. Shutdown VM via Windows Azure Management Portal When clicking the Shutdown button at the bottom of the Virtual Machines page in the Windows Azure Management Portal, the VM is safely shutdown and "deallocated" from fabric resources.  Shutdown button on Virtual Machines page in Windows Azure Management Portal  When the shutdown process completes, the VM will be shown on the Virtual Machines page with a "Stopped ( Deallocated )" status as shown in the figure below. Virtual Machine in a "Stopped (Deallocated)" Status "Deallocated" means that the VM configuration is no longer being actively associated with fabric resources, such as virtual CPUs, memory and networks. In this state, the VM will not continue to allocate compute charges, but since fabric resources are deallocated, the VM could receive a different internal IP address ( called "Dynamic IPs" or "DIPs" in Windows Azure ) the next time it is started.  TIP: If you are leveraging this shutdown option and consistency of DIPs is important to applications running inside your VMs, you should consider using virtual networks with your VMs.  Virtual networks permit you to assign a specific IP Address Space for use with VMs that are assigned to that virtual network.  As long as you start VMs in the same order in which they were originally provisioned, each VM should be reassigned the same DIP that it was previously using. What about consistency of External IP Addresses? Great question! External IP addresses ( called "Virtual IPs" or "VIPs" in Windows Azure ) are associated with the cloud service in which one or more Windows Azure VMs are running.  As long as at least 1 VM inside a cloud service remains in a "Running" state, the VIP assigned to a cloud service will be preserved.  If all VMs inside a cloud service are in a "Stopped ( Deallocated )" status, then the cloud service may receive a different VIP when VMs are next restarted. TIP: If consistency of VIPs is important for the cloud services in which you are running VMs, consider keeping one VM inside each cloud service in the alternate VM shutdown state listed below to preserve the VIP associated with the cloud service. Shutdown Guest Operating System inside the VM When performing a Guest OS shutdown or restart ( ie., a shutdown or restart operation initiated from the Guest OS running inside the VM ), the VM configuration will not be deallocated from fabric resources. In the figure below, the VM has been shutdown from within the Guest OS and is shown with a "Stopped" VM status rather than the "Stopped ( Deallocated )" VM status that was shown in the previous figure. Note that it may require a few minutes for the Windows Azure Management Portal to reflect that the VM is in a "Stopped" state in this scenario, because we are performing an OS shutdown inside the VM rather than through an Azure management endpoint. Virtual Machine in a "Stopped" Status VMs shown in a "Stopped" status will continue to accumulate compute charges, because fabric resources are still being reserved for these VMs.  
However, this also means that DIPs and VIPs are preserved for VMs in this state, so you don't have to worry about VMs and cloud services getting different IP addresses when they are started in the future. Stop VM via Windows PowerShell In the latest version of the Windows Azure PowerShell Module, a new -StayProvisioned parameter has been added to the Stop-AzureVM cmdlet. This new parameter provides the flexibility to choose the VM configuration end result when stopping VMs using PowerShell: When running the Stop-AzureVM cmdlet without the -StayProvisioned parameter specified, the VM will be safely stopped and deallocated; that is, the VM will be left in a "Stopped ( Deallocated )" status just like the end result when a VM Shutdown operation is performed via the Windows Azure Management Portal.  When running the Stop-AzureVM cmdlet with the -StayProvisioned parameter specified, the VM will be safely stopped but fabric resource reservations will be preserved; that is the VM will be left in a "Stopped" status just like the end result when performing a Guest OS shutdown operation. So, with PowerShell, you can choose how Windows Azure should handle VM configuration and fabric resource reservations when stopping VMs on a case-by-case basis. TIP: It's important to note that the -StayProvisioned parameter is only available in the latest version of the Windows Azure PowerShell Module.  So, if you've previously downloaded this module, be sure to download and install the latest version to get this new functionality. Want to Learn More about Windows Azure Infrastructure Services? To learn more about Windows Azure Infrastructure Services, be sure to check-out these additional FREE resources: Become our next "Early Expert"! Complete the Early Experts "Cloud Quest" and build a multi-VM lab network in the cloud for FREE!  Build some cool scenarios! Check out our list of over 20+ Step-by-Step Lab Guides based on key scenarios that IT Pros are implementing on Windows Azure Infrastructure Services TODAY!  Looking forward to seeing you in the Cloud! - Keith Build Your Lab! Download Windows Server 2012 Don’t Have a Lab? Build Your Lab in the Cloud with Windows Azure Virtual Machines Want to Get Certified? Join our Windows Server 2012 "Early Experts" Study Group

    Read the article

  • What is the correct way to set up my virtual hosts in Apache to serve 2 different domains?

    - by Michael
    I have a current website on an Apache server (foo.com), and I want the same server to serve a new site (bar.com). Is this the correct way to set up my virtual hosts? <VirtualHost *:80> ServerName foo.com ServerAlias www.foo.com Include conf.d/foo.conf </VirtualHost> <VirtualHost *:80> ServerName bar.com ServerAlias www.bar.com Include conf.d/bar.conf </VirtualHost> I'm hesitant to do an /etc/init.d/httpd reload without someone double checking! :)

    Read the article

  • Come play in the SQL Server 2008 R2 Hosted Trial virtual lab!

    - by ssqa.net
    In continuation to SQL_Server_2008_R2 release date announcement you can access a complete, integrated Microsoft SQL Server 2008 R2, SharePoint 2010, and Office 2010 environment… right from your desktop. SQL Server 2008 R2 Hosted Trial Our Hosted Trial makes it easy for you to experience new features without any need for configuration or additional work. Register now to try out up to seven labs: SQL Server 2008 R2 – Multi Server Management SQL Server 2008 R2 – PowerPivot SQL Server 2008 R2 – Reporting...(read more)

    Read the article

  • Running Solaris 11 as a control domain on a T2000

    - by jsavit
    There is increased adoption of Oracle Solaris 11, and many customers are deploying it on systems that previously ran Solaris 10. That includes older T1-processor based systems like T1000 and T2000. Even though they are old (from 2005) and don't have the performance of current SPARC servers, they are still functional, stable servers that customers continue to operate. One reason to install Solaris 11 on them is that older machines are attractive for testing OS upgrades before updating current, production systems. Normally this does not present a challenge, because Solaris 11 runs on any T-series or M-series SPARC server. One scenario adds a complication: running Solaris 11 in a control domain on a T1000 or T2000 hosting logical domains. Solaris 11 pre-installed Oracle VM Server for SPARC incompatible with T1 Unlike Solaris 10, Solaris 11 comes with Oracle VM Server for SPARC preinstalled. The ldomsmanager package contains the logical domains manager for Oracle VM Server for SPARC 2.2, which requires a SPARC T2, T2+, T3, or T4 server. It does not work with T1-processor systems, which are only supported by LDoms Manager 1.2 and earlier. The following screenshot shows what happens (bold font) if you try to use Oracle VM Server for SPARC 2.x commands in a Solaris 11 control domain. The commands were issued in a control domain on a T2000 that previously ran Solaris 10. We also display the version of the logical domains manager installed in Solaris 11: root@t2000 psrinfo -vp The physical processor has 4 virtual processors (0-3) UltraSPARC-T1 (chipid 0, clock 1200 MHz) # prtconf|grep T SUNW,Sun-Fire-T200 # ldm -V Failed to connect to logical domain manager: Connection refused # pkg info ldomsmanager Name: system/ldoms/ldomsmanager Summary: Logical Domains Manager Description: LDoms Manager - Virtualization for SPARC T-Series Category: System/Virtualization State: Installed Publisher: solaris Version: 2.2.0.0 Build Release: 5.11 Branch: 0.175.0.8.0.3.0 Packaging Date: May 25, 2012 10:20:48 PM Size: 2.86 MB FMRI: pkg://solaris/system/ldoms/[email protected],5.11-0.175.0.8.0.3.0:20120525T222048Z The 2.2 version of the logical domains manager will have to be removed, and 1.2 installed, in order to use this as a control domain. Preparing to change - create a new boot environment Before doing anything else, lets create a new boot environment: # beadm list BE Active Mountpoint Space Policy Created -- ------ ---------- ----- ------ ------- solaris NR / 2.14G static 2012-09-25 10:32 # beadm create solaris-1 # beadm activate solaris-1 # beadm list BE Active Mountpoint Space Policy Created -- ------ ---------- ----- ------ ------- solaris N / 4.82M static 2012-09-25 10:32 solaris-1 R - 2.14G static 2012-09-29 11:40 # init 0 Normally an init 6 to reboot would have been sufficient, but in the next step I reset the system anyway in order to put the system in factory default mode for a "clean" domain configuration. Preparing to change - reset to factory default There was a leftover domain configuration on the T2000, so I reset it to the factory install state. Since the ldm command is't working yet, it can't be done from the control domain, so I did it by logging onto to the service processor: $ ssh -X admin@t2000-sc Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved. 
Oracle Advanced Lights Out Manager CMT v1.7.9 Please login: admin Please Enter password: ******** sc> showhost Sun-Fire-T2000 System Firmware 6.7.10 2010/07/14 16:35 Host flash versions: OBP 4.30.4.b 2010/07/09 13:48 Hypervisor 1.7.3.c 2010/07/09 15:14 POST 4.30.4.b 2010/07/09 14:24 sc> bootmode config="factory-default" sc> poweroff Are you sure you want to power off the system [y/n]? y SC Alert: SC Request to Power Off Host. SC Alert: Host system has shut down. sc> poweron SC Alert: Host System has Reset At this point I rebooted into the new Solaris 11 boot environment, and Solaris commands showed it was running on the factory default configuration of a single domain owning all 32 CPUs and 32GB of RAM (that's what it looked like in 2005.) # psrinfo -vp The physical processor has 8 cores and 32 virtual processors (0-31) The core has 4 virtual processors (0-3) The core has 4 virtual processors (4-7) The core has 4 virtual processors (8-11) The core has 4 virtual processors (12-15) The core has 4 virtual processors (16-19) The core has 4 virtual processors (20-23) The core has 4 virtual processors (24-27) The core has 4 virtual processors (28-31) UltraSPARC-T1 (chipid 0, clock 1200 MHz) # prtconf|grep Mem Memory size: 32640 Megabytes Note that the older processor has 4 virtual CPUs per core, while current processors have 8 per core. Remove ldomsmanager 2.2 and install the 1.2 version The Solaris 11 pkg command is now used to remove the 2.2 version that shipped with Solaris 11: # pkg uninstall ldomsmanager Packages to remove: 1 Create boot environment: No Create backup boot environment: No Services to change: 2 PHASE ACTIONS Removal Phase 130/130 PHASE ITEMS Package State Update Phase 1/1 Package Cache Update Phase 1/1 Image State Update Phase 2/2 Finally, LDoms 1.2 installed via its install script, the same way it was done years ago: # unzip LDoms-1_2-Integration-10.zip # cd LDoms-1_2-Integration-10/Install/ # ./install-ldm Welcome to the LDoms installer. You are about to install the Logical Domains Manager package that will enable you to create, destroy and control other domains on your system. Given the capabilities of the LDoms domain manager, you can now change the security configuration of this Solaris instance using the Solaris Security Toolkit. ... ... normal install messages omitted ... The Solaris Security Toolkit applies to Solaris 10, and cannot be used in Solaris 11 (in which several things hardened by the Toolkit are already hardened by default), so answer b in the choice below: You are about to install the Logical Domains Manager package that will enable you to create, destroy and control other domains on your system. Given the capabilities of the LDoms domain manager, you can now change the security configuration of this Solaris instance using the Solaris Security Toolkit. Select a security profile from this list: a) Hardened Solaris configuration for LDoms (recommended) b) Standard Solaris configuration c) Your custom-defined Solaris security configuration profile Enter a, b, or c [a]: b ... other install messages omitted for brevity... After install I ensure that the necessary services are enabled, and verify the version of the installed LDoms Manager: # svcs ldmd STATE STIME FMRI online 22:00:36 svc:/ldoms/ldmd:default # svcs vntsd STATE STIME FMRI disabled Aug_19 svc:/ldoms/vntsd:default # ldm -V Logical Domain Manager (v 1.2-debug) Hypervisor control protocol v 1.3 Using Hypervisor MD v 1.1 System PROM: Hypervisor v. 1.7.3. 
@(#)Hypervisor 1.7.3.c 2010/07/09 15:14\015 OpenBoot v. 4.30.4. @(#)OBP 4.30.4.b 2010/07/09 13:48 Set up control domain and domain services At this point we have a functioning LDoms 1.2 environment that can be configured in the usual fashion. One difference is that LDoms 1.2 behavior had 'delayed configuration mode (as expected) during initial configuration before rebooting the control domain. Another minor difference with a Solaris 11 control domain is that you define virtual switches using the 'vanity name' of the network interface, rather than the hardware driver name as in Solaris 10. # ldm list ------------------------------------------------------------------------------ Notice: the LDom Manager is running in configuration mode. Configuration and resource information is displayed for the configuration under construction; not the current active configuration. The configuration being constructed will only take effect after it is downloaded to the system controller and the host is reset. ------------------------------------------------------------------------------ NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-c-- SP 32 32640M 3.2% 4d 2h 50m # ldm add-vdiskserver primary-vds0 primary # ldm add-vconscon port-range=5000-5100 primary-vcc0 primary # ldm add-vswitch net-dev=net0 primary-vsw0 primary # ldm set-mau 2 primary # ldm set-vcpu 8 primary # ldm set-memory 4g primary # ldm add-config initial # ldm list-spconfig factory-default initial [current] That's it, really. After reboot, we are ready to install guest domains. Summary - new wine in old bottles This example shows that (new) Solaris 11 can be installed on (old) T2000 servers and used as a control domain. The main activity is to remove the preinstalled Oracle VM Server for 2.2 and install Logical Domains 1.2 - the last version of LDoms to support T1-processor systems. I tested Solaris 10 and Solaris 11 guest domains running on this server and they worked without any surprises. This is a viable way to get further into Solaris 11 adoption, even on older T-series equipment.

    Read the article

  • How do I create a virtual directory in IIS?

    - by anil namde
    I have a newly installed machine on which I have installed IIS. On each of the sites I am installing, requests are not reaching the site pages. However, if I send a request with the path of an HTML/image file explicitly, the resource is accessible. Also, I do not get the "Create Virtual Directory" option in IIS. I only have the option to create a web site. What went wrong?

    Read the article

< Previous Page | 238 239 240 241 242 243 244 245 246 247 248 249  | Next Page >