Search Results

Search found 11930 results on 478 pages for 'shared machines'.


  • Deploying virtual machines - Windows Guest/Linux Host or vice versa?

    - by samoz
    I'm looking to deploy several virtual machines for users. They need access to both Windows and Linux, and they need to use the computer's graphics card (for Photoshop, modeling, etc.) under Windows. My question is: will an Ubuntu host/Windows guest or a Windows host/Ubuntu guest be faster? I'm somewhat worried about Windows accumulating a cluttered registry and slowing down, but on the other hand, a Windows host would have direct access to the hardware (unless I'm just unaware of how to grant hardware access to a guest). Does the choice of software (VMware or VirtualBox) affect the choice?

    Read the article

  • Different color prompts for different machines when using terminal/ssh?

    - by bcrawl
    Hi, I have 5 machines I constantly ssh into to do work. It's getting increasingly frustrating when I issue the wrong commands on the wrong boxes. Luckily I haven't done anything bad yet. I wanted to know if there is any hack I can hardcode that will display my prompt in different colors based on the machine I am sshed into? Such as blue for desktop1, purple for laptop, red for server, etc. Is this possible? Currently I am using this command, export PS1="\e[0;31m[\u@\h \W]\$ \e[m ", taken from http://www.cyberciti.biz/faq/bash-shell-change-the-color-of-my-shell-prompt-under-linux-or-unix/ but it obviously doesn't work across ssh. Also, if you have any other cool bash tips for easing my sight, that would be wonderful. I got this tip, which colors the man pages: http://linuxtidbits.wordpress.com/2009/03/23/less-colors-for-man-pages/
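
    One common approach, sketched here under the assumption that every box runs bash and that your hostnames match those above (adjust names and color codes as needed): let each machine color its own prompt from its ~/.bashrc, so the right color appears no matter where you ssh from.

        # In each machine's ~/.bashrc: pick a color from the short hostname.
        case $(hostname -s) in
          desktop1) C='\e[0;34m' ;;   # blue
          laptop)   C='\e[0;35m' ;;   # purple
          server*)  C='\e[0;31m' ;;   # red
          *)        C=''         ;;
        esac
        export PS1="\[${C}\][\u@\h \W]\$ \[\e[m\] "

    Wrapping the escape codes in \[ \] also keeps bash's line-length accounting correct, which the prompt quoted above does not do.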

    Read the article

  • Active Directory Kerberos OS X problems

    - by Temotodochi
    I'll try to keep this short but informative. I'm currently unable to bind OS X Lion (10.7.4) machines to our AD: OS X Kerberos (Heimdal) is unable to locate the KDC service. However, I can bind Linux and Windows machines to the AD without any problems on the same network.

    - AD controls the domain DNS, and all the relevant _kerberos._tcp.x.domain.com and _kpasswd SRV DNS records are there and resolve fine when tried from the OS X machines.
    - The defined ports are open for the service and manually accessible from OS X.
    - When I try kinit on OS X, the first auth goes through (wrong passwords fail instantly), but when supplied with the correct password, kinit fails after some waiting with "unable to reach KDC".
    - All machines run NTP and have the correct time.
    - During testing, the network is not firewalled between the machines.
    - Linux and Windows machines have no problems whatsoever.
    - I have tried with and without /etc/krb5.conf (OS X by default does not need it); in the krb5.conf I used a working config from one of our Linux machines.
    - dsconfigad fails with a simple "connection failed to the directory server".

    I'm a bit baffled by this. To OS X it is as if the KDC is nowhere to be found, while at the same time my test machines with Windows 7 and some Linux (CentOS 6 & Debian 6) boxes have no problems whatsoever. Same network, same configurations. I'm missing some vital piece of configuration somewhere, and I can't find out what it is.
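
    Two things worth sketching here, both assumptions rather than a confirmed fix: first, double-check what the OS X box actually resolves; second, since Lion's Heimdal is known to struggle when large AD tickets fragment over UDP, try pinning the KDC over TCP in /etc/krb5.conf (realm and hostname below are placeholders).

        dig +short SRV _kerberos._tcp.x.domain.com

        [libdefaults]
            default_realm = X.DOMAIN.COM

        [realms]
            X.DOMAIN.COM = {
                kdc = tcp/dc1.x.domain.com:88
            }

    Heimdal's tcp/ prefix forces TCP for that KDC; if kinit then succeeds, the failure was UDP fragmentation rather than discovery.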

    Read the article

  • How to set up and manage a shared hosting server on Windows Server 2008 R2 Web Edition?

    - by Motivated Student
    Background: I am a newbie to Windows Server 2008 R2 Web Edition (and other editions as well). I have a static IP, a very fast internet connection, a server (PRIMERGY TX100 S1) and Windows Server 2008 R2 Web Edition (trial version). The objective is to set up the server as a shared hosting server such that each of my friends has a private account to:

    - manage his/her domain
    - upload his/her web content to the server using encrypted FTP
    - manage database administration
    - manage certificates
    - etc.

    Questions: Is there a good reference for learning how to set up and manage a shared hosting server on Windows Server 2008 R2? What are the rough steps I have to take to accomplish my objective?
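
    As a rough sketch of the site-per-user part (IIS 7.5's appcmd ships with every install; names and paths below are examples only, and FTP isolation and feature delegation are separate steps):

        %windir%\system32\inetsrv\appcmd add site /name:"friend1.example.com" ^
            /bindings:http/*:80:friend1.example.com ^
            /physicalPath:"D:\sites\friend1"

    From there, IIS Manager's "IIS Manager Users" plus feature delegation give each friend a private login scoped to his or her own site.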

    Read the article

  • VMware Workstation on Linux: Dropping core files in a shared folder...

    - by kclittle
    I'm using VMware Workstation 6.0.2 on a RHEL 4.6 host. The VMs are MontaVista CGE 5.0 (2.6.21 kernel). I'm trying to get applications running in the VMs to drop any core files on an HGFS volume, i.e. in a "shared folder". The core files get created per the path and format given in /proc/sys/kernel/core_pattern, but they are always zero length. If I change the path to a local path (on a virtual disk in the VM), all is well. Any ideas what I have to do to get the core files written into a shared folder? Thanks for your help!
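
    Since kernel 2.6.19, core_pattern can pipe the dump to a user-space helper, which sidesteps whatever the direct kernel write trips over on HGFS. A sketch, assuming the share is mounted at /mnt/hgfs/shared (the path is an example):

        cat > /usr/local/bin/core-to-hgfs.sh <<'EOF'
        #!/bin/sh
        # argv comes from the %e (executable) and %p (pid) specifiers below.
        exec cat > "/mnt/hgfs/shared/core.$1.$2"
        EOF
        chmod +x /usr/local/bin/core-to-hgfs.sh
        echo '|/usr/local/bin/core-to-hgfs.sh %e %p' > /proc/sys/kernel/core_pattern

    If the helper's copy also ends up zero length, the problem is in HGFS itself rather than in the core-dump path.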

    Read the article

  • How to Pre-Configure Shared Laptops' Microsoft Outlook 2010 Accounts to Connect to Exchange Server 2007 SP3?

    - by schultkl
    Our IT environment provides 10 shared Microsoft Windows 7 laptops for an office staff of several hundred people. After checking out and logging into a laptop with an Active Directory domain account, office staff frequently run Microsoft Outlook 2010. The first time they do so, however, Microsoft Outlook 2010 prompts the user to create and configure their local profile. This takes just several clicks, as Microsoft Outlook 2010 auto-detects the office staff member's Microsoft Exchange Server 2007 (SP3) account. The problem is that all office staff have to do this on each new laptop they use, and until they do, some functionality does not work (for example, Microsoft Word 2010 Save & Send fails with the error "There was a problem creating the message"). How might our IT department pre-configure the shared laptops so office staff can simply log in and use Microsoft Outlook 2010 without first having to configure a local profile?
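
    One approach worth testing (a sketch, not a confirmed fix for this environment): Outlook 2010 honors a ZeroConfigExchange registry value that makes it build the Exchange profile from Active Directory without prompting. Deployed per user via Group Policy preferences or a logon script, it would look like:

        Windows Registry Editor Version 5.00

        ; Hypothetical deployment file; the value itself is documented for Outlook 2010.
        [HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Outlook\AutoDiscover]
        "ZeroConfigExchange"=dword:00000001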

    Read the article

  • How can I connect to my Airport Extreme Shared Disk using Windows?

    - by matt ryan
    I have a shared disk attached to my Airport Extreme, which I can connect to remotely in OS X through the Finder via Command-K, entering the proper address: afp://test.dyndns.org:1111 (1111 being the port I've reserved for the disk in the AE port mapping). This is such a great feature, but I don't always have access to a Mac. My question is: how can I connect to this drive via Windows XP? Please note that the shared drive is a Drobo FS formatted to handle both Windows and Mac OS. I've tried mapping a network drive via My Computer - Tools - Map Network Drive and entering \\test.dyndns.org:1111\Drobo-Name. I've also tried Start - Run and entering the same, both with and without the port number.
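
    A sketch of why the mapping fails and what to try instead: Windows XP's UNC syntax has no port field (the \\host:port\share form is not valid for SMB), and XP speaks SMB, not AFP. If the Drobo/AirPort end exposes an SMB share, forward the default SMB port (TCP 445) through the AirPort instead of a custom one and map it without a port (names are examples):

        net use Z: \\test.dyndns.org\Drobo-Name /user:username *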

    Read the article

  • How is SSL usually set up on shared hosting (newbie question)?

    - by spirytus
    I am quite unclear on how SSL is usually set up on shared hosting. I have an account with justhost.com and they provided me with a public_html folder and (its sibling) ssl folder. When I create SSL certificates via cPanel, they appear in the ssl folder. Now, where should I put my HTML files to be accessible via https:// rather than http://? Normal files go into public_html (I figured this out ;), but what about the secure bunch? Also, how can I specify that the secured folder shouldn't be the ssl folder (if that's the one, in fact) but rather some other folder I specify? Is that possible at all with shared hosting? Thank you all for your help; I googled for hours and am still heavily confused, as you see :)

    Read the article

  • Easy way to update Apache on a server cluster with shared NFS conf?

    - by Simon
    We have a setup where a cluster of web servers, connected to a db/files/conf server shared via NFS, serves our sites behind an Elastic Load Balancer at Amazon EC2. The setup works correctly, but keeping it up to date is becoming hell, because the Apache/PHP configuration the webservers use is shared through NFS. If we try to run an apt-get upgrade on a server in the cluster, it aborts because the webserver is not able to write the configuration back to the NFS server. Every time we want to update the machines, or install a package like php-curl, we need to create a new AMI so the changes are reflected in newly launched instances. Is there a simpler way of doing things? Thanks in advance!
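
    A stopgap sketch, assuming the aborts come from dpkg trying to rewrite conffiles that live read-only on NFS: tell dpkg to keep the existing configuration files during upgrades on the web nodes.

        apt-get -o Dpkg::Options::="--force-confold" upgrade

    That only treats the symptom; separating packages (per node) from configuration (on NFS) is the longer-term fix, for example by keeping the distribution's config local and pulling in the shared tree via Apache Include directives.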

    Read the article

  • How can I open a shared sub calendar in Outlook 2010?

    - by Matt Love
    There is a team in my office that has a shared calendar (the team calendar is set up as a user in Active Directory/Exchange, so treat the team as a user). The team also has 3 sub-calendars for the different team members. Other people in the office need to be able to access this team's calendar. They can go to Open Calendar in Outlook and see the main calendar, but they cannot see the sub-calendars. The sub-calendars all have the Default user permission set to Reviewer. If you go to File → Account Settings → Change [logged-in Exchange account] → More Settings → Advanced and add the team's mailbox, the calendars do show up in Outlook, but they appear under My Calendars instead of Shared Calendars. We need to be able to go to Open Calendar and open both the main calendar and all the sub-calendars that way. How is this possible?
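
    If the server side is Exchange 2010 or later, a hedged sketch from the Exchange Management Shell: sub-folders only open for other users if every folder on the path is at least visible to them, so check the parent calendar's Default permission too (folder names are examples).

        Get-MailboxFolderPermission -Identity "team:\Calendar" -User Default
        Set-MailboxFolderPermission -Identity "team:\Calendar" -User Default -AccessRights Reviewer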

    Read the article

  • Prevent the "System" process from locking my files in a shared folder.

    - by Kamarey
    I have an application that creates files to be processed by SQL bulk insert. The files are created in a shared folder on another server and then taken from there by SQL. The problem is that sometimes SQL returns an error that a file is locked by another process and can't be accessed. The process that locks these files is the "System" process. It looks like it locks them because they are in a shared folder, but I'm not sure. Using software to unlock the files manually is not an option, as the whole bulk process is automatic. The question is: why does the "System" process lock these files, and is there a way to prevent it?
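
    For diagnosis, a sketch using the built-in openfiles tool on the file server (the server name and ID are examples): it shows which files are held open over the share and by whom, which tells you whether the lock really belongs to the SMB server side.

        openfiles /query /s fileserver /fo table
        openfiles /disconnect /s fileserver /id 1234

    The disconnect form is shown only for completeness; in an automated pipeline the usual fix is to write to a temporary name and rename the file into place once it is complete, so SQL never sees a half-written, still-open file.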

    Read the article

  • Is it possible to track down who or what changed a shared permission?

    - by user45574
    Today I received an email from one of my users asking why he couldn't access his shared folder on one of our servers. Example: \\servername\share\ = access denied. When I checked the share permissions on the folder, I was surprised to see that the user had been removed from the share permissions list. Now my question is: is it possible to track down who or what deleted the user's share permissions on the folder? I have studied the different event logs but couldn't find any indication of who had changed the share permissions. Kind regards, Martin
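
    Not retroactively, unless auditing was already on; but you can catch the next change. A sketch for Server 2008 or later (run on the file server):

        auditpol /set /subcategory:"File Share" /success:enable /failure:enable

    With that subcategory enabled, share changes land in the Security event log, so the next mystery edit will carry a user name.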

    Read the article

  • Windows 7 - Can't get access to a shared folder from one computer to another

    - by Carbonara
    I have 2 Windows 7 computers and I'm trying to share a folder (that I want password protection on) outside of the homegroup. Both computers are part of the same workgroup and I have the same user account/password combination on both computers, plus I have password-protected sharing turned on in the Network and Sharing Center along with file and printer sharing. On computer 1 I have right-clicked the folder and selected to share it. When I navigate via the network on computer 2 to computer 1, the shared folder shows up on computer 2, but double-clicking it to open it gives me an alert saying I don't have permission to access it, with no option to type in a user name and password (according to the help files I shouldn't even need to type the password if both computers have the same username/password anyway, but would need it if I'm logged in as a different user). It's just a blanket denial of access.
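
    One way to see whether this is a credential problem or a permission problem (a sketch; names are examples): map the share from computer 2 with explicit credentials and let it prompt for the password.

        net use \\COMPUTER1\SharedFolder /user:COMPUTER1\yourname *

    If that succeeds, the implicit logon was the issue; if it still fails, check the folder's Security tab (NTFS permissions) as well as the Sharing tab, since access requires both to allow you.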

    Read the article

  • How do I push a new project to a shared Mercurial multi-repository?

    - by j-g-faustus
    I have a local machine ("laptop") and a shared Mercurial repository on another machine ("server"). The shared repository is set up as a multi-repository as described in the Mercurial documentation, using Apache, the hgwebdir.cgi script and Mercurial 1.4. The setup works in the sense that I can browse the projects (repositories) in the web browser, I can clone and pull from the server, and I can push from the laptop when the project/repository already exists on the server. But I cannot create a new project on the laptop (hg init, do stuff, hg commit) and push it to the shared multi-repository (hg push http://server/hg/my-new-project-name); I get "abort: HTTP Error 404: Not Found", presumably because the project's repository does not exist on the server yet. How can I push a new project/directory structure to a Mercurial instance running elsewhere? I couldn't find anything in the documentation; how do you guys do it?
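
    The usual pattern, sketched under the assumption that you have shell access to the server and that hgwebdir's root is /var/hg (both are examples): create the empty repository on the server first, then push to it.

        ssh server "hg init /var/hg/my-new-project-name"
        hg push http://server/hg/my-new-project-name

    hgwebdir only serves repositories that already exist under its configured root; pushing over HTTP cannot create one.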

    Read the article

  • Having all Views in the Shared folder - works but is throwing "caught exceptions". Performance concerns?

    - by Scott
    Hi everyone, I have a simple but heavily used app done in VS2010/MVC2. I didn't like having separate folders for each view/controller, so I have all the views in the Shared folder. It works fine, but while debugging in VS I noticed that it's throwing IO "caught exceptions", since it seems to look in the [FolderName]/[ViewName] folder before going down to the Shared folder. Again, the app runs fine, but I'm concerned that all these caught exceptions will have a minor performance impact, since they do have a cost in the CLR. Is there any way I can configure the routing so that it will only look in the Shared folder? Thanks.
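
    For what it's worth, view lookup is the view engine's job rather than routing's, so the knob to turn is the engine's location formats. A sketch for Application_Start in Global.asax.cs, assuming the default WebForms views of MVC 2:

        ViewEngines.Engines.Clear();
        var engine = new WebFormViewEngine
        {
            // Probe only Shared, so the per-controller miss (and its
            // first-chance IO exception) never happens.
            ViewLocationFormats = new[] { "~/Views/Shared/{0}.aspx", "~/Views/Shared/{0}.ascx" },
            PartialViewLocationFormats = new[] { "~/Views/Shared/{0}.ascx" },
            MasterLocationFormats = new[] { "~/Views/Shared/{0}.master" },
        };
        ViewEngines.Engines.Add(engine);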

    Read the article

  • How do I make ESXi 5.0 shut down virtual machines when the physical power button is pushed?

    - by Pawel Sawicki
    I have a home NAS/DLNA server built out of an HP MicroServer with the HP-branded VMware ESXi 5.0.0 build 623860 (free license) installed. Being a home media center, I'd like it to be "manageable" by all my household members: it needs to be powered on and off (including all the VMs inside) by anybody with physical access to the server, simply by pressing the power button on the chassis. The "startup" part is easy to obtain: all I had to do was configure the startup/shutdown policy so that once the server powers up, all VMs start as well, which is exactly what I need. Well, it did work up until 5.0.0U1, but that's a different story: http://blogs.vmware.com/vsphere/2012/03/free-esxi-hypervisor-auto-start-breaks-with-50-update-1.html Unfortunately, pressing the power button doesn't gracefully shut down the guest machines; they are terminated instead. If I run the "shut down" command from the vSphere Client interface, the guests are shut down properly. I'd like to get the same end result when the physical power button is pressed. I've poked around a bit on the ESXi server. There's a /sbin/shutdown.sh script that seemed to do exactly what I need... but after trying it, it does exactly what the power button does. The /etc/inittab contains an entry for the "shutdown" level, but I suppose it's not hooked to the power button. I can't find any ACPI-related configuration, nor do I know what exactly is executed when the power button is pressed. Does anybody have a clue how I can make the VMs shut down automatically when the physical power switch is pressed to turn off the computer?
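
    A hedged sketch of the missing glue: ESXi's own vim-cmd can ask each guest to shut down via VMware Tools, so a script along these lines, hooked into whatever the power button triggers, would do the graceful half (it assumes VMware Tools runs in every guest).

        # Ask every powered-on VM to shut down its guest OS.
        for id in $(vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}'); do
          state=$(vim-cmd vmsvc/power.getstate "$id" | tail -1)
          [ "$state" = "Powered on" ] && vim-cmd vmsvc/power.shutdown "$id"
        done

    Which hook the button actually fires is the open question; on this build the ACPI event handling is not exposed for editing the way it is on a general-purpose Linux box.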

    Read the article

  • How do I load balance between two Linux machines?

    - by William Hilsum
    Inspired by the Stack Overflow network, I am now obsessed with HAProxy and trying to use it myself. At the moment, each HAProxy box has got two network cards (well, two configured; I can have a maximum of 4 and wasn't sure if they needed their own one for management between the boxes). On both machines, the back-end card (eth1) has a private IP that goes to a switch connected to the webservers, and the front-facing one (eth0) has a public internet IP that is routed straight through. In addition, I have created an additional virtual IP for eth0, called eth0:0, which has a third public IP address. I just about get how to use it for load balancing between the multiple web servers behind it, but I am failing to load balance between the two HAProxy boxes themselves: they appear to fight over the virtual IP, which does not appear to be a smart solution. Now, by using the virtual shared IP address this setup appears to work, and it does seem to give me maximum uptime, but is this the correct way to do it, or is there a smarter way? I have been looking at other Linux packages such as keepalived, but I have only been using Linux (server) for a week now and am at the limits of my understanding. Has anyone done this before, and can you advise anything for maximum uptime?
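
    keepalived is indeed the usual companion here: both HAProxy boxes run it, VRRP elects one MASTER, and the virtual IP moves over only when the master dies, so there is no fighting. A minimal sketch of /etc/keepalived/keepalived.conf on the primary (the peer uses state BACKUP and a lower priority; the VIP is an example):

        vrrp_instance VI_1 {
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 101
            advert_int 1
            virtual_ipaddress {
                203.0.113.50
            }
        }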

    Read the article

  • Recent DDE / file open issue with Office 2007 affecting only a few machines; is a Windows Update to blame?

    - by kafka
    All our workstations run Windows 7 Professional 64-bit. It started with one, then another, then another couple of machines having a problem opening Word files locally and on the network. This doesn't happen on my machine, though. Affected users get the error message 'There was a problem sending the command to the program'. I've Googled for solutions, but none of the answers worked: they suggested deleting certain registry keys, unregistering and re-registering the program for DDE, resetting the way the shell opens .docx files, etc., each to no avail. As it affects local files and network shares alike, I believe the problem lies with the clients and not the server, and I'm starting to suspect that a recent Windows Update could have caused this. I've tried comparing the updates on my working machine with an affected machine, but I can't immediately see any major differences. Has anyone else recently encountered this problem? What are the best steps to take to isolate the cause further?
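
    One more low-risk thing to try on an affected machine (a sketch; /r is Word's re-register switch, run from an elevated prompt with Word and Outlook closed, and the path assumes a default 32-bit Office 2007 install):

        "C:\Program Files (x86)\Microsoft Office\Office12\winword.exe" /r

    It rewrites Word's registry entries, including the shell's file-association and DDE ones, without touching documents or settings.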

    Read the article

  • How can I install .NET Framework 3.5 on XP machines without an internet connection?

    - by EricSchaefer
    I want to install .NET Framework 3.5 on a couple of machines that do not have internet access. If I install the "no internet access" package, it still wants to download something. How can I figure out what is missing? Are there other installation packages?

    Edit: I would present screenshots, but I cannot upload anything from here and the shots would be in German, so I present only the text, translated back to English. Installing the "full redistributable package": at the bottom of the license agreement page it displays this text: "Size of download file: 67 MB. Approximate download time: 2 h 44 min (56 kbit/s), 18 min (512 kbit/s)." It shows this text even though I installed Windows Installer 3.1. After agreeing, it displays the "Download and Installation Status" dialog with a progress bar labeled "Download:" and "Status: Connection to server attempted (try X of 5). Total Download Status: 56 MB/67 MB". I tried it in a VM with no network connection: it tries 5 times while the progress bar shows progress. Later the progress bar is labeled "Installation:". Even later it reports problems during setup and provides two buttons, "Send Report Later" and "Don't Send". Now here it comes: "Setup completed" and "Microsoft .NET Framework 3.5 has been deinstalled successfully." (Emphasis is mine.) "It is recommended to install current service packs and security updates. More information at Windows Update (link)."

    Edit 2: Installed Service Pack 3, but still no success.

    Read the article

  • How to back up virtual machines on a standalone ESXi host?

    - by Massimo
    Standalone ESXi (4.1) host without any vCenter Server. How do I back up virtual machines as quickly and as storage-friendly as possible? I know I can access the ESXi console and use the standard Unix cp command, but this has the downside of copying whole VMDK files, not only their actually used space; so, for a 30-GB VMDK of which only 1 GB is used, the backup would take the full 30 GB of space, and time accordingly. And yes, I know about thin-provisioned virtual disks, but they tend to behave very badly when physically copied and/or to blow up to their full provisioned size; also, they are not recommended for actual VM performance. It is OK for me to shut down the VMs before backing them up (i.e. I don't need "live" backups), but I need a way to copy them around efficiently; and yes, a way to automate shutdown/startup when taking a backup would also help. I only have ESXi; no Service Console, no vCenter Server... what's the best way to handle this task? Also, what about restores?
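
    A sketch of the built-in route, using only tools present in the ESXi shell (the paths and VM id are examples): power the VM off gracefully, clone its disk with vmkfstools so the copy is thin, then power it back on.

        vim-cmd vmsvc/power.shutdown 42          # graceful; needs VMware Tools in the guest
        vmkfstools -i /vmfs/volumes/datastore1/vm1/vm1.vmdk \
                   -d thin /vmfs/volumes/backup/vm1-$(date +%Y%m%d).vmdk
        vim-cmd vmsvc/power.on 42

    Restore is the same clone in reverse (plus copying the .vmx), and the whole sequence scripts easily via cron on the host or ssh from outside. The community ghettoVCB script automates exactly this pattern.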

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects:

    -z stub
    This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies.

    STUB_OBJECT Mapfile Directive
    In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible.

    ASSERT Mapfile Directive
    All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub. Although ASSERT was added to the link-editor in order to support stub objects, it is a general purpose feature that can be used independently of stub objects. For instance, you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case.

    The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case, and serving as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form.

    Stub Objects

    A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime.

    When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly.

    Stub objects can be used to solve a variety of build problems:

    Speed: Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built.
    Complexity/Correctness: In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures.

    Dependency Cycles: It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built.

    Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded.

    Stub objects are built from a mapfile, which must satisfy the following requirements:

    The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces.

    All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface.

    All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol.

    Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object.

    To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface.

    % cat idx5.c
    int _idx5[5] = { 0, 1, 2, 3, 4 };

    #pragma weak idx5 = _idx5

    int
    idx5_func(int index)
    {
            if ((index < 0) || (index > 4))
                    return (-1);
            return (_idx5[index]);
    }

    A mapfile is required to describe the interface provided by this shared object.

    % cat mapfile
    $mapfile_version 2

    STUB_OBJECT;

    SYMBOL_SCOPE {
            _idx5 {
                    ASSERT { TYPE=data; SIZE=4[5] };
            };
            idx5 {
                    ASSERT { BINDING=weak; ALIAS=_idx5 };
            };
            idx5_func;
        local:
            *;
    };

    The following main program is used to print all the index values available from the idx5 shared object.
    % cat main.c
    #include <stdio.h>

    extern int _idx5[5], idx5[5], idx5_func(int);

    int
    main(int argc, char **argv)
    {
            int i;

            for (i = 0; i < 5; i++)
                    (void) printf("[%d] %d %d %d\n",
                        i, _idx5[i], idx5[i], idx5_func(i));
            return (0);
    }

    The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile.

    % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub
    % ln -s libidx5.so.1 stublib/libidx5.so
    % elfdump -d stublib/libidx5.so | grep STUB
         [11]  FLAGS_1           0x4000000           [ STUB ]

    The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute.

    % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc
    % ./a.out
    ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory
    Killed
    % LD_PRELOAD=stublib/libidx5.so.1 ./a.out
    ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime
    Killed

    We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file.

    % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1

    Once the real object has been built in the lib subdirectory, the program can be run.

    % ./a.out
    [0] 0 0 0
    [1] 1 1 1
    [2] 2 2 2
    [3] 3 3 3
    [4] 4 4 4

    Mapfile Changes

    The version 2 mapfile syntax was extended in a number of places to accommodate stub objects.

    Conditional Input

    The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need:

    _ET_DYN     shared object
    _ET_EXEC    executable object
    _ET_REL     relocatable object

    STUB_OBJECT Directive

    The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object.

    STUB_OBJECT;

    A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
    When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents:

    All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface.

    All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BINDING attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS.

    All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced.

    These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface.

    SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute

    The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows:

    ASSERT {
            ALIAS = symbol_name;
            BINDING = symbol_binding;
            TYPE = symbol_type;
            SH_ATTR = section_attributes;

            SIZE = size_value;
            SIZE = size_value[count];
    };

    The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created.

    In normal use, the link-editor evaluates ASSERT attributes when present, but does not require them, or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT directive for the details.

    When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object.

    ASSERT accepts the following:

    ALIAS
    Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol.

    BINDING
    Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK).

    TYPE
    Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively.
    TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it.

    SH_ATTR
    Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are:

    BITS      Section is not of type SHT_NOBITS
    NOBITS    Section is of type SHT_NOBITS

    SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it.

    SIZE
    Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below.

    SIZE
    The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows:

    $if _ELF32
            foo { ASSERT { TYPE=data; SIZE=4 } };
            bar { ASSERT { TYPE=data; SIZE=20 } };
    $elif _ELF64
            foo { ASSERT { TYPE=data; SIZE=8 } };
            bar { ASSERT { TYPE=data; SIZE=40 } };
    $else
    $error UNKNOWN ELFCLASS
    $endif

    I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE and ASSERT SIZE attributes support this syntax:

    The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input.

    The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as:

    foo { ASSERT { TYPE=data; SIZE=addrsize } };
    bar { ASSERT { TYPE=data; SIZE=addrsize[5] } };

    Exported Global Data Is Still A Bad Idea

    As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item.
    However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation: it might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday...

    Conclusions

    It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!

    Read the article

  • Begin the Clone Wars Have!

    - by Antony Reynolds
    Creating a New Virtual Machine from an Existing Virtual Disk

    In previous posts I described how I set up an OEL6 machine under VirtualBox that can run an 11gR2 database and FMW 11.1.1.5. That is great if you want the DB and FMW running in the same virtual image, and it has served me well for some proofs of concept and also for some testing of different JVMs. However, I also wanted to run some testing of FMW with the database running on a separate physical machine. So in this post I will show how to take a VirtualBox image and create a new image based on the disks from that original image.

    What are my Options?

    There is more than one way to skin a cat, or in this case to create two separate VMs that can run on different hardware. Some of the options include:

    1. Create new virtual disk images for each new VM.
    2. Clone the existing disk images and point the new VM at the cloned images.
    3. Point the new VM at the existing snapshots.

    #1 is too much like hard work: install OEL twice, install a database again, install FMW again, run RCU again! Life is too short!

    #2 is probably the safest way of doing things. VirtualBox allows you to clone a disk image for use in a separate machine. However, this of course duplicates the disk and means that it is now occupying three times the space: once for the original disk and twice more for the two clones I would need.

    #3 is the most space-efficient way of doing things. It does mean, however, that I can only run the new "cloned" images if I have access to the original image, because that is where the base snapshots reside. This is not a problem for me as long as I remember to keep all three images together. So this is the approach we will follow.

    Snapshot, What Snapshot?

    As we are going to create new virtual machines based on existing snapshots, we need to figure out which snapshot to use. We do this by opening the "Media Manager" from within VirtualBox and moving the mouse over the snapshot images until we find the snapshots we want; the snapshot name is identified in the "Attached to:" comment. In my case I wanted the FMW-installed snapshot, because that had a database configured for FMW alongside the FMW software. I made a note of the filename of that snapshot (actually I just noted the first 5 characters, as that was all that was needed to uniquely identify the snapshot file). When we create the new machines we will point them at the snapshot filename we have just checked.

    Network or NotWork?

    Because we want the two new machines to communicate with each other when hosted on different physical machines, we can't use the default NAT networking mode without a lot of hassle. At the same time, we need them to have fixed IP addresses relative to each other so that they can see each other while also being able to see the outside world.

    To achieve all these requirements I created two network adapters for each machine. Adapter 1 was a standard NAT mapping. This allows each machine to get a dynamic IP address (10.0.2.15 by default) that can be used to access the external world through the VBox-provided NAT gateway. This is the same as the existing configuration. The second adapter I created as a bridged adapter. This gives the virtual machine direct access to the host network card, and by using fixed IP addresses each machine can see the other. It is important to choose fixed IP addresses that are not routable across your internal network so you don't get any clashes with other machines on your network.
    Of course you could always get proper fixed IP addresses from your network people, but I have several people using my images, and as long as I don't have two instances of the same VM on the same network segment, this is easier and avoids reconfiguring the network every time someone wants a copy of my VM. If it is available I would suggest using the 10.0.3.* network, as 10.0.2.* is the default NAT network. You can check availability by pinging 10.0.3.1 and 10.0.3.2 from your host machine; if that times out, you are probably safe to use that range.

    Creating the New VMs

    Now that I had collected the data that I needed, I went ahead and created the new VMs. When asked for a "Boot Hard Disk" I used the "Choose a virtual hard disk file…" link to find the snapshot I had previously selected and set that to be the existing hard disk. I chose the previously existing SOA 11.1.1.5 install for both the new DB and FMW machines, because that snapshot had the database with the RCU completed that I wanted for my DB machine, and it had the SOA software installed which I wanted for my FMW machine.

    After the initial creation of the virtual machine, go into the network settings section and enable a second adapter, which will be bridged. Make a note of the MAC addresses (the last four digits should be sufficient) of the two adapters so that you can later set the bridged adapter to use a fixed IP and the NAT adapter to use DHCP. We are now ready to start the VMs and reconfigure Linux.

    Reconfiguring Linux

    Because I now have two new machines, I need to change their network configuration. In particular I need to change the hostname, update the hosts file and change the network settings.

    Changing the Hostname

    I renamed both hosts by running the hostname command as root:

    hostname vboxfmw.oracle.com

    I also edited /etc/sysconfig/network and set the correct hostname in there:

    HOSTNAME=vboxfmw.oracle.com

    Changing the Network Settings

    I needed to change the network configuration to give the bridged network a fixed IP address. I first explicitly set the MAC addresses of the two adapters, because the order of the virtual adapters in the VirtualBox Manager is not necessarily the same as the order of the adapters in the guest OS. So I went into the System->Preferences->Network Connections screen and explicitly set the "Device MAC address" for the two adapters.

    Having correctly mapped the Linux adapters to the VirtualBox adapters, I then set the bridged adapter to use fixed IP addressing rather than DHCP. There is no need for additional routing or default gateways, because we expect the two machines to be on the same LAN segment.

    Updating the Hosts File

    Having renamed the machines and reconfigured the network, I then updated the /etc/hosts file to refer to the new machine name, added a new line to provide an additional IP address for my server (the new fixed IP address), and added a new line for the fixed IP address of the other virtual machine:

    10.0.3.101      vboxdb.oracle.com       vboxdb  # Added by NetworkManager
    10.0.2.15       vboxdb.oracle.com       vboxdb  # Added by NetworkManager
    10.0.3.102      vboxfmw.oracle.com      vboxfmw # Added by NetworkManager
    127.0.0.1       localhost.localdomain   localhost
    ::1     vboxdb.oracle.com       vboxdb  localhost6.localdomain6 localhost6

    To make sure everything took effect, I restarted the server.
    Reconfiguring the Database on the DB Machine

    Because we changed the hostname, the listener and the EM console no longer start, so I need to modify listener.ora to use the new hostname, and I also need to rebuild the EM configuration because it also relies on the hostname.

    I edited $ORACLE_HOME/network/admin/listener.ora and changed the listening address to the new hostname:

    (ADDRESS = (PROTOCOL = TCP)(HOST = vboxdb.oracle.com)(PORT = 1521))

    After changing the listener.ora I was able to start the listener using:

    lsnrctl start

    I also had to reconfigure the EM database control. I first deconfigured it using the command:

    emca -deconfig dbcontrol db -repos drop

    This drops the repository and removes any existing registered dbcontrols. I then re-configured it using the following command:

    emca -config dbcontrol db -repos create

    This creates the EM repository and then configures and starts dbcontrol. Now my database machine is ready, so I can close it down and take a snapshot.

    Disabling the Database on the FMW Machine

    I had set up the database to start automatically by creating a service called "dbora". On the FMW machine I do not need the database running, so I can prevent it auto-starting by running the following command:

    chkconfig --del dbora

    Note that because I am using a snapshot, it is not a waste of disk space to have the DB installed but not used. As long as I don't run it, it won't cost me anything. I can now close the FMW machine down and take a snapshot.

    Creating a New Domain

    The FMW machine is now ready for me to create a new domain. When creating the domain I can point it at the second machine, which is running the database. I can potentially run these machines on two separate physical machines as long as I have the original virtual machine available to both of the physical machines.

    Gotchas in Snapshotting

    VirtualBox does not support the concept of linked machines in a network like some virtualization technologies, so when creating a snapshot it is a good idea to shut both VMs down and then take a snapshot on both of them. This is because we want to keep the database in sync with the middleware. One way to make sure that this happens would be to place all the domain configuration files on the database server via an NFS share; this would mean that all we would need to snapshot would be the database machine, because that would hold all the state and configuration.

    The Sky's the Limit

    We have covered a simple case of having just two machines. I have a more complicated configuration in which two machines run a RAC database off the same base OS image, and two more machines run a SOA cluster based on the same OS image. Just remember which machine holds state and what the consequences of taking a snapshot are.

    Read the article

  • Dynamic State Machine in Ruby? Do State Machines Have to be Classes?

    - by viatropos
    The question is: are state machines always defined statically (on classes)? Or is there a way for me to have each instance of the class get its own set of states?

    I'm checking out Stonepath for implementing a task engine. I don't really see the distinction between "states" and "tasks" in there, so I'm thinking I could just map a Task directly to a state. This would allow me to define task lists (or workflows) dynamically, without having to do things like:

    aasm_event :evaluate do
      transitions :to => :in_evaluation, :from => :pending
    end

    aasm_event :accept do
      transitions :to => :accepted, :from => :pending
    end

    aasm_event :reject do
      transitions :to => :rejected, :from => :pending
    end

    Instead, a WorkItem (the main workflow/task-manager model) would just have many tasks. The tasks would then work like states, so I could do something like this:

    aasm_initial_state :initial

    tasks.each do |task|
      aasm_state task.name.to_sym
    end

    previous_task = nil
    tasks.each do |task|
      aasm_event task.name.to_sym do
        transitions :to => "#{task.name}_phase".to_sym,
                    :from => previous_task ? "#{previous_task.name}_phase".to_sym : :initial
      end
      previous_task = task
    end

    However, I can't do that with the aasm gem, because those methods (aasm_state and aasm_event) are class methods, so every instance of the class with that state machine has the same states. I want a "WorkItem" or "TaskList" to dynamically create a sequence of states and transitions based on the tasks it has. This would allow me to define workflows dynamically and just have states map to tasks. Are state machines ever used like this?
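
    Nothing forces a state machine to live on the class; a gem-free sketch of a per-instance machine (names are illustrative, not Stonepath or aasm API):

        class WorkItem
          attr_reader :state

          def initialize(task_names)          # e.g. %w[evaluate accept]
            @tasks = task_names
            @state = :initial
          end

          # Fire the named task if its predecessor's phase is the current state.
          def fire(task_name)
            idx = @tasks.index(task_name) or raise ArgumentError, "unknown task #{task_name}"
            expected = idx.zero? ? :initial : :"#{@tasks[idx - 1]}_phase"
            raise "cannot fire #{task_name} from #{@state}" unless @state == expected
            @state = :"#{task_name}_phase"
          end
        end

        wi = WorkItem.new(%w[evaluate accept])
        wi.fire("evaluate")   # state is now :evaluate_phase
        wi.fire("accept")     # state is now :accept_phase

    Because the task list is ordinary instance data, each WorkItem carries its own workflow; the trade-off is that you give up the gem's callbacks and guards unless you add them yourself.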

    Read the article

  • Analyzing Windows crash dumps generated on XP/32 machines with Win7/64?

    - by Martin
    We have a problem analyzing, on our development machines, the Windows crash dumps that were created on customer Windows XP/32 boxes. Many of our development machines are now Win7/64 boxes, and it appears that the crash dumps generated under Windows XP cannot fully resolve their binary dependencies, leading to warnings when displaying the call stacks in Visual Studio (2005). For example, msvcr80.dll cannot be resolved on a Win7 machine when the dump was generated on Windows XP: on XP, the WinSxS path appears to be C:\WINDOWS\WinSxS\x86_Microsoft.VC80.CRT_1fc8b3b9a1e18e3b_8.0.50727.4053_x-ww_e6967989\msvcr80.dll, while on Win7 the WinSxS path to the same DLL version seems to be x86_microsoft.vc80.crt_1fc8b3b9a1e18e3b_8.0.50727.4053_none_d08d7da0442a985d. (I got this info from a forum thread on CodeGuru that links to an MSDN article.) Visual Studio (2005) can therefore no longer correctly resolve the binaries for the crash dump. How can I get Visual Studio to resolve all the correct binaries for my dump file? Note: I have already correctly set up the symbol server. The public symbols for most system DLLs (kernel32.dll, etc.) and the symbols of our own DLLs load correctly. It is just that the symbols of DLLs residing in the WinSxS folder are not loaded, because Vista/7 apparently uses a different path scheme for these DLLs than XP does, so Visual Studio cannot find the DLL (not the PDB) on the local dev machine and therefore cannot load the corresponding symbols for the dump file.
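
    A workaround sketch using WinDbg rather than Visual Studio (the copy path is an example): copy the needed WinSxS directories from an XP machine onto the dev box, then point the debugger's image path at them so the dump's modules can be matched.

        .exepath+ C:\xp-binaries\WinSxS\x86_Microsoft.VC80.CRT_1fc8b3b9a1e18e3b_8.0.50727.4053_x-ww_e6967989
        .reload /f msvcr80.dll

    .exepath and .reload are stock WinDbg commands; once the binary resolves, the matching PDB comes from the symbol server as usual.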

    Read the article
