Search Results

Search found 13451 results on 539 pages for 'physical environment'.


  • What can I do to make my eService website customers feel it is a luxurious service? [closed]

    - by Farshid
    I'm developing an e-service website whose monetization model is paid membership. Beyond quality service and content, because customers pay a high fee, I want them to feel they are receiving a personal, unparalleled kind of service, and I'm willing to spend money on things I can give them after registration, such as a beautiful physical membership card, both to encourage word of mouth and to make them proud of the service. I've tried my best to make the site experience feel classy, and now I'm looking for real-world items to send customers after they register (such as a membership card and a small printed tutorial). What are your suggestions? Have you seen a website send physical items like this to build loyalty? Please share your experiences and suggestions.

    Read the article

  • Can ping device from one computer and not the other

    - by Sean Duggan
    I've recently been assigned to work on a diagnostic program done in C++ which communicates with a piece of electronic equipment. Our normal scenario involves communicating via an RS232 interface, but I've been asked to make our program work over ethernet, the existing networking source code having been done in Visual Basic. After much thrashing about trying to get the code to work, and continuing to get 10049 Winsock errors when I tried to connect, I tried pinging the switch. From the computer the VB program is running on, I can see the switch via ping, nslookup, tracert, and pathping (I was going down the list of programs), and I can do this via URI or IP address. From my laptop, sending the same commands fails every time. They're both using the same network cable and the same USB-to-Ethernet device (I've been swapping them between tests), but one can see the switch and the other cannot. I'm working on the programming end, but the ping results make me think that there might be a network issue stymieing me. (wry grin) I'm not much of a network guy, so I'm appealing for expert assistance. Both computers are running Windows XP, if that helps. The connection is to an "IP-RS8" device which then connects to our VCU-C units. Each unit is accessible via URI or IP address on the desktop computer we usually have connected to the units (it's running the older VB program that I was asked to lift the networking code from). The connection is made via a USB-to-Ethernet adapter so as to leave the regular Ethernet port available for connecting to the company network. Hmm... come to think of it, I've probably been confusing the issue by talking about pinging "the switch" rather than indicating that it's the devices. My apologies. Communication is generally done with a DLL that uses Winsock functions to make queries for data from the VCU and then to receive the response. I'm failing when connecting. I haven't found anything on the firewall which should block these commands, but I'll keep poking. I don't know if it's potentially relevant, but on the desktop the adapter maps to Local Area Connection 3, while on the laptop it consistently maps to Local Area Connection 2. Currently reading up on DHCP. IPConfig /all results:

    Desktop:

      Host Name . . . . . . . . . . . . : AMERDAEXXXXXX
      Primary Dns Suffix  . . . . . . . : amer.example.com
      Node Type . . . . . . . . . . . . : Hybrid
      IP Routing Enabled. . . . . . . . : No
      WINS Proxy Enabled. . . . . . . . : No
      DNS Suffix Search List. . . . . . : COMPANY.com amer.example.com atle.example.com cone.example.com apac.example.com scan.example.com bYX.example.com

    Ethernet adapter Local Area Connection X:

      Connection-specific DNS Suffix .  : amer.example.com
      Description . . . . . . . . . . . : Broadcom NetXtreme XYxx Gigabit Controller
      Physical Address. . . . . . . . . : YY-XX-YB-XX-XX-XX
      Dhcp Enabled. . . . . . . . . . . : Yes
      Autoconfiguration Enabled . . . . : Yes
      IP Address. . . . . . . . . . . . : XYY.XXX.XY.XXX
      Subnet Mask . . . . . . . . . . . : XXX.XXX.XXY.Y
      Default Gateway . . . . . . . . . : XYY.XXX.XY.X
      DHCP Server . . . . . . . . . . . : XY.XXX.XXY.XX
      DNS Servers . . . . . . . . . . . : XY.XXX.XXY.XX XY.XXY.XXY.XX
      Primary WINS Server . . . . . . . : XY.XXX.XXY.X
      Secondary WINS Server . . . . . . : XY.XXY.XXY.X
      Lease Obtained. . . . . . . . . . : Thursday, July XX, XYXX XY:XX:XX AM
      Lease Expires . . . . . . . . . . : Sunday, July XX, XYXX XY:XX:XX AM

    Ethernet adapter Local Area Connection X:

      Connection-specific DNS Suffix .  :
      Description . . . . . . . . . . . : ASIX axYYYYX USBX.Y to Fast Ethernet Adapter
      Physical Address. . . . . . . . . : YY-XY-BY-YX-XY-AY
      Dhcp Enabled. . . . . . . . . . . : Yes
      Autoconfiguration Enabled . . . . : Yes
      IP Address. . . . . . . . . . . . : XY.Y.Y.X
      Subnet Mask . . . . . . . . . . . : XXX.XXX.XXY.Y
      Default Gateway . . . . . . . . . : XY.Y.Y.X
      DHCP Server . . . . . . . . . . . : XY.Y.Y.XY
      DNS Servers . . . . . . . . . . . : XY.Y.Y.X
      Lease Obtained. . . . . . . . . . : Thursday, July XX, XYXX XY:XX:XY AM
      Lease Expires . . . . . . . . . . : Tuesday, August YX, XYXX XX:XY:XY AM

    Laptop:

    Windows IP Configuration

      Host Name . . . . . . . . . . . . : AMERLAFYYXXYX
      Primary Dns Suffix  . . . . . . . : amer.example.com
      Node Type . . . . . . . . . . . . : Hybrid
      IP Routing Enabled. . . . . . . . : No
      WINS Proxy Enabled. . . . . . . . : No
      DNS Suffix Search List. . . . . . : COMPANY.com amer.example.com atle.example.com cone.example.com apac.example.com scan.example.com bYX.example.com

    Ethernet adapter Local Area Connection:

      Connection-specific DNS Suffix .  : amer.example.com
      Description . . . . . . . . . . . : Intel(R) 82567LM Gigabit Network Connection
      Physical Address. . . . . . . . . : YY-XY-BY-DY-XB-YX
      Dhcp Enabled. . . . . . . . . . . : Yes
      Autoconfiguration Enabled . . . . : Yes
      IP Address. . . . . . . . . . . . : XYY.XXX.XY.XY
      Subnet Mask . . . . . . . . . . . : XXX.XXX.XXY.Y
      Default Gateway . . . . . . . . . : XYY.XXX.XY.X
      DHCP Server . . . . . . . . . . . : XY.XXX.XXY.XX
      DNS Servers . . . . . . . . . . . : XY.XXX.XXY.XX XY.XXY.XXY.XX
      Primary WINS Server . . . . . . . : XY.XXX.XXY.X
      Secondary WINS Server . . . . . . : XY.XXY.XXY.X
      Lease Obtained. . . . . . . . . . : Thursday, July XX, XYXX XX:XX:XX AM
      Lease Expires . . . . . . . . . . : Sunday, July XX, XYXX XX:XX:XX AM

    Ethernet adapter {XYXAAYXX-YEDY-XXYX-YYEX-BYXYXXYEEYEX}:

      Connection-specific DNS Suffix .  :
      Description . . . . . . . . . . . : Nortel IPSECSHM Adapter - Packet Scheduler Miniport
      Physical Address. . . . . . . . . : XX-XX-XX-XX-XX-YY
      Dhcp Enabled. . . . . . . . . . . : No
      IP Address. . . . . . . . . . . . : Y.Y.Y.Y
      Subnet Mask . . . . . . . . . . . : Y.Y.Y.Y
      Default Gateway . . . . . . . . . :

    Ethernet adapter Leaf Networks Adapter:

      Connection-specific DNS Suffix .  :
      Description . . . . . . . . . . . : Leaf Networks Adapter
      Physical Address. . . . . . . . . : YY-FF-FA-BC-YF-AY
      Dhcp Enabled. . . . . . . . . . . : No
      IP Address. . . . . . . . . . . . : X.XYY.XY.XX
      Subnet Mask . . . . . . . . . . . : XXX.Y.Y.Y
      Default Gateway . . . . . . . . . :

    Ethernet adapter Local Area Connection 3:

      Media State . . . . . . . . . . . : Media disconnected
      Description . . . . . . . . . . . : Bluetooth LAN Access Server Driver
      Physical Address. . . . . . . . . : YY-FX-AX-YA-BY-CA

    Ethernet adapter Wireless Network Connection 2:

      Media State . . . . . . . . . . . : Media disconnected
      Description . . . . . . . . . . . : Intel(R) WiFi Link 5300 AGN
      Physical Address. . . . . . . . . : YY-XX-YA-CX-FC-YE

    Ethernet adapter Local Area Connection 2:

      Connection-specific DNS Suffix .  :
      Description . . . . . . . . . . . : ASIX ax88772 USB2.0 to Fast Ethernet Adapter
      Physical Address. . . . . . . . . : YY-XY-BY-YX-XY-AY
      Dhcp Enabled. . . . . . . . . . . : No
      IP Address. . . . . . . . . . . . : XYX.XYY.X.X
      Subnet Mask . . . . . . . . . . . : XXX.XXX.XXX.Y
      Default Gateway . . . . . . . . . :
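    One thing that stands out in the two dumps: the desktop's ASIX adapter has DHCP enabled and picks up its address (and gateway) from the device side, while the laptop's ASIX adapter has DHCP disabled and a statically assigned address with no default gateway. As a minimal first comparison, run the following from a command prompt on each machine; the adapter name is taken from the dumps above, and the device address is a placeholder since the real one is masked here:

      rem How is the USB adapter actually configured (static vs. DHCP)?
      netsh interface ip show config "Local Area Connection 2"
      rem Which interface would carry traffic to the device's subnet?
      route print
      rem Test reachability by raw IP to rule out name resolution
      ping -n 2 <device IP>

    If the laptop's static address isn't in the same subnet as the IP-RS8, or the route table sends that subnet out the wrong interface, pings will fail exactly as described; matching the desktop's adapter settings (or simply enabling DHCP on the laptop's USB adapter) would be the first thing to try.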

    Read the article

  • Oracle Virtualization at Oracle OpenWorld 2012

    - by Chris Kawalek
    Mini-Series Entry 1 of 3: Hands-On Virtualization

    This is the first entry in a 3-part mini-series aimed at highlighting server and desktop virtualization at this year's Oracle OpenWorld. Oracle OpenWorld 2012 is fast approaching! If you are as excited as we are about the fascinating new Oracle virtualization content featured at Oracle OpenWorld 2012, you won't want to miss this blog mini-series. We will be highlighting sessions that cover advances and innovations in our products, our product strategy and roadmap, and hands-on labs with step-by-step instructions from our field and product experts. In the blog mini-series you will learn about the Oracle Virtualization general keynote session, hands-on labs, and key Oracle server and desktop virtualization sessions. In this entry, we will cover the Oracle Virtualization keynote session and the hands-on labs you won't want to miss.

    General Session: Oracle Virtualization Strategy and Roadmap Session ID: GEN8725 Oracle offers the industry's most complete and integrated virtualization portfolio, enabling organizations to realize benefits beyond simple consolidation as they transform their data centers into flexible cloud-based infrastructures. Join Oracle executives and experts to learn about Oracle's desktop-to-data-center virtualization solutions, such as the OS with built-in management integration at all layers, which can help you virtualize and manage the complete computing environment, from physical servers to virtual servers and applications. This "don't-miss" session offers details of the latest product updates and strategy; product roadmaps; integration with enterprise applications; and real-world examples of how Oracle server, desktop, and storage virtualization is benefiting customers. Here are our top picks for hands-on labs at Oracle OpenWorld 2012:

    Oracle Virtual Desktop Infrastructure Performance and Tablet Mobility Session ID: HOL9907 This hands-on lab demonstrates the performance (using an industry-standard load tester) and roaming capabilities of Oracle Virtual Desktop Infrastructure with Oracle's Sun Ray Clients, the Apple iPad and other clients.

    Deploying an IaaS Environment with Oracle VM: Hands-On Lab Session ID: HOL9558 This hands-on lab takes you through the planning and deployment of an infrastructure as a service (IaaS) environment with Oracle VM as the foundation. It covers a range of topics, from planning storage capacity, LUN creation, network bandwidth planning, and best practices to designing and streamlining the environment for ease of management. Learn from deeply experienced field engineers and product experts.

    Virtualize and Deploy Oracle Applications in Minutes with Oracle VM: Hands-On Lab Session ID: HOL9559 This hands-on lab is for application architects or system administrators who will need to deploy and manage Oracle Applications. You'll learn how Oracle VM Templates can turn you into a power user who can virtualize and deploy complex Oracle Applications in minutes. Longtime field-experienced engineers and product experts will show you, step by step, how to download and import templates and deploy the applications.

    x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance Session ID: HOL9870 The purpose of this hands-on lab is to demonstrate the functionality and usage of Oracle's enterprise cloud infrastructure for x86 with Oracle VM 3.x.
    It covers creation of VMs, migration of VMs, quick and easy deployment of Oracle applications with Oracle VM Templates, and usage of the Storage Connect plug-in for the Sun ZFS Storage Appliance. You can find these and other great sessions in the Oracle OpenWorld 2012 Content Catalogue. Start checking now to better plan and organize your week at the conference; then you'll be ready to sign up for all of your sessions in mid-July when the scheduling tool goes live. While the hands-on labs allow you to directly interact with Oracle virtualization products, the conference sessions allow you to hear from a wide variety of industry experts on how they're using the technology in real-world deployments, solving specific challenges, and more. In tomorrow's entry, we'll start talking about the many conference sessions related to Oracle server and desktop virtualization you can attend during the show. See you then! - The Oracle Virtualization marketing team

    Read the article

  • Getting a Database into Source Control

    - by Grant Fritchey
    For any number of reasons, from simple auditing, to change tracking, to automated deployment, to integration with application development processes, you're going to want to place your database into source control. Using Red Gate SQL Source Control, this process is extremely simple. SQL Source Control works within your SQL Server Management Studio (SSMS) interface, which means you can work with your databases in any way that you're used to working with them. If you prefer scripts to using the GUI, not a problem. If you prefer using the GUI to having to learn T-SQL, again, that's fine. After installing SQL Source Control, this is what you'll see when you open SSMS: SQL Source Control is now a direct piece of the SSMS environment. The key point initially is that I currently don't have a database selected. You can even see that in the SQL Source Control window where it shows, in red, "No database selected – select a database in Object Explorer." If I expand my Databases list in the Object Explorer, you'll be able to immediately see which databases have been integrated with source control and which have not; there are visible differences between the databases, as you can see here: To add a database to source control, I first have to select it. For this example, I'm going to add the AdventureWorks2012 database to an instance of the SVN source control software (I'm using uberSVN). When I click on the AdventureWorks2012 database, the SQL Source Control screen changes: I'm going to need to click on the "Link database to source control" text, which will open up a window for connecting this database to the source control system of my choice. You can pick from the default source control systems on the left, or define one of your own. I also have to provide the connection string for the location within the source control system where I'll be storing my database code. I set these up in advance. You'll need two: one for the main set of scripts, and one for special scripts called Migrations that deal with different kinds of changes between versions of the code. Migrations help you solve problems like having to create or modify data in columns as part of a structural change. I'll talk more about them another day. Finally, I have to determine if this is an isolated environment that I'm going to be the only one using (a dedicated database), or if I'm sharing the database in a shared environment with other developers (a shared database). The main difference is that, with a dedicated database, I will need to regularly get any changes that other developers have made from source control and integrate them into my database, while, with a shared database, all changes for all developers are made at the same time, which means you could commit other people's work without proper testing. It all depends on the type of environment you work within. But, when it's all set, it will look like this: SQL Source Control will compare the results between the empty folders in source control and the database, AdventureWorks2012. You'll get a report showing exactly the list of differences, and you can choose which ones will get checked into source control. Each of the database objects is scripted individually; you'll be able to modify them later in the same way. Here's the list of differences for my new database: You can select/deselect all the objects or each object individually. You also get a report showing the differences between what's in the database and what's in source control.
    If there were already a database in source control, you'd only see changes to database objects rather than every single object. You can see that the database objects can be sorted by name, by type, or by other choices. I'm going to add a comment such as "Initial creation of database in source control" and then click on the Commit button, which will put all the objects in my database into the source control system. That's all it takes to get the objects into source control initially. Now is when things can get fun, with breaking changes to code, automated deployments, unit testing and all the rest.
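    As a rough sketch only (the repository URL and folder names here are hypothetical, and the exact layout varies with the SQL Source Control version), you can confirm from any Subversion client that the initial commit produced a folder of scripts per object type:

      svn list http://ubersvn.example.local/svn/AdventureWorks2012/trunk
      Functions/
      Stored Procedures/
      Tables/
      Views/

    Because every object lands in source control as an individual script file, ordinary tools like svn log and svn diff work on your schema history just as they do for application code.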

    Read the article

  • Windows Azure Use Case: Agility

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice, this is not always the case: having the option to use a distributed environment can make both the deployment and even the development process much faster.

    Implementation: When an organization designs code, it essentially becomes a Software-as-a-Service (SaaS) provider to its own organization. To do that, the IT operations team becomes the Infrastructure-as-a-Service (IaaS) provider to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process. A simplified view of an ALM process is: Requirements, Analysis, Design and Development, Implementation, Testing, Deployment to Production, and Maintenance. In an on-premise environment, this often equates to the following process map:

    Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
    Analysis: Feasibility studies, including physical plant, security, manpower and other resources. Request is placed on the work task list if approved.
    Design and Development: Code written according to the organization's chosen methodology, either on premise or by multiple development teams on and off premise.
    Implementation: Code checked into the main branch. Code forked as needed.
    Testing: Code deployed to on-premise testing servers. If no server capacity is available, more resources are procured through standard budgeting and ordering processes. Manual and automated functional, load, security, etc. testing performed.
    Deployment to Production: Server team involved to select platform and environments with available capacity. If no server capacity is available, the standard budgeting and procurement process is followed and systems are built, configured and put under standard organizational IT control. Systems configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans configured and put into place.
    Maintenance: Code changes evaluated and altered according to need.

    In a distributed computing environment like Windows Azure, the process maps a bit differently:

    Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
    Analysis: Feasibility studies, including budget, security, manpower and other resources. Request is placed on the work task list if approved.
    Design and Development: Code written according to the organization's chosen methodology, either on premise or by multiple development teams on and off premise.
    Implementation: Code checked into the main branch. Code forked as needed.
    Testing: Code deployed to Azure. Manual and automated functional, load, security, etc. testing performed.
    Deployment to Production: Code deployed to Azure. Point-in-time backup and recovery plans configured and put into place (HA/DR and automated backups are already present in the Azure fabric).
    Maintenance: Code changes evaluated and altered according to need.

    This means that several steps can be removed or expedited.
    It also means that the business function requesting the application can be held directly responsible for the funding of that request, speeding the process further, since the IT budgeting process may not be involved in the Azure scenario. An additional benefit is the Azure Marketplace: in effect, an app store for enterprises to select pre-defined code and data applications to mesh or bolt in to their current code, possibly saving development time.

    Resources:
    Whitepaper download - What is ALM? http://go.microsoft.com/?linkid=9743693
    Whitepaper download - ALM and Business Strategy: http://go.microsoft.com/?linkid=9743690
    LiveMeeting recording on ALM and Windows Azure (registration required, but free): http://www.microsoft.com/uk/msdn/visualstudio/contact-us.aspx?sbj=Developing with Windows Azure (ALM perspective) - 10:00-11:00 - 19th Jan 2011

    Read the article

  • News you can use, PeopleTools gems at OpenWorld 2012

    - by PeopleTools Strategy
    Here are some of the sessions that may not have caught your eye while you were scheduling the events you'd like to attend at this year's OpenWorld! CON9183 PeopleSoft Technology Roadmap Jeff Robbins Mon, Oct 1 4:45 PM Moscone West, Room 3002/4 Jeff's session is always very well attended. Come to hear, and see, what's going to be delivered in the new release, and get some thoughts on where PeopleTools and the industry are heading. CON9186 Delivering a Ground-Breaking User Interface with PeopleTools Matt Haavisto Steve Elcock Wed, Oct 3 3:30 PM Moscone West, Room 3009 This session will be wonderfully engaging for participants. As part of the demonstration, audience members will be able to interact live and in real time with the demo using their smartphones and tablets, as if they were users of the system. CON9188 A Great User Experience via PeopleSoft Applications Portal Matt Haavisto Jim Marion Pramod Agrawal Mon, Oct 1 12:15 PM Moscone West, Room 3009 This session covers not only the PeopleSoft Portal, but new features like WorkCenters and Dashboards, and how they all work together to form the PeopleSoft ecosystem. CON9192 Implementing a PeopleSoft Maintenance Strategy with My Update Manager Mike Thompson Mike Krajicek Tue, Oct 2 1:15 PM Moscone West, Room 3009 The LCM development team will show Oracle's My Update Manager for PeopleSoft and how it drastically simplifies deciding which updates are required for your specific environment. CON9193 Understanding PeopleSoft Maintenance Tools & How They Fit Together Mike Krajicek Wed, Oct 3 10:15 AM Moscone West, Room 3002/4 Learn about the portfolio of maintenance tools, including some of the latest enhancements such as Oracle's My Update Manager for PeopleSoft, Application Data Sets, and the PeopleSoft Test Framework, and see what they can do for you. CON9200 PeopleTools Product Team Panel Discussion Jeff Robbins Willie Suh Virad Gupta Ravi Shankar Mike Krajicek Wed, Oct 3 5:00 PM Moscone West, Room 3009 Attend this session to engage in an open discussion with key members of Oracle's PeopleTools senior management team. You will be able to ask questions, hear their thoughts, and gain their insight into the PeopleTools product direction. CON9205 Securing Your PeopleSoft Integration Infrastructure Greg Kelly Keith Collins Tue, Oct 2 10:15 AM Moscone West, Room 3011 This session, with the senior integration developer, will outline Oracle's best practices for securing your integration infrastructure so that you know your web services and REST services are as secure as the rest of your PeopleSoft environment. CON9210 Performance Tuning for the PeopleSoft Administrator Tim Bower David Kurtz Mon, Oct 1 10:45 AM Moscone West, Room 3009 Meet long-time technical consultants with deep knowledge of system tuning: Tim Bower of the Center of Excellence and David Kurtz, author of "PeopleSoft for the Oracle DBA". System administrators new to tuning a PeopleSoft environment, as well as seasoned experts, will come away with new techniques that will help them improve the performance of their PeopleSoft system. CON9055 Advanced Management of Oracle PeopleSoft with Oracle Enterprise Manager Greg Kelly Milten Garia Greg Bouras Thurs Oct 4 12:45 PM Moscone West, Room 3009 This promises to be a really interesting session, as Milten Garia from CSU discusses lessons learned during the implementation of Oracle's Enterprise Manager with the PeopleSoft plug-in across a multi-campus environment. There were some surprising things about Solaris 10 and the Bourne shell,
    and some creative work by the Unix administrators meant the well-tried scripts and system replication processes were largely unaffected. CON8932 New Functional PeopleTools Capabilities for the Line-of-Business User Jeff Robbins Tues, Oct 2 5:00 PM Moscone West, Room 3007 Using PeopleTools 8.5x capabilities like related content, embedded help, pivot grids, hover-over, and more, Jeff will discuss how these can deliver business value and innovation that will positively impact your business without the high costs associated with upgrading your PeopleSoft applications. Check out a more detailed list here. We look forward to meeting you all there!

    Read the article

  • Tips &amp; Tricks: How to crawl a SSL enabled Oracle E-Business Suite

    - by Rajesh Ghosh
    Oracle E-Business Suite can be integrated with Oracle Secure Enterprise Search for a superior end-user experience and enhanced data retrieval capabilities. Before end users can perform search operations, data has to be crawled and indexed into the Oracle SES server. However, if the Oracle E-Business Suite instance is on SSL, some additional configuration is needed in the Oracle SES server as well as in Oracle Search Modeler before a search object can be deployed and crawled. The process involves the following steps:

    Step 1: Export the SSL certificate of Oracle E-Business Suite. Access the Oracle E-Business Suite instance from a web browser. You should be able to locate a security or certificate icon somewhere in the browser toolbar or status bar, depending on which browser you are using. Click on it and you should be able to view the certificate as well as export it to a local file. While exporting, make sure that you use the "DER encoded" format.

    Step 2: Import the SSL certificate into the Oracle Secure Enterprise Search server's Java key store. Oracle SES (10.1.8.4) by default ships a JDK under $ORACLE_HOME. The Oracle SES mid-tier uses this JDK to start the OC4J container services. In this step, the Oracle E-Business Suite SSL certificate exported in step 1 has to be imported into the Oracle SES server's Java key store. Perform the following: Copy the certificate file onto the server where the Oracle SES server is running, under $ORACLE_HOME/jdk/jre/lib/security/cacerts ("ORACLE_HOME" points to the Oracle SES Oracle home). Set the JAVA_HOME environment variable to $ORACLE_HOME/jdk. Append $JAVA_HOME/bin to the PATH environment variable. Issue the command "keytool -import -keystore keystore.jks -trustcacerts -alias myOHS -file ebs.crt", substituting "ebs.crt" with the name of the certificate file you copied in step 2.1. The default key-store password is "changeit"; enter it when prompted. If successful, this process will end with a message saying "certificate successfully imported".

    Step 3: Import the SSL certificate into the Search Modeler Java key store. Unlike Oracle SES, Search Modeler does not ship with a bundled JDK. If you are using a standalone OC4J, you actually use an external JDK to start the OC4J container services; if you are using an IAS instance, the JDK comes bundled with the IAS installation. Perform the following: Copy the certificate file onto the server where the Search Modeler application is running, under $JDK_HOME/jre/lib/security/cacerts ("JDK_HOME" points to the JDK directory, depending on whether you are using an external JDK or a bundled one). Set the JAVA_HOME environment variable to the JDK directory. Append $JAVA_HOME/bin to the PATH environment variable. Issue the command "keytool -import -keystore keystore.jks -trustcacerts -alias myOHS -file ebs.crt", substituting "ebs.crt" with the name of the certificate file you copied in step 3.1. The default key-store password is "changeit"; enter it when prompted. If successful, this process will end with a message saying "certificate successfully imported".

    Once you have completed the above steps successfully, you can deploy the search objects using Search Modeler and then start crawling them.
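    To double-check either import, keytool can list the alias directly; this is a sketch using the same -keystore and -alias values passed to the import command above:

      # Verify the certificate landed under the expected alias
      keytool -list -v -keystore keystore.jks -alias myOHS

    You will be prompted for the same key-store password, and keytool prints the certificate details if the alias exists. If the certificate is missing or went into the wrong key store, the crawl will typically fail later with an SSL handshake error when Oracle SES contacts the Oracle E-Business Suite URL.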

    Read the article

  • Oracle OpenWorld Update: Oracle GoldenGate for High Availability

    - by Doug Reid
    One of our primary themes this year for the Oracle OpenWorld sessions featuring Oracle GoldenGate is High Availability. This is a pretty wide theme, but the focus will be on ways of maximizing uptime for critical systems during planned and unplanned events. We have a number of very informative sessions dedicated to exploring this theme in detail, from deep product implementation strategies up to lessons learned by our customers when using Oracle GoldenGate to meet strict SLAs. We kick this track off with our Customer Panel on Zero Downtime Operations on Monday, which I overviewed in my last posting. This is followed by Comcast, who will be hosting a session at 1:45 PM in Moscone West 3014. Their session will discuss using Oracle GoldenGate to reduce downtime during a database upgrade. Here's an overview: CON8571 - Oracle Database Upgrade with Oracle GoldenGate: Best Practices from Comcast. Does your business demand high availability? In this session, Comcast, among the largest telecom firms in the world, shares tips on how to achieve zero downtime while upgrading to Oracle Database 11g Release 2, using a combination of Oracle technologies: Oracle Real Application Clusters (Oracle RAC), Oracle Database's Grid Infrastructure and Oracle Recovery Manager (Oracle RMAN) features, Oracle GoldenGate, and Oracle Active Data Guard. This successful upgrade took place on a mission-critical system that handles more than 60 million business requests and service calls a day. You'll also hear how Comcast leverages Oracle Advanced Customer Support Services, including an Oracle Solution Support Center, to maximize performance and availability of its Oracle technologies. On Tuesday, Joydip Kundu (Director of Software Development) will be presenting "Oracle GoldenGate and Oracle Data Guard: Working Together Seamlessly" at 10:15 AM in Moscone South 3005. This session focuses on how both modes of Oracle GoldenGate extract (Classic and Integrated Capture) can be used with Oracle Data Guard for disaster recovery purposes or to offload extract processing. That afternoon at 1:15 PM, Comcast takes the stage again to discuss firsthand lessons learned implementing Oracle GoldenGate in a heterogeneous, highly available environment. Here's a rundown of their session: CON8750 - High-Volume OLTP with Oracle GoldenGate: Best Practices from Comcast. Does your business demand high availability in a mission-critical environment? In this session, Comcast, one of the largest telecom firms, shares best practices for leveraging Oracle GoldenGate to replicate high-volume online transaction processing data from Tandem NSK SQL/MX to Teradata. Hear critical success factors from Comcast for overall platform and component architectures as well as configuration and tuning techniques. Learn how it met the challenges of replication in a complex heterogeneous environment.
    You'll also hear how Comcast leverages Oracle Advanced Customer Support Services, including an Oracle Solution Support Center, to provide mission-critical support for maximized performance and availability of its Oracle environment. The final session on the high availability track will be hosted by Patricia Mcelroy (Distinguished Product Manager) and Stephan Haisly (Principal Member of Technical Staff). Their session (CON8401 - Tuning and Troubleshooting Oracle GoldenGate on Oracle Database) covers techniques for performance tuning and troubleshooting of Oracle GoldenGate on Oracle Database. Using various types of workloads (OLTP, batch, Oracle's PeopleSoft Enterprise), the presentation steps through the process of monitoring and troubleshooting the configuration to maximize performance and replication throughput within and between Oracle clouds. Join us at our sessions or stop by our demo pods in Moscone South and meet the product management and development teams.

    Read the article

  • WebCenter Customer Spotlight: Hyundai Motor Company

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary

    Hyundai Motor Company is one of the world's fastest-growing car manufacturers, ranked as the fifth-largest in 2011. The company also operates the world's largest integrated automobile manufacturing facility in Ulsan, Republic of Korea, which can produce 1.6 million units per year. They undertook a project to improve business efficiency and reinforce data security by centralizing the company's sales, financial, and car manufacturing documents into a single repository. Hyundai Motor Company chose Oracle Exalogic, Oracle Exadata, Oracle WebLogic Server, and Oracle WebCenter Content 11g, as they provided better performance, stability, storage, and scalability than their competitors. Hyundai Motor Company cut the overall time spent each day on document-related work by around 85%, saved more than US$1 million in paper and printing costs, laid the foundation for a smart work environment, and supported their future growth in the competitive car industry.

    Company Overview

    Hyundai Motor Company is one of the world's fastest-growing car manufacturers, ranked as the fifth-largest in 2011. The company also operates the world's largest integrated automobile manufacturing facility in Ulsan, Republic of Korea, which can produce 1.6 million units per year. The company strives to enhance its brand image and market recognition by continuously improving the quality and design of its cars.

    Business Challenges

    To maximize the company's growth potential, Hyundai Motor Company undertook a project to improve business efficiency and reinforce data security by centralizing the company's sales, financial, and car manufacturing documents into a single repository. Specifically, they wanted to:

    Introduce a smart work environment to improve staff productivity and efficiency, and take advantage of rapid company growth due to new, enhanced car designs
    Replace a legacy document system managed by individual staff to improve collaboration, the visibility of corporate documents, and sharing of work-related files between employees
    Improve the security and storage of documents containing corporate intellectual property, and prevent intellectual property loss when staff leave the company
    Eliminate delays when downloading files from the central server to a PC
    Build a large, single document repository to more efficiently manage and share data between 30,000 staff at the company's headquarters
    Establish a scalable system that can be extended to Hyundai offices around the world

    Solution Deployed

    After conducting a large-scale benchmark test, Hyundai Motor Company chose Oracle Exalogic, Oracle Exadata, Oracle WebLogic Server, and Oracle WebCenter Content 11g, as they provided better performance, stability, storage, and scalability than their competitors.
    Business Results

    Lowered the overall time spent each day on all document-related work by approximately 85%—from 4.5 hours to around 42 minutes on an average day
    Saved more than US$1 million per year in printer, paper, and toner costs, and laid the foundation for a completely paperless environment
    Reduced staff's time spent requesting and receiving documents about car sales or designs from supervisors by 50%, by storing and managing all documents across the corporation in a single repository
    Cut the time required to draft new-car manufacturing, sales, and design documents by 20%, by allowing employees to reference high-quality data, such as marketing strategy and product planning documents already in the system
    Enhanced staff productivity at company headquarters by 9% by reducing the document-related tasks of 30,000 administrative and research and development staff
    Ensured the system could scale to hold 3 petabytes of car sales, manufacturing, and design data by 2013 and be deployed at branches worldwide

    "We chose Oracle Exalogic, Oracle Exadata, and Oracle WebCenter Content to support our new document-centralization system over their competitors as Oracle offers stable storage for petabytes of data and high processing speeds. We have cut the overall time spent each day on document-related work by around 85%, saved more than US$1 million in paper and printing costs, laid the foundation for a smart work environment, and supported our future growth in the competitive car industry." - Kang Tae-jin, Manager, General Affairs Team, Hyundai Motor Company

    Additional Information

    Hyundai Motor Company Customer Snapshot
    Oracle WebCenter Content

    Read the article

  • Review: Windows 8 - Initial Experience

    - by Tim Murphy
    I originally started this post when I had the Windows 8 preview set up on a VirtualBox image. I have since put the RTM bits on a Dell E6530 that is my new work laptop. It isn't a tablet, so I am not getting the touch experience, but as a developer this makes the most sense for the moment. This is the first Windows OS that I have had to spend much time exploring just to get started. The first thing I ran into was when I clicked on the desktop icon: I was lost. Where is the Start menu? Where are my programs? How do I get back to the Metro environment? I finally tried hitting the Windows button and it popped back out to the Metro screen. Once I got past that, I found that the look of the Metro interface is clean and well organized. It should be familiar to anyone who is already using a Zune or Windows Phone 7. In the Desktop, aside from the lack of the Start button to bring up programs, the desktop is just like the Windows 7 environment we are all used to. I do have to say, though, that I don't like popping out to the Metro screen to find a program. I think installers for desktop-mode programs, like the ones developers usually work in, will need to offer options for creating a desktop icon and pinning to the desktop task bar. One of the things I do really enjoy is having live tiles in the Metro environment. It is a nice way of feeding my need for constant information. The one drawback, though, is that the task bar at the bottom of my screen used to be where I got this information without leaving what I was working on. It allowed me to see current temperatures and when there were messages waiting. I have since found that these still work as expected in the Desktop, and toast messages keep you up on what is going on in the Metro apps. Thankfully, familiar functionality like Alt-Tab and Windows-Tab still works regardless of whether apps are in the Metro or the Desktop environment. Add to this the ability to find any application on the Metro screen by simply typing, and things get very comfortable. I also started exploring some of the apps. If you want to see a ton of stats on your team at a glance, check out the Sports app. What games are coming up? Who are the leaders in a number of stats? The Weather and Finance apps have good features as well, and I am sure they will improve as users supply feedback. I have had to install Visual Studio 2010 side by side with VS2012 because the Windows Phone 7 SDK would only install on VS2010. This isn't a Windows 8 issue per se, but something you need to be aware of if you are a developer moving to the new ecosystem. The overall experience is a joy despite a few hiccups. For anyone moving to Windows 8 on a non-touch laptop or desktop, I do suggest this list of keyboard shortcuts. Enjoy.

    Read the article

  • pfSense: How to route traffic out the WAN port?

    - by Ian Boyd
    Expert version

    I want to create a route in pfSense that will send traffic out the physical WAN port, not the PPPoE WAN port. I want to talk to the web server on my DSL modem, but it doesn't see packets wrapped in a PPPoE header.

    Long version

    My pfSense router is responsible for setting up the PPPoE connection over DSL to my ISP. When a machine on the LAN wants to send packets to the internet, the default route sends packets out over the PPPoE connection. Those packets, wrapped in a PPPoE header, are sent over the ethernet cable to my DSL modem. From there they are sent to the ISP, and the internet at large. I want a way to send a packet out the WAN port itself, not the PPPoE WAN port. My modem is sitting out there, with an HTTP interface where I can monitor connection speed, signal-to-noise ratio, bandwidth, and connection time. Whenever I try to set a route for the destination 192.168.2.1 (the IP that the modem listens on for HTTP requests) to go out the WAN port, the packets instead end up going out the PPPoE port; the difference being that they're wrapped in a PPPoE protocol packet, so the modem isn't being sent the packet, it's being delivered to the ISP. Given that pfSense has no built-in way to direct traffic out the physical WAN port: how can I direct traffic out the physical WAN port on pfSense?
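    For what it's worth, the usual workaround is to give the physical WAN NIC a second address inside the modem's subnet, so that traffic to 192.168.2.1 leaves as plain Ethernet rather than inside the PPPoE session. A minimal sketch from the pfSense shell, assuming the physical interface is vr0 and 192.168.2.2 is unused (the interface name will differ on your hardware):

      # Add an alias address on the physical NIC, in the modem's subnet
      ifconfig vr0 inet 192.168.2.2 netmask 255.255.255.0 alias
      # Confirm the modem answers when pinged from that address
      ping -S 192.168.2.2 -c 3 192.168.2.1

    To make the change persistent and reachable from LAN hosts, the same idea can be configured in the pfSense web GUI as a virtual IP on the WAN interface plus an outbound NAT rule, which is the supported route.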

    Read the article

  • JAVASCRIPT ENABLED

    - by kirchoffs415
    Hi, I hope somebody can help. I keep getting the following message when I log on: "Your Javascript is disabled. Limited functionality is available." It will stay for maybe a day, sometimes two. I have uninstalled JavaScript and reinstalled, but still the same. I am using Chrome. Any help would be gratefully received. Many thanks, Dominic

    P.S. My system spec is as follows:

    OS Name: Microsoft® Windows Vista™ Home Premium
    Version: 6.0.6002 Service Pack 2 Build 6002
    Other OS Description: Not Available
    OS Manufacturer: Microsoft Corporation
    System Name: DOM-PC
    System Manufacturer: Dell Inc.
    System Model: Inspiron 1545
    System Type: X86-based PC
    Processor: Pentium(R) Dual-Core CPU T4200 @ 2.00GHz, 2000 Mhz, 2 Core(s), 2 Logical Processor(s)
    BIOS Version/Date: Dell Inc. A05, 25/02/2009
    SMBIOS Version: 2.4
    Windows Directory: C:\Windows
    System Directory: C:\Windows\system32
    Boot Device: \Device\HarddiskVolume3
    Locale: United Kingdom
    Hardware Abstraction Layer: Version = "6.0.6002.18005"
    User Name: DOM-PC\DOM
    Time Zone: GMT Standard Time
    Installed Physical Memory (RAM): 3.00 GB
    Total Physical Memory: 2.96 GB
    Available Physical Memory: 1.38 GB
    Total Virtual Memory: 5.89 GB
    Available Virtual Memory: 4.25 GB
    Page File Space: 3.00 GB
    Page File: C:\pagefile.sys

    Read the article

  • Dual Monitor support rdp 7 to win 7 on esxi

    - by rphilli5
    I am trying to RDP from a Windows 7 Professional dual-monitor physical machine to a Windows 7 Professional VM hosted on ESXi 4.0. I can get the spanning option to work across both monitors, but I have tried 3 different methods of connecting and have not been able to use true multiple monitors. At different times, I tried checking the "Use all my monitors" option, running mstsc /multimon from the command line, and adding the line use multimon:i:1 to the .rdp file. None of these worked. Any ideas? The physical machine can connect to other Windows 7 physical machines with true multi-monitor access. I also have the same issue when going from a 32-bit RC1 machine to Windows 7 Professional x64, but not when going in the reverse direction. Here's the .rdp:

    screen mode id:i:2
    use multimon:i:1
    desktopwidth:i:1440
    desktopheight:i:900
    session bpp:i:16
    winposstr:s:0,1,341,118,1139,568
    compression:i:1
    keyboardhook:i:2
    audiocapturemode:i:0
    videoplaybackmode:i:1
    connection type:i:1
    displayconnectionbar:i:1
    disable wallpaper:i:1
    allow font smoothing:i:0
    allow desktop composition:i:0
    disable full window drag:i:1
    disable menu anims:i:1
    disable themes:i:1
    disable cursor setting:i:0
    bitmapcachepersistenable:i:1
    full address:s:192.168.1.5
    audiomode:i:0
    redirectprinters:i:1
    redirectcomports:i:0
    redirectsmartcards:i:1
    redirectclipboard:i:1
    redirectposdevices:i:0
    redirectdirectx:i:1
    autoreconnection enabled:i:1
    authentication level:i:2
    prompt for credentials:i:0
    negotiate security layer:i:1
    remoteapplicationmode:i:0
    alternate shell:s:
    shell working directory:s:
    gatewayhostname:s:
    gatewayusagemethod:i:4
    gatewaycredentialssource:i:4
    gatewayprofileusagemethod:i:0
    promptcredentialonce:i:1
    use redirection server name:i:0
    drivestoredirect:s:

    Read the article

  • GlusterFS on VMWare ESXi 5

    - by Dharmavir
    I want to build a network file system on top of my VMware ESXi-based virtual nodes, which are running Ubuntu 12.04 LTS. I am evaluating options and found that GlusterFS (http://www.gluster.org/) could be a good choice. Purpose: I have about 2 dozen VM nodes with different configurations, spread across 2 physical nodes, each with the following configuration: 16-core Intel Xeon, 1 TB HDD, 48 GB RAM. As I said, each physical server has about 1 TB of disk (and I can add more if I want), so for now I have 2 TB of disk space available, distributed among the roughly 2 dozen VM nodes I have created on them. Some of them, being application servers and management servers, have plenty of free disk space, which I want to utilize for some heavy storage that I could not design on a single VM node. This way, if my storage is distributed between dozens of VM nodes and 2 or more physical nodes, I get some sort of backup as well. I do not mind if data gets stored redundantly, but to my knowledge it might happen that individual VM nodes will not be able to store all of the data, because the complete data size (for example, 100 GB) would exceed a VM disk size of 70 GB, and the VM also has system and program files on it. I need some suggestions: will GlusterFS be the solution I am looking for, or should I go with something like Hadoop? I am not too sure. But yes, I would like to utilize the free space on each VM node, and if doing so stores the data redundantly, I am okay with that because it gives me data security.
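    If GlusterFS does fit, the basic flow is to dedicate a directory on each VM as a "brick" and aggregate the bricks into one volume. A minimal two-node replicated sketch, with hostnames and paths invented for illustration:

      # On node1: join the peers into a trusted pool
      gluster peer probe node2
      # Build a 2-way replicated volume from one brick per node
      gluster volume create vmstore replica 2 node1:/export/brick1 node2:/export/brick1
      gluster volume start vmstore
      # Any node (or another client) then mounts the aggregated space
      mount -t glusterfs node1:/vmstore /mnt/vmstore

    With replica 2 you keep a full copy of every file on two nodes, which gives the redundancy described above at the cost of half the raw space; a plain distributed volume (no replica option) pools all the free space but loses a node's share of the data if that node dies.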

    Read the article

  • How to configure VirtualBox server for performance at home

    - by BluJai
    I currently have two physical Ubuntu Server 10.10 servers at home: one serves as our firewall/router/DHCP/VPN server and the other performs double-duty as a file server and a VirtualBox host for an Ubuntu Desktop 10.10 machine which I use from remote connections (via NoMachine) for many thin-client purposes which are irrelevant to my question. What I'd like to accomplish is to consolidate the two physical machines into one which is a dedicated VirtualBox host (most likely running Ubuntu Server 10.10). Note that I'd like to stick with VirtualBox (if possible) because I'm most comfortable with it and use it on a daily basis at both home and work. Specifically, I plan to have one VM set up as file server, another as the firewall/router/DHCP/VPN (or possibly split those a bit) and a third, which is the only current VM (already VirtualBox), which is the thin-client host. My question comes down to performance and/or recommendations about the file server VM. The file server hosts about 6 terabytes of data across 4 drives. What I'd like to do is use raw disk access from the VM directly to the existing disks. However, I'm curious what performance advantage/disadvantage that would have as compared to using shared folders from the VM host and basically just have the whole drive served as a shared folder to the VM which would then serve it to the other machines on the network. I don't know if virtual disks would even work in this scenario and I certainly wouldn't want a drive to be filled with just a single file which is 1.5 TB (disk image). To add understanding of context, but not to get additional advice, I want to virtualize these machines because I intend to regularly use the snapshot capabilities of VirtualBox for the system disks (which will be virtual drives) of the VMs and I have some physical space/power needs to address (as I mentioned, this is at home).
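    Raw disk access does work in this scenario: VirtualBox wires it up through a small descriptor VMDK that points at the physical device, not an image file holding the data. A rough sketch, with the device node, paths, VM name and controller name all being assumptions:

      # Create a pass-through descriptor for the whole physical disk
      # (the .vmdk stays tiny; the data never moves)
      VBoxManage internalcommands createrawvmdk -filename /vmstore/datadisk.vmdk -rawdisk /dev/sdb
      # Attach it to the file-server VM; writethrough keeps snapshots off this disk
      VBoxManage storageattach "fileserver" --storagectl "SATA" --port 1 --device 0 \
        --type hdd --mtype writethrough --medium /vmstore/datadisk.vmdk

    The guest then runs its own filesystem directly against the device, bypassing the shared-folder (vboxsf) layer entirely, which is generally the faster of the two options for sustained file serving; the trade-off is that this disk sits outside the snapshot scheme, which matches the plan of snapshotting only the virtual system disks.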

    Read the article

  • ADO 2.8, VMWare and SQL Server - sudden bulk dropped connections

    - by CodeByMoonlight
    We have an overworked server currently running a single SQL Server 2000 instance on physical hardware, and about 40 different apps interact with it on a daily basis. Last year, the RAID controller failed and we had no spare, so IT Support hurriedly migrated it overnight to a copy running on a VMWare Server. While it was on that server everything ran much quicker due to it being a big improvement in spec. However, the biggest app using it had occasional serious errors which never occurred on physical hardware. Specifically, several times a week it would disconnect batches of users - anywhere from just ten to hundreds at once, and all at the same time. It didn't affect any particular users or PCs or offices - all were affected equally. The only common thing was the app, which is a VB6 app using ADO 2.8 to connect. The other apps connecting to that virtualised instance of SQL Server seemingly had no problems, although they were (and are) responsible for only a tiny fraction of the work involving this server. The upshot is that after about two weeks of loving the speed and hating the random mass disconnections (which we were never able to find a cause for), we sadly took the decision to return to physical hardware and the disconnections vanished. Now we've reached the point where the old server just can't handle all that's being asked of it, and we're intending to migrate everything to 2 or more other servers. The snag is that there's a good chance they'll have to be virtual ones again. Given what happened last time, I'm trying to find out what possible reasons there could be for these mass disconnections. We were running VMWare ESX, but the network is Novell-based. Also, the server had a linked server setup to connect to an Informix server using a known-to-be-buggy ODBC driver, and this is used throughout the day. Any ideas on the cause(s)?

    Read the article

  • Ubuntu 13.04 to 13.10: Filesystem check or mount failed [migrated]

    - by SamHuckaby
    I attempted to upgrade from Ubuntu 13.04 to 13.10 today, and mid-upgrade the system started flaking out and eventually locked up entirely. I was forced to restart the computer, and am now unable to get the computer to boot up at all. When I boot currently, it takes me to the GRUB menu, and I can choose to boot normally or boot an older version. I have tried several things, which I list below, but no matter what, when I try to finish booting into Ubuntu, I receive the following error:

    Filesystem check or mount failed.
    A maintenance shell will now be started.
    CONTROL-D will terminate this shell and continue booting after re-trying filesystems.
    Any further errors will be ignored
    root@ubuntu-computername:~#

    I have run fsck -f and everything appears correct: no errors are reported and it passes all 5 checks. If I run fdisk -l, I get the following information:

    Disk /dev/sda: 320.1 GB, 320072933376 bytes
    255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 4096 bytes / 4096 bytes
    Disk identifier: 0x00010824

    Device Boot Start End Blocks Id System
    /dev/sda1 * 2048 608456703 304227328 83 Linux
    /dev/sda2 608458750 625141759 8341505 5 Extended
    Partition 2 does not start on physical sector boundary.
    /dev/sda5 608458752 625141759 8341504 82 Linux swap / Solaris

    Disk /dev/sdb: 320.1 GB, 320072933376 bytes
    255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x0fb4b7e8

    Device Boot Start End Blocks Id System
    /dev/sdb1 8192 625139711 312565760 7 HPFS/NTFS/exFAT

    I am considering just installing a new OS on the other disk, which currently has nothing on it, and then attempting to scrape my data off the old disk (thankfully I didn't encrypt the files). Really my question is this: can I salvage this Ubuntu install, or should I give up and just reinstall?
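    Before reinstalling, it may be worth letting the interrupted upgrade finish from that maintenance shell, since fsck reports the filesystem itself is clean. A sketch of the usual sequence (this assumes the root filesystem can be remounted and the network is up):

      # The maintenance shell mounts / read-only; make it writable first
      mount -o remount,rw /
      # Finish configuring the packages the 13.10 upgrade left half-installed
      dpkg --configure -a
      apt-get -f install
      # Then complete the upgrade and reboot
      apt-get dist-upgrade
      reboot

    It is also worth comparing the UUIDs in /etc/fstab against the output of blkid from the same shell; a stale swap or root entry left over from the upgrade can produce exactly this "check or mount failed" message even when fsck is happy.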

    Read the article

  • No LPT port in Windows 7 virtual machines

    - by KeyboardMonkey
    Windows 7 has Microsoft Virtual PC integrated, but the VM settings don't provide a parallel (LPT) port mapping to the physical machine. Where did it go? Has anyone else noticed this, and found a solution? Update: After much digging, I found the one and only reference to this issue, on the VPC Blog: "Parallel port devices are not supported, as they are relatively rare today." More details: it's an XP VM I've been using since the VPC 2007 days, which did have this functionality. This is to configure barcode printers via the LPT port. Since the (new) MS VM can't map to my physical LPT port, I'm having a hard time configuring printers. My physical ports are enabled in the BIOS. It has worked for the past 3 years, before switching to Win 7. Any help is appreciated. This screen shot of the VM settings shows COM ports, but LPT is no more. In contrast, here is a screen shot of VPC 2007 (before it got integrated into Win 7); notice how it has LPT support.
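    One workaround that still functions inside an XP guest is the DOS-era trick of redirecting LPT1 to a shared printer: share the barcode printer from the host (or any print server), then map the port inside the VM. The share name below is hypothetical:

      rem Inside the XP VM: map LPT1 to the printer shared by the host
      net use LPT1: \\hostpc\barcode /persistent:yes

    This only covers printing through the port; utilities that need raw bidirectional access to the physical LPT hardware still require a physical machine or a virtualization product that passes the port through (VMware Workstation, for example, still offered parallel-port pass-through at the time).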

    Read the article

  • Re-sizing disk partition linux/vm

    - by Tiffany Walker
    I have VMware Player running a Linux guest, and I want to know how to expand the disk. In VMware Player I gave the VM more disk space, but I am not sure how to mount/expand/connect the new disk space to the system. My old disk space was 14 GB:

    [root@localhost ~]# df -h /
    Filesystem Size Used Avail Use% Mounted on
    /dev/mapper/VolGroup-lv_root 14G 4.5G 8.2G 36% /

    Then I expanded it, and now I see sda2; is that the new space?

    [root@localhost ~]# fdisk -l

    Disk /dev/sda: 128.8 GB, 128849018880 bytes
    255 heads, 63 sectors/track, 15665 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000cd44d

    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 64 512000 83 Linux
    Partition 1 does not end on cylinder boundary.
    /dev/sda2 64 2611 20458496 8e Linux LVM

    Disk /dev/mapper/VolGroup-lv_root: 14.5 GB, 14537457664 bytes
    255 heads, 63 sectors/track, 1767 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/mapper/VolGroup-lv_swap: 6408 MB, 6408896512 bytes
    255 heads, 63 sectors/track, 779 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Do I need to mount the new space first?

    [root@localhost ~]# resize2fs -p /dev/mapper/VolGroup-lv_root 108849018880
    resize2fs 1.41.12 (17-May-2010)
    The containing partition (or device) is only 3549184 (4k) blocks.
    You requested a new size of 1474836480 blocks.

    [root@localhost ~]# resize2fs -p /dev/mapper/VolGroup-lv_root 128849018880
    resize2fs 1.41.12 (17-May-2010)
    resize2fs: Invalid new size: 128849018880

    [root@localhost ~]# lvextend -L+90GB /dev/mapper/VolGroup-lv_root
    Extending logical volume lv_root to 103.54 GiB
    Insufficient free space: 23040 extents needed, but only 0 available

    [root@localhost ~]# lvextend -L90GB /dev/mapper/VolGroup-lv_root
    Extending logical volume lv_root to 90.00 GiB
    Insufficient free space: 19574 extents needed, but only 0 available

    EDIT: After trying pvcreate/vgextend, nothing has worked so far. I'm guessing the new disk space added from VMware Player is not showing up?

    [root@localhost ~]# pvscan
    PV /dev/sda2 VG VolGroup lvm2 [19.51 GiB / 0 free]
    Total: 1 [19.51 GiB] / in use: 1 [19.51 GiB] / in no VG: 0 [0 ]
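    The missing step is that the enlarged virtual disk has to reach the volume group before lvextend has anything to hand out: the disk grew, but the partition table and physical volumes did not. A sketch of the usual order, assuming the new space becomes /dev/sda3 (partition numbers and sizes will differ):

      # Make the kernel notice the bigger disk without a reboot
      echo 1 > /sys/block/sda/device/rescan
      # Create a new partition in the free space, type 8e (Linux LVM),
      # using fdisk interactively: n (new), t (type 8e), w (write)
      fdisk /dev/sda
      partprobe /dev/sda
      # Turn the partition into a PV and hand it to the existing VG
      pvcreate /dev/sda3
      vgextend VolGroup /dev/sda3
      # Now the extend and the online filesystem grow will succeed
      lvextend -L +90G /dev/mapper/VolGroup-lv_root
      resize2fs /dev/mapper/VolGroup-lv_root

    Running resize2fs with no size argument grows the filesystem to fill the logical volume, which sidesteps the arithmetic that failed above: resize2fs takes a size in filesystem blocks, not bytes, so 128849018880 was rejected as out of range.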

    Read the article

  • Problems mounting HPUX LVM+VXFS filesystem on Linux

    - by golimar
    I have a physical disk from an HP-UX system that I need to access from a Debian Linux for ia64 system. From the hpux-lvm-tools project I have the tools to access HP-UX LVM volumes (Linux LVM has a different on-disk format), and I also have the freevxfs driver. I know beforehand that the disk has three partitions, that the biggest one contains LVM volumes, and that some of those are VxFS filesystems.

    I can see the partitions:

      # cat /proc/partitions
      major minor  #blocks  name
         8    32  143374744 sdc
         8    33     512000 sdc1
         8    34  142452736 sdc2
         8    35     409600 sdc3

    It finds a VG in one of the disk partitions:

      # ./vgscan_hpux
      On /dev/sdc2 - vg1328874723

      # ./pvdisplay_hpux /dev/sdc2
      PV General Information
      ----------------------
      VG Creation Time        Fri Feb 10 12:52:03 2012
      Physical Volume ID      1766760336 1328874723
      Volume Group ID         1766760336 1328874723
      Physical Volumes in VG  1766760336 1328874723
      VG Actication Mode      0 - LOCAL
      PE Size                 64 MBs
      Lvol sizes
      ----------
      lvol1 - 8 Extents - 512 MBs
      lvol2 - 192 Extents - 12288 MBs
      lvol3 - 16 Extents - 1024 MBs
      ...
      lvol21 - 13 Extents - 832 MBs
      lvol22 - 224 Extents - 14336 MBs
      lvol23 - 16 Extents - 1024 MBs

    Then I activate that VG and some new devices appear in my system:

      # ./pvactivate_hpux /dev/sdc2
      VG vg1328874723 Activated succesfully with 23 lvols.
      #
      # ll /dev/mapper/
      total 0
      crw------- 1 root root 10, 59 Nov 26 16:08 control
      lrwxrwxrwx 1 root root      7 Nov 26 16:38 vg1328874723-lvol1 -> ../dm-0
      lrwxrwxrwx 1 root root      7 Nov 26 16:38 vg1328874723-lvol10 -> ../dm-9
      ...
      lrwxrwxrwx 1 root root      7 Nov 26 16:38 vg1328874723-lvol8 -> ../dm-7
      lrwxrwxrwx 1 root root      7 Nov 26 16:38 vg1328874723-lvol9 -> ../dm-8

    But:

      # mount /dev/mapper/vg1328874723-lvol18 /mnt/tmp
      mount: you must specify the filesystem type
      # mount -t vxfs /dev/mapper/vg1328874723-lvol18 /mnt/tmp
      mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg1328874723-lvol18,
             missing codepage or helper program, or other error
             In some cases useful info is found in syslog - try
             dmesg | tail or so

      # lsmod | grep vxfs
      freevxfs 23905 0

    I also tried to identify the raw data with the file command, and it just says 'data':

      # file -s /dev/mapper/vg1328874723-lvol18
      /dev/mapper/vg1328874723-lvol18: symbolic link to `../dm-17'
      # file -s /dev/dm-17
      /dev/dm-17: data

    Any clues?
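    One thing worth checking (an assumption, not a confirmed diagnosis for this disk): the Linux freevxfs driver only understands older VxFS disk layout versions, while HP-UX filesystems are often created with a newer layout that freevxfs silently rejects. Scanning the start of the volume for the VxFS magic number at least confirms whether a VxFS superblock is present at all:

      # look for the VxFS magic number (0xa501fcf5) in the first 8 KiB of the volume;
      # the byte order in the dump depends on the endianness of the HP-UX host
      dd if=/dev/mapper/vg1328874723-lvol18 bs=1k count=8 2>/dev/null \
        | xxd | grep -iE 'a501 ?fcf5|f5fc ?01a5'

    If the magic is there but mount still fails, the layout-version mismatch is the likely culprit; if it is absent, lvol18 may simply not be one of the VxFS volumes.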

    Read the article

  • Google Chrome warning that Javascript is disabled

    - by kirchoffs415
    I hope somebody can help. I keep getting the following message when I log on:

      Your Javascript is disabled. Limited functionality is available.

    It will stay for maybe a day, sometimes two. I have uninstalled Javascript and reinstalled, but it is still the same. I am using Chrome. I would be grateful for any help, many thanks. Dominic

    My system spec is as follows:

      OS Name                          Microsoft® Windows Vista™ Home Premium
      Version                          6.0.6002 Service Pack 2 Build 6002
      Other OS Description             Not Available
      OS Manufacturer                  Microsoft Corporation
      System Name                      DOM-PC
      System Manufacturer              Dell Inc.
      System Model                     Inspiron 1545
      System Type                      X86-based PC
      Processor                        Pentium(R) Dual-Core CPU T4200 @ 2.00GHz, 2000 Mhz, 2 Core(s), 2 Logical Processor(s)
      BIOS Version/Date                Dell Inc. A05, 25/02/2009
      SMBIOS Version                   2.4
      Windows Directory                C:\Windows
      System Directory                 C:\Windows\system32
      Boot Device                      \Device\HarddiskVolume3
      Locale                           United Kingdom
      Hardware Abstraction Layer       Version = "6.0.6002.18005"
      User Name                        DOM-PC\DOM
      Time Zone                        GMT Standard Time
      Installed Physical Memory (RAM)  3.00 GB
      Total Physical Memory            2.96 GB
      Available Physical Memory        1.38 GB
      Total Virtual Memory             5.89 GB
      Available Virtual Memory         4.25 GB
      Page File Space                  3.00 GB
      Page File                        C:\pagefile.sys

    Read the article

  • File system concepts (df command)

    - by mkab
    I'm finding it difficult to understand some things about the df command. Suppose I type df and get the following output:

      Filesystem   1k-blocks    Used         Avail     Capacity    Mounted on
      /dev/da0s1   some number  some number  number    percentage  /win
      /dev/da0s2   some number  some number  number    percentage  /win/home
      /dev/da0s3a  some number  some number  number    percentage  /
      devfs        some number  some number  number    percentage  /dev
      /dev/da0s3g  some number  some number  number    percentage  /local
      /dev/da0s3h  some number  some number  -number   102%        /reste
      /dev/da0s3d  some number  some number  number    percentage  /tmp
      /dev/da1s3f  some number  some number  number    percentage  /usr
      /dev/da1s3e  some number  some number  number    percentage  /var
      /dev/da1s1a  some number  some number  number    percentage  /public

    Are the answers to the following questions correct?

    1. How many physical drives do I have? Ans: 2 — da0s1 and da1s1.
    2. How many physical partitions on each disk? Ans: 8 for da0s1 and 1 for da1s1.
    3. How many BSD partitions on each physical partition? Ans: Impossible to determine; we have to use -T to determine the type.
    4. How is it possible for the file system /dev/da0s3h to be filled to 102%? And where is this overflowed data written? Ans: I have no idea for this one.

    Thanks.
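    On question 4, a likely explanation (stated as general background, not verified against this system): BSD's FFS reserves a percentage of each filesystem (minfree, typically 8%) that only root may write into, and df computes Capacity against the non-reserved space, so a filesystem that root has written into the reserve can legitimately report more than 100%. The reserve can be inspected with tunefs:

      # print the current tuning parameters, including the minimum free space threshold
      tunefs -p /dev/da0s3h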

    Read the article

  • RAM displayed is less than the actual RAM in my Windows 7

    - by Prateek Somani
    I am using Windows 7 and Ubuntu on the same machine. Earlier I had 3 GB of RAM, but now Windows is displaying just 1 GB. Please also find the output of the free command in my Ubuntu:

                   total       used       free     shared    buffers     cached
      Mem:       1008208     904808     103400       5736      13516     239596
      -/+ buffers/cache:     651696     356512
      Swap:      3127292      10252    3117040

    Has the swap memory consumed my 2 GB of RAM? Will I be able to use the whole 3 GB of RAM in my Windows?

    Regards, Prateek

    Update: I tried to run the lshw command and got the following output:

      *-memory
           description: System Memory
           physical id: 1b
           slot: System board or motherboard
           size: 1GiB
         *-bank:0
              description: SODIMM DDR3 Synchronous 1067 MHz (0.9 ns)
              product: HMT112S6BFR6C-H9
              vendor: Hynix
              physical id: 0
              serial: 2C71D069
              slot: Bottom - Slot 1
              size: 1GiB
              width: 64 bits
              clock: 1067MHz (0.9ns)
         *-bank:1
              description: SODIMM DDR3 Synchronous 1067 MHz (0.9 ns) [empty]
              product: 16JSF25664HZ-1G4F1
              vendor: Micron
              physical id: 1
              serial: FD421821
              slot: Bottom - Slot 2
              width: 64 bits
              clock: 1067MHz (0.9ns)

    Why is it able to detect the vendor/product name of the bank:1 RAM, but not its size and other details? Has my RAM gone faulty?
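    Two quick checks that might narrow this down (suggestions, not a diagnosis): on the Windows side, make sure the "Maximum memory" box under msconfig > Boot > Advanced options is unchecked, since it silently caps usable RAM; on the Ubuntu side, ask the SMBIOS tables directly what each slot reports, to cross-check the lshw output:

      # show what the BIOS/SMBIOS reports for each physical memory slot
      sudo dmidecode --type memory | grep -E 'Size|Locator|Part Number'

    If the BIOS itself reports an empty or unsized slot, the module (or its seating) is suspect rather than either operating system.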

    Read the article

  • Restoring Windows 2008 Server X86 and X64

    - by rihatum
    Restoring a Windows 2008 Server (Domain Controller): We are using Backup Exec System Recovery 2010 to image our DC. This software has a feature to convert the backup into a VMware or Hyper-V VM. I have also used disk2vhd to convert one of our DCs to a VHD, and when I connected it to Hyper-V it booted fine and I can log in — but as soon as I log in, I get the activation error: change product key, this product key isn't good for this machine, and so on.

    Question: in a real recovery situation, what would be the procedure to restore it, either virtual or onto a physical box, and still be able to log in and change the product key? In this scenario the machine is just locked down and I can't do anything; if that is the case, how would I replicate my production environment via these tools? Any ideas? I would be grateful for some real-world examples here.

    The same thing happens with our Exchange backup / test restore, whether physical or virtual: we can log in but nothing else. We don't have retail keys, as they are OEM keys, and I am wondering what would happen in a real scenario — would we be purchasing another key, or using the OEM key on our new server? This is a test environment I am trying to create by restoring our backups either into Hyper-V or onto physical test machines.

    Also, if I build up a Server 2008 machine in a Hyper-V VM, how can I restore just the system state backup of my DC into it? Will that give me the activation error too, even though I would use the trial ISOs provided by Microsoft? Kind regards
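    For the system-state question specifically, a sketch of the built-in route, assuming a Windows Server Backup system state backup exists (the drive letter and version identifier below are placeholders that wbadmin get versions would supply); on a domain controller the restore must be run from the Directory Services Restore Mode boot option:

      REM list the available backups on the attached backup target
      wbadmin get versions -backupTarget:E:

      REM restore only the system state (version string is a placeholder)
      wbadmin start systemstaterecovery -version:01/01/2012-09:00 -backupTarget:E:

    On the licensing side, royalty OEM keys are validated against the original machine's BIOS, which is why a restore to different hardware or a VM demands reactivation; in a real recovery the usual route is to enter a key you are entitled to use on the new hardware with slmgr.vbs /ipk <key> and then activate with slmgr.vbs /ato.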

    Read the article
