Search Results

Search found 5514 results on 221 pages for 'rpm repository'.

Page 67/221 | < Previous Page | 63 64 65 66 67 68 69 70 71 72 73 74  | Next Page >

  • How to copy partition from one disc to another (boot partition keeping all the vital data)?

    - by Patryk
    I have bought a new laptop, but the HDD, which runs at 5400 rpm, is not sufficient for me. The laptop runs Windows 7 64-bit. I have my 'old' drive (a better one - a Seagate Momentus 7200 rpm) and I would like to swap it in, but without reinstalling everything. And here my question arises: can I copy my boot partition from my laptop hard drive to my old drive so that it will boot from it properly? If so, how do I do it? Will Norton Ghost be useful here? My point would be to just replace this partition and leave the rest.
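
    For context on the operation being asked about, cloning a partition between two drives is usually done with an imaging tool (Clonezilla, Norton Ghost, and so on) or, from a Linux live CD, with dd. A rough sketch, assuming both drives are attached at the same time and that /dev/sda1 is the source partition and /dev/sdb1 an equal-or-larger target - the device names are examples only:

        # clone one partition block-for-block from a live Linux environment
        dd if=/dev/sda1 of=/dev/sdb1 bs=4M conv=noerror,sync
        sync

    Note that a Windows 7 install usually also needs the boot sector and the 100 MB "System Reserved" partition copied (or a startup repair run afterwards) before the cloned disk will boot.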

    Read the article

  • Best SQL Server Configuration with this hardware.

    - by DavidStein
    I just received my new SQL Server from Dell. The server will serve approximately 15 OLTP databases, which average 10GB in size. Here are the basic specs:
    Dell PowerEdge R510 with up to 12 Hot Swap HDDs, LED
    Intel Xeon E5649 2.53GHz, 12M Cache, 5.86 GT/s QPI, 6 core (Quantity of 2)
    48GB Memory (6x8GB), 1333MHz Dual Ranked RDIMMs for 2 Processors, Optimized
    PERC H700 Integrated RAID Controller, 1GB NV Cache
    300GB 15K RPM SAS 6Gbps 3.5in Hotplug Hard Drive (Quantity of 4)
    600GB 15K RPM SAS 6Gbps 3.5in Hotplug Hard Drive (Quantity of 6)
    My first thought was to use 3 arrays:
    OS - RAID 1 - (2) 300GB
    T-Log - RAID 1 - (2) 300GB
    DB - RAID 5 - (5) 600GB
    Backup - (1) 600GB - non-RAIDed
    However, I could also do the following after purchasing one more drive for backup:
    OS and T-Log - RAID 10 - (4) 300GB
    DB - RAID 10 - (6) 600GB
    The hard drive space is not an issue as the databases are not that large. I'm just trying to optimize the speed of the applications using these databases. So, what would you guys recommend?

    Read the article

  • EPM 11.1.2.2 Architecture: Financial Performance Management Applications

    - by Marc Schumacher
    Financial Management can be accessed either by a browser-based client or by SmartView. Starting from release 11.1.2.2, the Financial Management Windows client no longer accesses the Financial Management Consolidation server. All tasks that require an online connection (e.g. load and extract tasks) can only be done using the web interface. Any client connection initiated by a browser or SmartView is sent to the Oracle HTTP Server (OHS) first. Based on the path given in the URL (e.g. hfmadf, hfmofficeprovider), OHS decides whether to forward the request to the new Financial Management web application based on the Oracle Application Development Framework (ADF) or to the .NET based application serving SmartView retrievals running on Internet Information Server (IIS). Any requests sent to the ADF web interface that need to be processed by the Financial Management application server are sent to IIS using the HTTP protocol and are then forwarded using DCOM to the Financial Management application server. SmartView requests, which are processed by IIS in the first place, are forwarded to the Financial Management application server using DCOM as well. The Financial Management application server uses OLE DB database connections via native database clients to talk to the Financial Management database schema. Communication between the Financial Management DME Listener, which handles requests from EPMA, and the Financial Management application server is based on DCOM.
    Unlike most other components, Essbase Analytics Link (EAL) does not have an end-user interface. The only user interface is a plug-in for the Essbase Administration Services console, which is used for administration purposes only. End users interact with a Transparent or Replicated Partition that is created in Essbase and populated with data by EAL. The Analytics Link Server deployed on WebLogic communicates through the HTTP protocol with the Analytics Link Financial Management Connector that is deployed in IIS on the Financial Management web server. The Analytics Link Server interacts with the Data Synchronization Server using the EAL API. The Data Synchronization Server acts as a target of a Transparent or Replicated Partition in Essbase and uses a native database client to connect to the Financial Management database. The Analytics Link Server uses JDBC to connect to relational repository databases and the Essbase JAPI to connect to Essbase.
    As with most Oracle EPM System products, browser-based clients and SmartView can be used to access Planning. The Java-based Planning web application is deployed on WebLogic, which is configured behind an Oracle HTTP Server (OHS). Communication between Planning and the Planning RMI Registry Service is done using Java Remote Method Invocation (RMI). Planning uses JDBC to access relational repository databases and talks to Essbase using the CAPI. Be aware that beside the Planning System database, a dedicated database schema is needed for each application that is set up within Planning.
    Like Planning, Profitability and Cost Management (HPCM) has a fairly simple architecture. Beside the browser-based clients and SmartView, a web service consumer can be used as a client too. All clients access the Java-based web application deployed on WebLogic through Oracle HTTP Server (OHS). Communication between Profitability and Cost Management and the EPMA Web Server is done using the HTTP protocol. JDBC is used to access the relational repository databases as well as data sources. The Essbase JAPI is utilized to talk to Essbase.
    For Strategic Finance, two clients exist: SmartView and a Windows client. While SmartView communicates through the web layer to the Strategic Finance Server, the Strategic Finance Windows client makes a direct connection to the Strategic Finance Server using RPC calls. Connections from Strategic Finance Web as well as from Strategic Finance Web Services to the Strategic Finance Server are made using RPC calls too. The Strategic Finance Server uses its own file-based data store. JDBC is used to connect to the EPM System Registry from the web and application layers.
    Disclosure Management has three kinds of clients. While the browser-based client and SmartView interact with the Disclosure Management web application directly through Oracle HTTP Server (OHS), the Taxonomy Designer does not connect to the Disclosure Management server. Communication to relational repository databases is done via JDBC; to connect to Essbase, the Essbase JAPI is utilized.

    Read the article

  • Could too low a voltage over long periods damage my computer fan?

    - by Sopalajo de Arrierez
    Computer fans usually run at 12 volts, but most today allow 9 volts or even less to slow the fan speed (RPM, revolutions per minute) down. If the voltage is too low, the fan stops, but I can see it "trying to start again". For example: my Tacens 9 dB fan stops at about 10 volts, but 10.5 volts is not enough to start it again, and the motor tries to move the fan (I can see a small movement as an "attempt" to move) every 1-5 seconds, but it does not succeed, so the fan stays at 0 RPM. Could that "attempt" to move damage the fan's internal motor when it lasts for hours or days?

    Read the article

  • PHP 5.2 installation with GD on CentOS 6

    - by Pratik Thakkar
    I am trying to install PHP 5.2.17 on CentOS 6.2. I have downloaded RPMs from http://www6.atomicorp.com/channels/atomic/centos/6/i386/RPMs/ The problem is that the PHP RPM seems to have GD disabled by default. Hence, in spite of installing the php-gd RPM, GD is disabled. Is there any way I can enable GD? Atomicorp seems to be the only website that has PHP 5.2.17 RPMs. I am not an advanced enough user to compile PHP. I would appreciate help on this.
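
    For illustration, a quick way (not from the original post) to check whether the GD extension is actually present and being loaded, assuming the Atomicorp php and php-gd RPMs are installed:

        # confirm the php-gd package is installed
        rpm -q php-gd
        # list the modules PHP actually loads; gd should appear here
        php -m | grep -i gd
        # the extension is normally enabled by a small config file dropped in /etc/php.d/
        ls /etc/php.d/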

    Read the article

  • Choice and setup of version control

    - by Peter M
    I am about to set up a new laptop and, in the process, transition to a new version control system as part of a general cleanup. Currently I use a centralized version control system (yes it is VSS, and yes I know all the pros and cons of that system, but as a single-user system it works well for me). I have very few requirements for a new system and I am free to choose among any of the current mainstream players, but cost constraints will push me towards OSS. Some of my requirements are:
    - Runs on a single machine (i.e. the laptop in question) under Windows
    - I am not sharing things with other developers or workers - this is more for my own historical benefit
    - I want to version source code, documentation and binary files
    - I have a large hierarchy of projects that are unrelated (see below)
    - I have files within the hierarchy that don't need to be controlled (but could be)
    - Some projects use Visual Studio, so some integration there could be nice
    - There could be some sharing of files between jobs
    - I generally only need a small amount of branching in code files
    The directory hierarchy that I have at the moment is somewhat like:
    Root
    |--Customer #1
    |   |--Job #1
    |   |   |--Data files received from Customer for Job (not controlled)
    |   |   |--Documentation files (controlled)
    |   |   |--Project information files (not controlled - but could be)
    |   |   |--Software Project Files (controlled)
    |   |   |--Scratch dir for job (not controlled)
    |   |--Job #2
    |   |   (same structure as above)
    |--Customer #2
    |   |..
    |--Customer #n
        |..
    Currently I have about 22 customers with differing numbers of projects underneath them. At the moment I have a single VSS repository based at the root of the directory structure. If I stayed with a centralized system (i.e. SVN) I believe that I should keep the same approach and continue with a single repository based at the root dir. Is this a valid approach? However, if I move to a distributed tool then I am unsure of how I should handle the situation. My initial guess is that I should not have a repository based on the root of my entire directory structure - but that is a guess, so I really don't know how valid it is. Should I pitch a distributed approach at the Root, Customer, Job or sub-Job directory level? Also, what I am not clear on with distributed tools (and perhaps with SVN as well) is whether I can branch parts of a repository. For example, I can see branching source code in software projects as being useful, but branching my documentation as not being useful. So if I pitch a repository at the Job level, can I just branch the Software Project Files (see the sketch below)? Or would all files in that Job be branched? Every time I look at distributed tools I get a nagging feeling that they are not suited to my style of setup. I am uncomfortable with the idea of having to manually set up something like 50 to 80 separate repositories (if I pitch at the Job level, or 20+ if at the Customer level) within my directory hierarchy. This feeling also extends to having all those repositories scattered around - however I do have a backup strategy that I trust, so this latter feeling is pretty well unfounded. So what advice can you all give me? Thanks in advance!
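
    For what it's worth, in Subversion a branch is just a cheap copy of any subtree, so with a single repository rooted at Root you could branch only the software files of one job and leave its documentation alone. A minimal sketch (the repository URL and paths are hypothetical and simplified, not taken from the layout above):

        # branch just the software files of Customer #1 / Job #1
        svn copy file:///var/svn/work/Customer1/Job1/Software/trunk \
                 file:///var/svn/work/Customer1/Job1/Software/branches/experiment \
                 -m "Branch Job1 software for an experiment"

    Distributed tools, by contrast, generally branch a whole repository at a time, which is one reason the per-Job versus per-Customer repository question matters there.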

    Read the article

  • How to install INFORMIX (IDS) 11.50 on CentOS 5.4

    - by d23
    Getting ERROR: The wizard cannot continue because of the following error: could not load wizard specified in /wizard.inf (104) Solution: Uninstall everything related to Java and the JRE. Then download the latest version of the JRE for Linux x86 or x64, the rpm.bin one, and follow these instructions http://www.java.com/en/download/help/linux_install.xml "To install the Linux RPM (self-extracting) file". Make an informix user and group (as root), then uncompress the (Informix package) .tar into the /opt/informix directory that you have created. Then run ./ids_install, and the GUI will work OK. Hope it helps.
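
    A rough sketch of those user/group and unpack steps (the package file name is a placeholder, not from the original post):

        # as root: create the informix group and user
        groupadd informix
        useradd -g informix informix
        # unpack the IDS install package into /opt/informix
        mkdir -p /opt/informix
        tar -xf ids-11.50.tar -C /opt/informix
        # start the installer; the GUI wizard should now load
        cd /opt/informix && ./ids_install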

    Read the article

  • Upgrading OpenSSL in CentOS 5.3

    - by Lin
    I want to use one IP to host many domains with individual SSL certificates (requires SNI). In CentOS 5.3, the latest version of OpenSSL I can find an RPM for is 0.9.8e, which does not support SNI. I want to upgrade to 0.9.8k but I can't find an RPM. I could compile from source, but if I try to remove the existing OpenSSL package through yum, it wants me to remove all packages that depend on OpenSSL (100+ packages). EDIT: I ended up installing 0.9.8k without overwriting the previous version. Now I both avoid breaking dependencies and can use SNI. Was this the best action?
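
    A minimal sketch of that kind of side-by-side build (the version and install prefix are examples, not from the original post):

        # build 0.9.8k into its own prefix so the distribution's OpenSSL RPM stays untouched
        tar -xzf openssl-0.9.8k.tar.gz && cd openssl-0.9.8k
        ./config --prefix=/opt/openssl-0.9.8k shared
        make && make install
        # then compile the web server from source against /opt/openssl-0.9.8k instead of the system libraries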

    Read the article

  • Qmail installation on CentOS i386

    - by tike
    I was trying to install QmailToaster on my CentOS server. I did all of the following, not just once but repeatedly, as I got errors and continued, but I feel I need some help. I followed all the steps of this wiki documentation: http://wiki.qmailtoaster.com/index.php/CentOS_5_QmailToaster_Install#Begin_Install I followed the whole procedure, but when I reached the install step I always got this error:
    cnt50-install-script.sh: line 80: rpmbuild: command not found
    error: File not found by glob: /usr/src/redhat/RPMS/i386/daemontools-toaster*.rpm
    Installing ucspi-tcp-toaster . . .
    Shall we continue? (yes, skip, quit) [y]/s/q:
    cnt50-install-script.sh.4: line 90: rpmbuild: command not found
    error: File not found by glob: /usr/src/redhat/RPMS/i386/ucspi-tcp-toaster*.rpm
    Installing vpopmail-toaster . . .
    Shall we continue? (yes, skip, quit) [y]/s/q:
    Any suggestions, please?
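
    The output itself points at the likely cause: the rpmbuild command is missing, so the toaster RPMs are never built and the later file glob finds nothing. A hedged first step (my suggestion, not from the wiki page quoted above) would be to install the RPM build tooling and re-run the script:

        # rpmbuild is provided by the rpm-build package on CentOS 5
        yum install rpm-build
        # then re-run the QmailToaster install script
        sh ./cnt50-install-script.sh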

    Read the article

  • What does the fan speed sensor really report?

    - by T. Verron
    I have an overheating issue on my netbook (ASUS EeePC 1015PW), which I'm trying to troubleshoot. Using lm-sensors while overheating gave me this output:
    acpitz-virtual-0
    Adapter: Virtual device
    temp1:        +86.0°C  (crit = +100.0°C)
    eeepc-isa-0000
    Adapter: ISA adapter
    fan1:        4089 RPM
    coretemp-isa-0000
    Adapter: ISA adapter
    Core 0:       +82.0°C  (crit = +100.0°C)
    Core 1:       +80.0°C  (crit = +100.0°C)
    But I can't hear the fan. So I enabled manual PWM control and set the fan to full speed, and after a few minutes I got this output:
    acpitz-virtual-0
    Adapter: Virtual device
    temp1:        +65.0°C  (crit = +100.0°C)
    eeepc-isa-0000
    Adapter: ISA adapter
    fan1:        4016 RPM
    coretemp-isa-0000
    Adapter: ISA adapter
    Core 0:       +62.0°C  (crit = +100.0°C)
    Core 1:       +58.0°C  (crit = +100.0°C)
    And this time I could hear the fan spinning. So there's quite obviously an issue with either fan control or fan monitoring. Hence the question: what kind of physical information does the fan speed sensor really report? Thank you
    PS. I should have added that the computer is a small, hard-to-disassemble netbook, so I can't and don't want to try experiments like "block the fan and see what the sensor reports".
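
    For reference, the manual control mentioned above normally goes through the standard hwmon sysfs attributes; the exact hwmon index varies by machine and kernel, so treat the paths below as a sketch rather than the exact ones used on this netbook:

        # list hwmon devices and read the reported fan speed
        grep . /sys/class/hwmon/hwmon*/name
        cat /sys/class/hwmon/hwmon*/fan1_input
        # pwm1_enable: 0 = full speed, 1 = manual control, 2 = automatic (driver dependent)
        echo 1 > /sys/class/hwmon/hwmon1/pwm1_enable
        echo 255 > /sys/class/hwmon/hwmon1/pwm1     # duty cycle 0-255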

    Read the article

  • WebCenter Customer Spotlight: Hyundai Motor Company

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter
    Solution Summary
    Hyundai Motor Company is one of the world’s fastest-growing car manufacturers, ranked as the fifth-largest in 2011. The company also operates the world’s largest integrated automobile manufacturing facility in Ulsan, Republic of Korea, which can produce 1.6 million units per year. They undertook a project to improve business efficiency and reinforce data security by centralizing the company’s sales, financial, and car manufacturing documents into a single repository. Hyundai Motor Company chose Oracle Exalogic, Oracle Exadata, Oracle WebLogic Server, and Oracle WebCenter Content 11g, as they provided better performance, stability, storage, and scalability than their competitors. Hyundai Motor Company cut the overall time spent each day on document-related work by around 85%, saved more than US$1 million in paper and printing costs, laid the foundation for a smart work environment, and supported their future growth in the competitive car industry.
    Company Overview
    Hyundai Motor Company is one of the world’s fastest-growing car manufacturers, ranked as the fifth-largest in 2011. The company also operates the world’s largest integrated automobile manufacturing facility in Ulsan, Republic of Korea, which can produce 1.6 million units per year. The company strives to enhance its brand image and market recognition by continuously improving the quality and design of its cars.
    Business Challenges
    To maximize the company’s growth potential, Hyundai Motor Company undertook a project to improve business efficiency and reinforce data security by centralizing the company’s sales, financial, and car manufacturing documents into a single repository. Specifically, they wanted to:
    - Introduce a smart work environment to improve staff productivity and efficiency, and take advantage of rapid company growth due to new, enhanced car designs
    - Replace a legacy document system managed by individual staff to improve collaboration, the visibility of corporate documents, and sharing of work-related files between employees
    - Improve the security and storage of documents containing corporate intellectual property, and prevent intellectual property loss when staff leave the company
    - Eliminate delays when downloading files from the central server to a PC
    - Build a large, single document repository to more efficiently manage and share data between 30,000 staff at the company’s headquarters
    - Establish a scalable system that can be extended to Hyundai offices around the world
    Solution Deployed
    After conducting a large-scale benchmark test, Hyundai Motor Company chose Oracle Exalogic, Oracle Exadata, Oracle WebLogic Server, and Oracle WebCenter Content 11g, as they provided better performance, stability, storage, and scalability than their competitors.
    Business Results
    - Lowered the overall time spent each day on all document-related work by approximately 85% - from 4.5 hours to around 42 minutes on an average day
    - Saved more than US$1 million per year in printer, paper, and toner costs, and laid the foundation for a completely paperless environment
    - Reduced staff’s time spent requesting and receiving documents about car sales or designs from supervisors by 50%, by storing and managing all documents across the corporation in a single repository
    - Cut the time required to draft new-car manufacturing, sales, and design documents by 20%, by allowing employees to reference high-quality data, such as marketing strategy and product planning documents already in the system
    - Enhanced staff productivity at company headquarters by 9% by reducing the document-related tasks of 30,000 administrative and research and development staff
    - Ensured the system could scale to hold 3 petabytes of car sales, manufacturing, and design data by 2013 and be deployed at branches worldwide
    "We chose Oracle Exalogic, Oracle Exadata, and Oracle WebCenter Content to support our new document-centralization system over their competitors as Oracle offers stable storage for petabytes of data and high processing speeds. We have cut the overall time spent each day on document-related work by around 85%, saved more than US$1 million in paper and printing costs, laid the foundation for a smart work environment, and supported our future growth in the competitive car industry." - Kang Tae-jin, Manager, General Affairs Team, Hyundai Motor Company
    Additional Information
    Hyundai Motor Company Customer Snapshot
    Oracle WebCenter Content

    Read the article

  • Adaptec 5805 not recognized after reboot

    - by Rakedko ShotGuns
    After rebooting the system, the controller is not recognized; it is only recognized again if the computer is fully shut down and powered off first. I have recently updated the firmware to "Adaptec RAID 5805 Firmware Build 18948". How do I fix the problem?
    Configuration summary
    ---------------------------
    1. Server name.....................raid_test
    Adaptec Storage Manager agent...7.31.00 (18856)
    Adaptec Storage Manager console.7.31.00 (18856)
    Number of controllers...........1
    Operating system................Windows
    Configuration information for controller 1
    -------------------------------------------------------
    Type............................Controller
    Model...........................Adaptec 5805
    Controller number...............1
    Physical slot...................2
    Installed memory size...........512 MB
    Serial number...................8C4510C6C9E
    Boot ROM........................5.2-0 (18948)
    Firmware........................5.2-0 (18948)
    Device driver...................5.2-0 (16119)
    Controller status...............Optimal
    Battery status..................Charging
    Battery temperature.............Normal
    Battery charge amount (%).......37
    Estimated charge remaining......0 days, 16 hours, 12 minutes
    Background consistency check....Disabled
    Copy back.......................Disabled
    Controller temperature..........Normal (40C / 104F)
    Default logical drive task priority High
    Performance mode................Dynamic
    Number of logical devices.......1
    Number of hot-spare drives......0
    Number of ready drives..........0
    Number of drive(s) assigned to MaxCache cache...0
    Maximum drives allowed for MaxCache cache.......8
    MaxCache Read Cache Pool Size...0 GB
    NCQ status......................Enabled
    Stay awake status...............Disabled
    Internal drive spinup limit.....0
    External drive spinup limit.....0
    Phy 0...........................No device attached
    Phy 1...........................No device attached
    Phy 2...........................No device attached
    Phy 3...........................1.50 Gb/s
    Phy 4...........................No device attached
    Phy 5...........................No device attached
    Phy 6...........................No device attached
    Phy 7...........................No device attached
    Statistics version..............2.0
    SSD Cache size..................0
    Pages on fetch list.............0
    Fetch list candidates...........0
    Candidate replacements..........0
    69319...........................31293
    Logical device..................0
    Logical device name.............
    RAID level......................Simple volume
    Data space......................148,916 GB
    Date created....................09/19/2012
    Interface type..................Serial ATA
    State...........................Optimal
    Read-cache mode.................Enabled
    Preferred MaxCache read cache setting...Enabled
    Actual MaxCache read cache setting......Disabled
    Write-cache mode................Enabled (write-back)
    Write-cache setting.............Enabled (write-back)
    Partitioned.....................Yes
    Protected by hot spare..........No
    Bootable........................Yes
    Bad stripes.....................No
    Power Status....................Disabled
    Power State.....................Active
    Reduce RPM timer................Never
    Power off timer.................Never
    Verify timer....................Never
    Segment 0.......................Present: controller 1, connector 0, device 0, S/N 9RX3KZMT
    Overall host IOs................99075
    Overall MB......................4411203
    DRAM cache hits.................71929
    SSD cache hits..................0
    Uncached IOs....................29239
    Overall disk failures...........0
    DRAM cache full hits............71929
    DRAM cache fetch / flush wait...0
    DRAM cache hybrid reads.........3476
    DRAM cache flushes..............--
    Read hits.......................0
    Write hits......................0
    Valid Pages.....................0
    Updates on writes...............0
    Invalidations by large writes...0
    Invalidations by R/W balance....0
    Invalidations by replacement....0
    Invalidations by other..........0
    Page Fetches....................0
    0...............................0
    73..............................10822
    8...............................3
    46138...........................4916
    27184...........................15226
    20875...........................323
    16982...........................1771
    1563............................5317
    1948............................2969
    Serial attached SCSI
    -----------------------
    Type............................Disk drive
    Vendor..........................Unknown
    Model...........................ST3160815AS
    Serial Number...................9RX3KZMT
    Firmware level..................3.AAD
    Reported channel................0
    Reported SCSI device ID.........0
    Interface type..................Serial ATA
    Size............................149,05 GB
    Negotiated transfer speed.......1.50 Gb/s
    State...........................Optimal
    S.M.A.R.T. error................No
    Write-cache mode................Write back
    Hardware errors.................0
    Medium errors...................0
    Parity errors...................0
    Link failures...................0
    Aborted commands................0
    S.M.A.R.T. warnings.............0
    Solid-state disk (non-spinning).false
    MaxCache cache capable..........false
    MaxCache cache assigned.........false
    NCQ status......................Enabled
    Phy 0...........................1.50 Gb/s
    Power State.....................Full rpm
    Supported power states..........Full rpm, Powered off
    0x01............................113
    0x03............................98
    0x04............................99
    0x05............................100
    0x07............................83
    0x09............................75
    0x0A............................100
    0x0C............................99
    0xBB............................100
    0xBD............................100
    0xBE............................61
    0xC2............................39
    0xC3............................69
    0xC5............................100
    0xC6............................100
    0xC7............................200
    0xC8............................100
    0xCA............................100
    Aborted commands................0
    Link failures...................0
    Medium errors...................0
    Parity errors...................0
    Hardware errors.................0
    SMART errors....................0
    End of the configuration information for controller 1

    Read the article

  • Friday Tips #6, Part 2

    - by Chris Kawalek
    Here is a question about updating Oracle VM: Question: How can I perform Oracle VM 3 server updates from Oracle VM Manager? Answer by Gregory King, Principal Best Practices Consultant, Oracle VM Product Management: Server Update Manager is a built-in feature of the Oracle VM Manager. Basically, Server Update Manager automatically configures YUM updates on all the Oracle VM Servers, pointing each to our Unbreakable Linux Network (ULN) update channel for Oracle VM. The servers periodically check with our Oracle YUM repository and notify the Oracle VM Manager that an update is available for each server. Actual server updates must be triggered by the Oracle VM administrator – they are not executed automatically. At this point, you can use the Oracle VM Manager to put a server into maintenance mode which live migrates all the running Oracle VM Guests to other Oracle VM Servers in the server pool. Once all the Oracle VM Guests have been migrated, the Oracle VM administrator can trigger the update on the server. The entire process is documented in the Installation and Upgrade Guide of Oracle VM Documentation so I won’t spend time detailing the steps. However, configuring the Server Update Manager is exceedingly simple. Simply navigate to the Tools and Resources tab in the Oracle VM Manager, select the link for Server Update Manager and ensure the following values are added to the text boxes as shown in the illustration below: YUM Base URL: http://public-yum.oracle.com/repo/OracleVM/OVM3/latest/x86_64 YUM GPG Key: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle Every server in the pool will be automatically configured for YUM updates once you choose the Apply button. Many thanks to Greg and Rick for providing the answers to this week's questions. If you want to ask us something, hit up Twitter and use hashtag #AskOracleVirtualization. See you next week! -Chris 
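
    For comparison, the YUM Base URL and GPG key values given in the answer above, expressed as a hand-written yum repository definition, would look roughly like this (a sketch for a manually configured server; the repo section name is my own placeholder, and Server Update Manager normally does this for you):

        # /etc/yum.repos.d/ovm3-latest.repo  (example file, created by hand)
        [ovm3_latest]
        name=Oracle VM 3 latest
        baseurl=http://public-yum.oracle.com/repo/OracleVM/OVM3/latest/x86_64
        gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
        gpgcheck=1
        enabled=1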

    Read the article

  • RAID 1 not performing as expected

    - by Faken
    I recently bought a new 320GB hard drive for my computer to set up RAID 1 on it for some added security. Installation went as smoothly as could possibly be (plug in power, plug in data cable, start up computer, Intel software recognized the new drive, right click, create RAID 1, done!). However, for some inexplicable reason, I seem to get strange test results when using BENCH32. On my old configuration, a single 7200 rpm drive, I achieved about 60 MB/s write and 70 MB/s read. With the new RAID 1 configuration, I would expect the write to be slightly diminished but the read to be significantly improved (though not exactly double speed). However, with the new configuration, I am getting 90 MB/s write and only about 80 MB/s read. I should NOT be getting improved write performance, especially NOT better than read! What's going on? My system setup is:
    Q6600 2.4GHz CPU
    4GB DDR2 667MHz RAM
    onboard Intel ICH9R "RAID chip"
    2x Seagate 7200 RPM 320GB drives in RAID 1
    Windows 7 Home Premium 64-bit

    Read the article

  • CentOS: safe to yum reinstall after removing 32-bit packages?

    - by virtualeyes
    As per the CentOS FAQ on removing 32-bit packages present in a 64-bit install, is it safe to perform the last step on an existing installation (i.e. where the data and settings of possibly affected applications need to be preserved)? The FAQ says:
    You may also want to do this: yum reinstall \*
    The reason is that sometimes the /usr/share/ items (shared between BOTH packages) get removed when removing the 32-bit RPM packages.
    rpm -Va shows a number of entries like:
    /sbin/ethtool: at least one of file's dependencies has changed since prelinking
    S.?.....  /sbin/ethtool
    /usr/libexec/mysqld: at least one of file's dependencies has changed since prelinking
    S.?.....  /usr/libexec/mysqld
    along with /usr/share entries carrying the T flag (apparently just a file-time difference, which seems safe). The machine is up and running fine, but may not be whenever a reboot occurs. Any clue-in as to the real state of the machine (hosed or OK) would be appreciated. Thanks
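
    One cautious way to run that last step (my own sketch, not from the FAQ) is to capture the rpm verification output before and after the reinstall, so any change in the machine's state shows up in a diff:

        rpm -Va > /root/rpm-verify-before.txt 2>&1
        yum reinstall \*
        rpm -Va > /root/rpm-verify-after.txt 2>&1
        diff /root/rpm-verify-before.txt /root/rpm-verify-after.txt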

    Read the article

  • How does anti-virus on the host machine affect the performance of virtual machines?

    - by Ladislav Mrnka
    I'm diagnosing an issue with Oracle VirtualBox where a virtual machine sometimes performs terribly slowly (much slower than a notebook with a worse configuration):
    Notebook: i7 (2 cores with HT = 4 logical CPUs), 4GB RAM, 5400 rpm disk, Win 7 64-bit
    Virtual machine (Oracle VirtualBox):
    Host: i7 (4 cores with HT = 8 logical CPUs), 12 GB RAM, system runs from an SSD, the virtual machine from a 7200 rpm disk, Win 7 64-bit
    Virtual machine: 4 cores assigned, 8 GB RAM assigned, Win 2008 R2 Enterprise (64-bit)
    The virtual machine uses a bridge to a separate network interface (the machine has two)
    VPN is used for network communication
    No other virtual machine runs on the host
    The host has ESET Smart Security installed
    All software is updated to the latest versions.
    My question is whether anti-virus on the host machine can somehow affect the performance of the virtual machine and, if so, how can I turn it off without turning off the anti-virus itself?

    Read the article

  • seaudit report detail

    - by user1014130
    I've just started using SELinux in the last 6 months and am getting to grips with it. However, using sealert on a new CentOS 6 server, I'm not getting the level of detail I was with CentOS 5. To illustrate, running sealert -a /var/log/audit/audit.log on CentOS 5 I get:
    Summary:
    SELinux is preventing postdrop (postfix_postdrop_t) "getattr" to /var/log/httpd/error_log (httpd_log_t).
    Detailed Description:
    SELinux denied access requested by postdrop. It is not expected that this access is required by postdrop and this access may signal an intrusion attempt. It is also possible that the specific version or configuration of the application is causing it to require additional access.
    Allowing Access:
    Sometimes labeling problems can cause SELinux denials. You could try to restore the default system file context for /var/log/httpd/error_log, restorecon -v '/var/log/httpd/error_log' If this does not work, there is currently no automatic way to allow this access. Instead, you can generate a local policy module to allow this access - see FAQ (http://fedora.redhat.com/docs/selinux-faq-fc5/#id2961385) Or you can disable SELinux protection altogether. Disabling SELinux protection is not recommended. Please file a bug report (http://bugzilla.redhat.com/bugzilla/enter_bug.cgi) against this package.
    Additional Information:
    Source Context                root:system_r:postfix_postdrop_t
    Target Context                system_u:object_r:httpd_log_t
    Target Objects                /var/log/httpd/error_log [ file ]
    Source                        postdrop
    Source Path                   /usr/sbin/postdrop
    Port
    Host
    Source RPM Packages           postfix-2.3.3-2.1.el5_2
    Target RPM Packages
    Policy RPM                    selinux-policy-2.4.6-279.el5_5.1
    Selinux Enabled               True
    Policy Type                   targeted
    MLS Enabled                   True
    Enforcing Mode                Enforcing
    Plugin Name                   catchall_file
    Host Name                     server109-228-26-144.live-servers.net
    Platform                      Linux server109-228-26-144.live-servers.net 2.6.18-194.8.1.el5 #1 SMP Thu Jul 1 19:04:48 EDT 2010 x86_64 x86_64
    Alert Count                   1
    First Seen                    Wed Jun 13 11:43:55 2012
    Last Seen                     Wed Jun 13 11:43:55 2012
    but on CentOS 6 I just get:
    Summary:
    SELinux is preventing postdrop (postfix_postdrop_t) "getattr" to /var/log/httpd/error_log (httpd_log_t).
    Detailed Description:
    SELinux denied access requested by postdrop. It is not expected that this access is required by postdrop and this access may signal an intrusion attempt. It is also possible that the specific version or configuration of the application is causing it to require additional access.
    Allowing Access:
    Sometimes labeling problems can cause SELinux denials. You could try to restore the default system file context for /var/log/httpd/error_log, restorecon -v '/var/log/httpd/error_log' If this does not work, there is currently no automatic way to allow this access. Instead, you can generate a local policy module to allow this access - see FAQ (http://fedora.redhat.com/docs/selinux-faq-fc5/#id2961385) Or you can disable SELinux protection altogether. Disabling SELinux protection is not recommended. Please file a bug report (http://bugzilla.redhat.com/bugzilla/enter_bug.cgi) against this package.
    I'm running exactly the same command. Does anyone have any idea why I'm not getting the "Additional Information" that I do with CentOS 5? Thanks in advance, Dylan
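
    As a side note (my suggestion, not part of the original question): on CentOS 6 the source/target contexts and paths that used to appear under "Additional Information" are still present in the raw audit records, so they can be pulled straight from the audit log:

        # show recent AVC denials with the full source and target contexts
        ausearch -m avc -ts recent
        # summarize each denial and the reason for it
        audit2allow -w -a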

    Read the article

  • Fan is spinning too fast just in Windows - software?

    - by B. Roland
    I've recently replaced my fans (CPU, GPU, and I bought a CHA fan). The GPU itself stayed the same, but I've occasionally seen its fan spin twice as fast as usual, though only rarely. The problem is that the CPU fan in Windows (especially 7) spins too fast, because it keeps the CPU under 40°C while running at 3300-3600 RPM, which I think is too high. If I switch to Ubuntu, it keeps the CPU at ~40-45°C with 2500-2800 RPM, which is a big difference in numbers, and in noise. I'm looking for a manual fan control solution, or some way to reduce Windows' fan speed control multipliers. I bought the new fans for lower noise (and they deliver it, but not at 3,600 RPM). Thank you!

    Read the article

  • Hard drive placement

    - by zm15
    I'm a video editor working with large HD files. I am building a new computer and need some help. I will be running 2 hard drives: one with the operating system and all the programs, and one with all the project files I will be working from. I am keeping these separate. I will be purchasing a 10k rpm hard drive, so I will have one 10k rpm drive and one 7200 rpm drive. Should I put the OS on the faster drive, or my working files on the faster drive?

    Read the article

  • High-End Gaming Desktop PC in France [closed]

    - by Lerikunus
    Hello, I was looking to buy an ALIENWARE AREA 51, as my desktop crashed some days ago. But the preliminary ship date is 03.01.11 and I need the new PC by Friday next week =( Does someone know an (online) store where the build and shipping time is low enough that I can get it by next Friday? I am living in France (Nice). Or any place I can ask for this kind of advice? This is my desired configuration:
    Overclocked Intel® Core™ i7 980X Extreme Six Core Processor (4.0GHz, 12MB Cache)
    12GB Triple Channel 1600MHz DDR3
    Dual 2GB ATI Radeon™ HD 6950
    2TB RAID 1+0 (4x 1TB SATA-II, 7,200 RPM, 32MB Cache HDDs)
    2TB RAID 1 (2x 2TB SATA-II, 7,200 RPM, 32MB Cache HDDs)
    Thank you very much!

    Read the article

  • What makes an Apple hard drive special?

    - by Michael Shnitzer
    The Mac Pro has a specific hard drive for sale in the Apple Store for $549.00. The drive has the following specs:
    Serial ATA, 3 Gb per second
    7200 RPM
    Amazon has a hard drive with the same specs for $169.99. The only difference I can tell is that the Apple hard drive label says it has "Apple HDD Firmware". What exactly is the benefit of this firmware, and is there something I am missing that makes up for the price difference between these two drives? Update: My initial comparison between the two drives was unfair. Apparently 2TB drives that are 3 Gb/s and 7200 RPM are quite a bit more than $169.99. Dell has a 2TB SATA Caviar Black from Western Digital that is $319.99, which is closer to Apple's price.

    Read the article

  • Using svnadmin dump to revert the latest revision committed

    - by Wux
    What I need is for the latest (mistaken) revision to be reverted, and for the repository not to store it in any way. That is, I'm trying to erase the latest revision out of existence, NOT trying to fix things by going back to the latest-1 revision. In other words, I want to avoid the repository growing in size. Suppose the head revision is 100. I know that the suggested answer is: svnadmin dump -r0:80 old-repo | svnadmin load --force-uuid new-repo. What I'm confusing myself about is: why not svnadmin dump -r81:100 old-repo? Why the first and not the second solution? I suppose svnadmin dump will erase the repository completely, keeping only revisions 0-80 in a dump file? Is my understanding of "taking a part out of the repository into a dump file" with svnadmin dump completely wrong? (That is, revisions 81-100 are still there.) Sincere apologies if this has been asked before; I did spend some time searching, though nothing specific about this was found. A topic link in case I missed it would be greatly appreciated.
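
    For what it's worth, svnadmin dump only reads the source repository; it never removes anything from it, which is the answer to the confusion above. The usual recipe therefore builds a fresh repository from a partial dump. A sketch, using the revision range from the suggested answer (the repository names are placeholders):

        # export everything up to the last revision you want to keep
        svnadmin dump -r 0:80 old-repo > upto-r80.dump
        # build a brand-new repository that has never contained the later revisions
        svnadmin create new-repo
        svnadmin load --force-uuid new-repo < upto-r80.dump

    Dumping -r81:100 instead would simply produce a file containing the unwanted revisions, while old-repo itself would still hold all 100.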

    Read the article

  • Large Django application layout

    - by Rob Golding
    I am in a team developing a web-based university portal, which will be based on Django. We are still in the exploratory stages, and I am trying to find the best way to lay the project/development environment out. My initial idea is to develop the system as a Django "app", which contains sub-applications to separate out the different parts of the system. The reason I intended to make these "sub" applications is that they would not have any use outside the parent application whatsoever, so there would be little point in distributing them separately. We envisage that the portal will be installed in multiple locations (at different universities, for example) so the main app can be dropped into a number of Django projects to install it. We therefore have a different repository for each location's project, which is really just a settings.py file defining the installed portal applications, and a urls.py routing the urls to it. I have started to write some initial code, though, and I've come up against a problem. Some of the code that handles user authentication and profiles seems to be without a home. It doesn't conceptually belong in the portal application as it doesn't relate to the portal's functionality. It also, however, can't go in the project repository - as I would then be duplicating the code over each location's repository. If I then discovered a bug in this code, for example, I would have to manually replicate the fix over all of the location's project files. My idea for a fix is to make all the project repos a fork of a "master" location project, so that I can pull any changes from that master. I think this is messy though, and it means that I have one more repository to look after. I'm looking for a better way to achieve this project. Can anyone recommend a solution or a similar example I can take a look at? The problem seems to be that I am developing a Django project rather than just a Django application.
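
    One pattern that fits the "homeless" authentication/profile code described above (an illustration, not something proposed in the original post) is to move it into its own reusable Django app in a single repository, and have every location project install that app as a dependency, so a bug fix only ever has to be made once:

        # inside each location project's virtualenv; the package name and URL are hypothetical
        pip install -e "git+https://example.com/uni-portal-accounts.git#egg=uni_portal_accounts"
        # then add 'uni_portal_accounts' to INSTALLED_APPS in that location's settings.py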

    Read the article
