Search Results

Search found 5514 results on 221 pages for 'rpm repository'.

Page 87/221 | < Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >

  • How-To: Run CMSDK against a RAC cluster

    - by frank.closheim
    Using CMSDK in a production environment often requires a robust, reliable, failover-enabled repository. When using Oracle Real Application Clusters (RAC) with your CMSDK repository, you need a specific configuration in place to support such a setup. This post explains the configuration steps required when running CMSDK 9.0.4.6 with Oracle WebLogic Server (WLS). In the previous CMSDK 9.0.4.2 version, a RAC-enabled connect string looked like this:

        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = rac1)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = rac2)(PORT = 1521))
          (LOAD_BALANCE = NO)
          (FAILOVER = ON)
          (CONNECT_DATA =
            (SERVICE_NAME = rac)
            (failover_mode = (type=select)(method=basic))))

    CMSDK 9.0.4.6 makes use of data sources to connect to the underlying database. These data sources are configured inside your application server, such as Oracle WebLogic Server. In Oracle WebLogic Server 10.3.4, a single data source implementation was introduced to support a RAC cluster. It responds to Fast Application Notification (FAN) events to provide Fast Connection Failover (FCF), Runtime Connection Load-Balancing (RCLB), and graceful shutdown of RAC instances. XA affinity is supported at the global transaction ID level. The new feature is called WebLogic Active GridLink for RAC, which is implemented as the GridLink data source within WebLogic Server. This GridLink data source also works with Oracle Single Client Access Name (SCAN). SCAN is a feature used in RAC environments that provides a single name for clients to access any Oracle Database running in the cluster. You can think of SCAN as a cluster alias for the databases in the cluster. The benefit is that the client's connect information does not need to change if you add or remove nodes or databases in the cluster. The CMSDK 9.0.4.6 documentation describes how to create a regular JDBC data source named jdbc/OracleDS. Please refer to the following document, which describes in detail how to create a GridLink data source in WLS.
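    As an illustration of why SCAN simplifies this, a GridLink-style data source pointed at a SCAN address can use a connect descriptor like the sketch below (the SCAN hostname, port, and service name are placeholders, not values from the article):

        jdbc:oracle:thin:@(DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
          (CONNECT_DATA = (SERVICE_NAME = rac)))

    Because the SCAN name resolves to the whole cluster, the URL stays valid as nodes are added or removed; no per-node ADDRESS entries are needed.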

    Read the article

  • Single-developer Git workflow (moving from straightforward FTP)

    - by melat0nin
    I'm trying to decide whether moving to a VCS is sensible for me. I am a single web developer in a small organisation (5 people). I'm considering a VCS (Git) for these reasons: version control, offsite backup, and a centralised code repository (which I can access from home). At the moment I generally work on a live server: I FTP in, make my edits, save them, then re-upload and refresh. The edits are usually to theme/plugin files for CMSes (e.g. concrete5 or WordPress). This works well but provides no backup and no version control. I'm wondering how best to integrate a VCS into this procedure. I would envisage setting up a Git server on the company's web server, but I'm not clear how to push changes out to client accounts (usually VPSes on the same server) - at the moment I simply log in over SFTP with their details and make the changes directly. I'm also not sure what would sensibly represent a repository - would each client's website get its own? Any insights or experience would be really helpful. I don't think I need the full power of Git by any means, but basic version control and de facto cloud access would be really useful.
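    One common pattern for exactly this setup is a bare repository per client site on the company server, with a post-receive hook that checks pushed code out into that client's web root. A sketch, with illustrative paths and site names that are not from the question:

        $ git init --bare /srv/git/client-site.git        # one bare repo per client site
        $ cat /srv/git/client-site.git/hooks/post-receive # create this hook with the lines below
        #!/bin/sh
        # deploy the pushed branch into the client's web root
        GIT_WORK_TREE=/var/www/client-site git checkout -f
        $ chmod +x /srv/git/client-site.git/hooks/post-receive

    You then edit and commit locally and git push to the server; the hook updates the live files, replacing the manual FTP step, and the repository doubles as the offsite backup.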

    Read the article

  • Upgrading to 12.10 on an external hard drive

    - by Tom Childers
    I did some googling on this and didn't find anything specific to my situation. I currently have 12.04 installed on an external USB hard drive, and it's working great. I want to upgrade it to 12.10. My bandwidth is very limited, so a friend will download 12.10 for me and put it on a flash stick; then I can upgrade without having to do the download myself. Which particular version of the 12.10 download file(s) should I get? Are there alternate 12.10 downloads that have all the packages? How do I set it up so that when I upgrade 12.04 I can tell it to look in some local repository for the 12.10 files? Can I just dump the 12.10 files in some local directory, or do I have to go through some complex commands to create a local repository? I'm pretty new to Linux, so a long process of complex terminal commands will probably be a show-stopper for me. Remember that my 12.04 install resides on an external hard drive, and I have a laptop with multiple USB ports. Thanks! Advait
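    Dumping the .deb files in a local directory is in fact workable; turning that directory into a repository apt can read takes only a few commands (a sketch - the directory names are made up, and dpkg-scanpackages comes from the dpkg-dev package):

        $ mkdir ~/quantal-debs && cp /media/usbstick/*.deb ~/quantal-debs
        $ cd ~/quantal-debs && dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
        $ echo 'deb file:/home/YOUR-USER/quantal-debs ./' | sudo tee -a /etc/apt/sources.list
        $ sudo apt-get update

    After the update, apt will consider the local directory a package source alongside the usual archives.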

    Read the article

  • Ubuntu 12.10 won't display properly after kernel upgrade

    - by Daniel
    After updating the system today, Ubuntu doesn't display correctly. The desktop now looks like this. It was working properly before. I had to use the terminal to run Synaptic Package Manager so I could view the update history, which is as follows:

        Commit Log for Wed Nov 7 11:50:36 2012
        Upgraded the following packages:
        linux-image-generic (3.5.0.17.19) to 3.5.0.18.21
        Installed the following packages:
        linux-image-3.5.0-18-generic (3.5.0-18.29)
        linux-image-extra-3.5.0-18-generic (3.5.0-18.29)

    Prior to this issue, the last active driver was nvidia-current-updates, version 304.51. I tried using the nvidia-current driver, version 304.51.really.304.43, instead, but the problem persists. I tried running nvidia-settings from a terminal so I could try configuring something, but the application informs me that the Nvidia driver is not being used. As the x-swat repository has nothing for Quantal, I desperately used the unstable x-edgers repository and upgraded, but to no avail, so I purged it. The display should normally be full HD, but the only available resolutions now are 1024x768 (4:3) and 800x600 (4:3). The system is a Dell XPS-L702X with NVIDIA GeForce GT 555M and a 17" screen. How can I fix this problem? Update: I tried using the Nouveau third-party driver and this fixes the issue. However, if you have any idea how to get the Nvidia drivers working properly with the latest kernel, please share, as I've noticed some videos playing very slowly on the system, though I'm not sure exactly why.
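    A common first check in this situation is whether DKMS rebuilt the proprietary module for the new kernel; a sketch (not a guaranteed fix, and the package names assume the driver set described above):

        $ dkms status                                      # is an nvidia module built for 3.5.0-18?
        $ sudo apt-get install linux-headers-$(uname -r)   # DKMS needs headers matching the running kernel
        $ sudo apt-get install --reinstall nvidia-current-updates
        $ sudo dpkg-reconfigure nvidia-current-updates     # re-runs the module build for the new kernel

    If dkms status shows no build for the 3.5.0-18 kernel, that alone explains both the fallback resolutions and nvidia-settings reporting the driver as unused.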

    Read the article

  • Can I install 12.04 packages on 11.10?

    - by Jason R
    I'm running 11.10 and am trying to apply the fix to this bug in Empathy, shown at the very bottom. There is an updated package for the offending component available in the Precise repository, and someone even posted a backported .deb package for use on Oneiric. However, when I try to install that package, it seems to have a dependency on a package that isn't available for Oneiric:

        (Reading database ... 254452 files and directories currently installed.)
        Preparing to replace telepathy-indicator 0.0.7-0ubuntu1 (using telepathy-indicator_0.1.1-0ubuntu1_amd64.deb) ...
        Unpacking replacement telepathy-indicator ...
        dpkg: dependency problems prevent configuration of telepathy-indicator:
         telepathy-indicator depends on libunity9 (>= 3.4.6); however:
          Package libunity9 is not installed.
        dpkg: error processing telepathy-indicator (--install):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         telepathy-indicator

    The person who posted the backported telepathy-indicator package indicated that it depends upon libunity-dev-5.0; the latest version in the Oneiric repositories is of 4.0 vintage. I also can't find a libunity9 available for Oneiric, so I'm wondering: is it possible to just add the Precise repository to my list and pull the updated packages from there, or should I not expect that they would operate correctly?
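    Pulling individual Precise packages is possible with apt pinning, which keeps the rest of the system on Oneiric (an illustrative sketch - the pin priority and package names are assumptions, and a core library like libunity9 may still drag in more than expected):

        # /etc/apt/sources.list.d/precise.list
        deb http://archive.ubuntu.com/ubuntu precise main

        # /etc/apt/preferences.d/precise-pin
        Package: *
        Pin: release n=precise
        Pin-Priority: 100          # never prefer Precise versions automatically

        $ sudo apt-get update
        $ sudo apt-get install -t precise telepathy-indicator libunity9

    The safer route is usually rebuilding the source package against Oneiric's own libraries instead.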

    Read the article

  • Interfaces on an abstract class

    - by insta
    My coworker and I have different opinions on the relationship between base classes and interfaces. I'm of the belief that a class should not implement an interface unless that class can be used when an implementation of the interface is required. In other words, I like to see code like this:

        interface IFooWorker { void Work(); }

        abstract class BaseWorker {
            // ... base class behaviors ...
            public abstract void Work();
            protected string CleanData(string data) { ... }
        }

        class DbWorker : BaseWorker, IFooWorker {
            public override void Work() {
                Repository.AddCleanData(base.CleanData(UI.GetDirtyData()));
            }
        }

    DbWorker is what gets the IFooWorker interface, because it is an instantiable implementation of the interface. It completely fulfills the contract. My coworker prefers the nearly identical:

        interface IFooWorker { void Work(); }

        abstract class BaseWorker : IFooWorker {
            // ... base class behaviors ...
            public abstract void Work();
            protected string CleanData(string data) { ... }
        }

        class DbWorker : BaseWorker {
            public override void Work() {
                Repository.AddCleanData(base.CleanData(UI.GetDirtyData()));
            }
        }

    where the base class gets the interface, and by virtue of this all inheritors of the base class are of that interface as well. This bugs me, but I can't come up with concrete reasons why, beyond "the base class cannot stand on its own as an implementation of the interface". What are the pros and cons of his method vs. mine, and why should one be used over the other?

    Read the article

  • How do you plan your asynchronous code?

    - by NullOrEmpty
    I created a library that is an invoker for a web service somewhere else. The library exposes asynchronous methods, since web service calls are a good candidate for that. At the beginning everything was just fine: I had methods with easy-to-understand operations in a CRUD fashion, since the library is a kind of repository. But then the business logic started to become complex, and some of the procedures involve chaining many of these asynchronous operations, sometimes with different paths depending on the result value, etc. Suddenly, everything is very messy; stopping the execution at a breakpoint is not very helpful, and finding out what is going on, or where in the process timeline you have stopped, becomes a pain. Development becomes less quick, less agile, and catching those bugs that happen once in 1000 times becomes hell. From the technical point of view, a repository that exposes asynchronous methods looked like a good idea, because some persistence layers could have delays, and you can use the async approach to make the most of your hardware. But from the functional point of view, things became very complex, and considering those procedures where a dozen different calls were needed... I don't know the real value of the improvement. After reading about the TPL for a while, it looked like a good idea for managing tasks, but the moment you have to combine them and start to reuse existing functionality, things become very messy. I have had a good experience using it for very concrete scenarios, but a bad experience using it broadly. How do you work asynchronously? Do you use it always? Or just for long-running processes? Thanks.

    Read the article

  • How to make the internal subwoofer work on an Asus G73JW?

    - by CodyLoco
    I have an Asus G73JW laptop which has an internal subwoofer built in. Currently, the system detects the internal speakers as a 2.0 system (the only other option I can change to is 4.0). I found a bug report here: https://bugs.launchpad.net/ubuntu/+source/alsa-driver/+bug/673051 which discusses the bug; according to it, a fix was sent upstream back at the end of 2010. I would have thought this would have made it into 12.04, but I guess not? I tried following the link given at the very bottom to install the latest ALSA drivers, here: https://wiki.ubuntu.com/Audio/InstallingLinuxAlsaDriverModules, however I keep running into an error when trying to install:

        sudo apt-get install linux-alsa-driver-modules-$(uname -r)
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package linux-alsa-driver-modules-3.2.0-24-generic
        E: Couldn't find any package by regex 'linux-alsa-driver-modules-3.2.0-24-generic'

    I believe I have added the repository correctly:

        sudo add-apt-repository ppa:ubuntu-audio-dev/ppa
        [sudo] password for codyloco:
        You are about to add the following PPA to your system:
        This PPA will be used to provide testing versions of packages for supported Ubuntu releases.
        More info: https://launchpad.net/~ubuntu-audio-dev/+archive/ppa
        Press [ENTER] to continue or ctrl-c to cancel adding it
        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.7apgZoNrqK --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv 4E9F485BF943EF0EABA10B5BD225991A72B194E5
        gpg: requesting key 72B194E5 from hkp server keyserver.ubuntu.com
        gpg: key 72B194E5: public key "Launchpad Ubuntu Audio Dev team PPA" imported
        gpg: Total number processed: 1
        gpg: imported: 1 (RSA: 1)

    I also ran an update (I followed the instructions in the fix above). Any ideas?
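    The "unable to locate package" error usually just means the PPA publishes no build for that exact kernel version. Confirming what it actually ships is a quick check (a diagnostic sketch):

        $ sudo apt-get update
        $ apt-cache search linux-alsa-driver-modules       # list the kernel builds the PPA provides
        $ apt-cache policy linux-alsa-driver-modules-$(uname -r)

    If nothing matches the running kernel, the package set predates it, and the PPA route is a dead end until the PPA is updated.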

    Read the article

  • New Solaris 11.2 beta features: SMF stencils

    - by user13366125
    As much as there is often a lot of discussion about configuration items inside the SMF repository (like the hostname), it brings an important advantage: it introduces the concept of dependencies for configuration changes. Which services have to be restarted when I change a configuration item? Do you remember all the services that depend on the hostname and need a restart after changing it? SMF solves this by putting the information about dependencies into its configuration; you define it in the manifests. However, as much configuration as you may put into SMF, most applications still insist on getting their configuration from traditional configuration files, like resolv.conf for the resolver or puppet.conf for Puppet. So you need a way to take the information out of the SMF repository and generate a config file with it. In the past, the way to do so was some scripting inside the start method that generated the config file before the service started. Solaris 11.2 offers a new feature in this area: it introduces a generic method that enables you to create config files from SMF properties. It's called SMF stencils. (read more)
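    Loosely, the moving parts look like the sketch below: a stencil file containing $%{property} expansions, plus service properties telling SMF where to write the generated file. The property-group layout and paths here are from memory and should be treated as illustrative, not authoritative:

        # /lib/svc/stencils/myapp.stencil - plain text plus SMF property expansions
        hostname $%{config/hostname}

        # Wire it to a service (names are assumptions; see the Solaris 11.2 docs)
        $ svccfg -s site/myapp addpg myapp_conf configfile
        $ svccfg -s site/myapp setprop myapp_conf/path = astring: "/etc/myapp.conf"
        $ svccfg -s site/myapp setprop myapp_conf/stencil = astring: "myapp.stencil"
        $ svcadm refresh site/myapp      # regenerates /etc/myapp.conf from the stencil

    The payoff is that svcadm refresh regenerates the file from the repository, so the dependency tracking described above extends to the flat config file.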

    Read the article

  • Not able to install LTT tool in 10.04

    - by Ashoka
    I have tried the commands below to install LTTng on VMware running Ubuntu 10.04, but got the following errors. Please help. From the link https://launchpad.net/~lttng/+archive/ppa:

        $ sudo apt-add-repository ppa:lttng/ppa
        $ sudo apt-get update
        $ sudo apt-get install lttng-tools lttng-modules-dkms babeltrace

        user@usr:~$ sudo apt-add-repository ppa:lttng/ppa
        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv C541B13BD43FA44A287E4161F4A7DFFC33739778
        gpg: requesting key 33739778 from hkp server keyserver.ubuntu.com
        gpg: unable to execute program `/usr/local/libexec/gnupg/gpgkeys_curl': No such file or directory
        gpg: no handler for keyserver scheme `hkp'
        gpg: keyserver receive failed: keyserver error

        user@usr:~$ sudo apt-get update
        .....
        user@usr:~$ sudo apt-get install lttng-tools lttng-modules-dkms babeltrace
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Couldn't find package lttng-tools

    My system details:

        user@usr:~$ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=10.04
        DISTRIB_CODENAME=lucid
        DISTRIB_DESCRIPTION="Ubuntu 10.04.4 LTS"
        user@usr:~$ uname -a
        Linux usr 2.6.32-42-generic #95-Ubuntu SMP Wed Jul 25 15:57:54 UTC 2012 i686 GNU/Linux

    Regards, Ashoka
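    Two things stand out in that transcript: gpg is running from a locally built copy under /usr/local that is missing its keyserver helper (gpgkeys_curl), so the PPA key is never imported; and with the key missing, the PPA's package lists may never become usable. A sketch of a workaround, assuming the distro's own gpg in /usr/bin is intact:

        # Import the key explicitly, then see whether the PPA publishes lucid builds at all
        $ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 \
              --recv-keys C541B13BD43FA44A287E4161F4A7DFFC33739778
        $ sudo apt-get update
        $ apt-cache policy lttng-tools

    If apt-cache still finds nothing, the PPA simply has no packages for 10.04, and a newer release (or a source build) is needed.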

    Read the article

  • No Unity after ubuntu 12.10 upgrade

    - by Aivaras
    After I upgraded from Ubuntu 12.04 to 12.10, I got low-graphics mode and no Unity - just the mouse and the wallpaper. So I got into a terminal via Ctrl+Alt+T, launched Chrome, and searched for a solution. As a result, I tried this:

        sudo sh amd-driver-installer-12.6-legacy-x86.x86_64.run

    It did not work. Then I tried this:

        sudo add-apt-repository ppa:makson96/fglrx
        sudo apt-get update
        sudo apt-get upgrade
        sudo apt-get install fglrx-legacy

    That did not work either. I removed the repository, brought the xorg version back to 1.13, and tried this:

        sudo sh /usr/share/ati/fglrx-uninstall.sh
        sudo apt-get remove --purge fglrx fglrx_* fglrx-amdcccle* fglrx-dev* xorg-driver-fglrx
        sudo apt-get remove --purge xserver-xorg-video-ati xserver-xorg-video-radeon
        sudo apt-get install xserver-xorg-video-ati
        sudo apt-get install --reinstall libgl1-mesa-glx libgl1-mesa-dri xserver-xorg-core

    It did bring the screen resolution back, but still no Unity. Is there something else I could do? My graphics card:

        lspci | grep VGA
        01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV620 [Mobility Radeon HD 3400 Series]
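    Since the desktop comes up but the shell doesn't, it is worth checking whether the driver now satisfies Unity's 3D requirements at all (a diagnostic sketch; the test binary ships with nux on releases of this era):

        $ /usr/lib/nux/unity_support_test -p    # prints the GL renderer and a yes/no per requirement
        $ setsid unity                          # try relaunching the shell from the terminal

    If the support test reports "no" on any line with the open-source radeon driver loaded, the problem is 3D acceleration rather than Unity itself.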

    Read the article

  • Does (should?) changing the URI scheme name change the semantics?

    - by Doug
    If we take http://example.com/foo, is it fair to say that ftp://example.com/foo points to the same resource, just using a different mechanism for resolving it (and of course possibly a different representation, but perhaps not)? This came to light in a discussion we were having about some internal tooling with Git. We have to process some Git repositories, and they come to us as "git@{authority}/{path}", but the library we're using to interface with them doesn't support the git protocol. I suggested that we should make the service robust in that it tries to use HTTP or SSH - in essence, discovering which protocols/schemes are supported for resolving the repository at {path} under each {authority}. This was met with some criticism: "We don't know if that's the same repository". My response was: "It had better be!" Looking at RFC 3986, I see this excerpt:

        URI "resolution" is the process of determining an access mechanism and the appropriate parameters necessary to dereference a URI; this resolution may require several iterations. To use that access mechanism to perform an action on the URI's resource is to "dereference" the URI.

    Which makes me think that the resolution process is permitted to try different protocols, because:

        Although many URI schemes are named after protocols, this does not imply that use of these URIs will result in access to the resource via the named protocol.

    The only concern I have, I guess, is that I only see reference to the notion of changing protocols when it comes to traversing relationships:

        it is possible for a single set of hypertext documents to be simultaneously accessible and traversable via each of the "file", "http", and "ftp" schemes if the documents refer to each other with relative references.

    I'm inclined to think I'm wrong in my initial belief, because the Normalization and Comparison section of said RFC doesn't mention any way of treating two URIs as equivalent if they use different schemes. It seems like schemes named after or based on IP protocols ought to have this notion, at least?

    Read the article

  • "Attach to native process failed" with Apache 2.0 Agent 2.202 for RHEL5 Linux 64bit

    - by Richard
    In trying to install the Apache 2.0 Agent 2.202 for RHEL5 Linux 64-bit, the dialogue appears as follows:

        # export JAVAHOME=/usr/java/jdk1.6.0_24/; echo $JAVAHOME
        /usr/java/jdk1.6.0_24/
        # ./setup
        Launching installer...
        Attach to native process failed

    On the server we have the following JREs, and I've tried both:

        $ sudo rpm -qa | egrep "(openjdk|icedtea)"
        java-1.6.0-openjdk-1.6.0.0-1.27.1.10.8.el5_8

    And SELinux appears to be off:

        # cat /etc/sysconfig/selinux
        SELINUX=disabled
        SELINUXTYPE=targeted

    Read the article

  • Installing VLC on CentOS 6.2

    - by suraj
    I'm using CentOS 6.2, and I tried to install VLC Player using yum, but it shows "No package vlc available". I tried the command below:

        [root@localhost ~]# yum install vlc
        Loaded plugins: fastestmirror, refresh-packagekit
        Loading mirror speeds from cached hostfile
         * base: ftp.iitm.ac.in
         * extras: ftp.iitm.ac.in
         * updates: ftp.iitm.ac.in
        base               | 3.7 kB  00:00
        extras             | 3.5 kB  00:00
        updates            | 3.5 kB  00:00
        updates/primary_db | 3.4 MB  01:17
        virtualbox         | 951 B   00:00
        Setting up Install Process
        No package vlc available.
        Error: Nothing to do

    Is there any rpm package available?
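    VLC is not in the stock CentOS repositories, so yum cannot find it until a third-party repository that carries it (plus EPEL for its dependencies) is enabled. A hedged sketch - release-RPM URLs go stale, so verify them against the projects' sites before running:

        # EPEL for dependencies (the URL is an assumption; check the EPEL wiki)
        $ rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
        # ...then enable a repo that actually ships VLC (e.g. RPM Fusion or linuxtech)
        $ yum repolist          # confirm the new repos are active
        $ yum install vlc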

    Read the article

  • Veewee, Vagrant, Puppet, Erlang and RabbitMQ

    - by Tobias
    I am kind of stuck with a problem I have been trying to wrap my head around for days now. Here is what I am doing: using Veewee, I create a VirtualBox image and then create a Vagrant box from it. See here, here. Finally I run Puppet from Vagrant to install RabbitMQ, see here. Veewee, Vagrant and VirtualBox all run on Mac OS X 10.7.4. The Vagrant box itself is CentOS 6.2. This worked fine for quite some time, until I recreated the VirtualBox image a couple of days ago. During installation of the rabbitmq-plugins in my Puppet run, I now get the following error:

        /Stage[main]/Rabbitmq/Exec[rabbitmq-plugins]/returns: erlexec: HOME must be set

    My RabbitMQ Puppet configuration can be found in my GitHub repo for the project, but here is the most important part:

        $version = "2.8.7"
        $url = "http://www.rabbitmq.com/releases/rabbitmq-server/v${version}/rabbitmq-server-${version}-1.noarch.rpm"

        package { "erlang":
          ensure => "present",
        }

        package { "rabbitmq-server":
          provider => "rpm",
          source   => $url,
          require  => Package["erlang"],
        }

        exec { "rabbitmq-plugins":
          path    => "/usr/bin:/usr/sbin:/bin",
          command => "rabbitmq-plugins enable rabbitmq_management",
          require => Package["rabbitmq-server"],
        }

    My additional repositories, e.g. epel, are defined in Veewee's postinstall.sh right at the top of the file. Finally, this is what I get from '/etc/init.d/rabbitmq-server status':

        [{pid,2834},
         {running_applications,[{rabbit,"RabbitMQ","2.8.7"},
                                {ssl,"Erlang/OTP SSL application","4.1.6"},
                                {public_key,"Public key infrastructure","0.13"},
                                {crypto,"CRYPTO version 2","2.0.4"},
                                {mnesia,"MNESIA CXC 138 12","4.5"},
                                {os_mon,"CPO CXC 138 46","2.2.7"},
                                {sasl,"SASL CXC 138 11","2.1.10"},
                                {stdlib,"ERTS CXC 138 10","1.17.5"},
                                {kernel,"ERTS CXC 138 10","2.14.5"}]},
         {os,{unix,linux}},
         {erlang_version,"Erlang R14B04 (erts-5.8.5) [source] [64-bit] [rq:1] [async-threads:30] [kernel-poll:true]\n"},
         {memory,[{total,24993120},
                  {processes,10328496},
                  {processes_used,10321296},
                  {system,14664624},
                  {atom,1175905},
                  {atom_used,1143841},
                  {binary,17192},
                  {code,11416020},
                  {ets,766168}]},
         {vm_memory_high_watermark,0.4},
         {vm_memory_limit,205851852},
         {disk_free_limit,1000000000},
         {disk_free,7089795072},
         {file_descriptors,[{total_limit,924},
                            {total_used,4},
                            {sockets_limit,829},
                            {sockets_used,2}]},
         {processes,[{limit,1048576},{used,131}]},
         {run_queue,0},
         {uptime,6}]

    Sources on the web suggest that I have to set HOME. Of course HOME was set when I logged into the box: for user vagrant it was '/home/vagrant' and for root it was '/root'. As always, any hints/ideas/suggestions/assumptions are more than welcome. Thanks a lot! Cheers, Tobi
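    For what it's worth, the usual explanation for this error is that Puppet's exec runs with a near-empty environment, so HOME is unset even though interactive logins have it, and erlexec refuses to start without it. At the shell level, the equivalent fix is simply to supply HOME (a sketch; in the manifest this corresponds to adding an environment => "HOME=/root" parameter to the Exec["rabbitmq-plugins"] resource):

        # Run the plugin command the way the exec resource needs it: with HOME defined
        $ sudo HOME=/root rabbitmq-plugins enable rabbitmq_management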

    Read the article

  • MD3200 - 3 to 4x less throughput than MD1220. Am I missing something here?

    - by Igor Polishchuk
    I have two R710 servers with similar configurations. One in my office has an MD1220 attached; the other, in my hosting vendor's datacenter, has an MD3200. I'm getting significantly worse throughput from the MD3200 at my vendor's setup. I'm mostly interested in sequential writes, and I'm getting these results in bonnie++ and dd tests:

        Seq. writes on MD1220 in my office:    1.1 GB/s - bonnie++, 1.3 GB/s - dd
        Seq. writes on MD3200 at my vendor's:  240 MB/s - bonnie++, 310 MB/s - dd

    Unfortunately, I could not test exactly the same configurations, but the two I have should be comparable. If anything, my well-performing environment is cheaper than the badly-performing one. I expect at least similar throughput from these two setups. My vendor cannot really help me. Hopefully somebody more familiar with DAS performance can look at this and tell me whether I'm missing something here or my expectations are too high. To summarize, the question is: is it reasonable to expect about 100 MB/s of sequential write throughput per couple of drives in RAID 10 on the MD3200? Is there any trick to enable such performance on the MD3200 with dual controllers, as opposed to the simple MD1220 with a single H800 adapter? More details about the configurations:

        The good one in my office:
        Dell R710, 2x X5650 @ 2.67 GHz (12 cores), 96 GB DDR3, RHEL 5.5, kernel 2.6.18-194.26.1.el5 x86_64
        20x 300 GB 2.5" SAS 10K in a single RAID 10, 1 MB chunk size, on MD1220 + Dell H800 I/O controller with 1 GB cache in the host

        The not-so-good one at my vendor's:
        Dell R710, 2x L5520 @ 2.27 GHz (8 cores), 144 GB DDR3, RHEL 5.5, kernel 2.6.18-194.11.4.el5 x86_64
        20x 146 GB 2.5" SAS 15K in a single RAID 10, 512 KB chunk size, Dell MD3200, 2 I/O controllers in the array with 1 GB cache each

    Additional information: I've also run the same tests on the same vendor's host, but the storage was two RAIDs of 14x 146 GB 15K RPM drives in RAID 10, striped together at the OS level, on MD3000+MD1000. The performance was about 25% worse than on the MD3200, despite having more drives. When I ran similar tests on the internal storage of my vendor's host (2x 146 GB 15K RPM drives in RAID 1, PERC 6i), I got about 128 MB/s seq. writes - just two internal drives gave me about half of 20 drives' throughput on the MD3200. The random I/O performance of the MD3200 setup is OK; it gives me at least 1300 IOPS. I mostly have problems with sequential I/O throughput. Thank you for looking into it. Regards, Igor
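    For anyone reproducing these numbers, a direct-I/O dd run keeps the page cache from flattering the result (a sketch; adjust the mount point and size to the array under test):

        # Sequential-write test bypassing the page cache, with a final flush before dd exits
        $ dd if=/dev/zero of=/mnt/md3200/testfile bs=1M count=16384 oflag=direct conv=fdatasync
        $ rm /mnt/md3200/testfile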

    Read the article

  • Dependency issue while installing Nagios plugins

    - by M. Saâd
    I have a dependency problem while installing nagios-plugins:

        yum install nagios-plugins-all
        ...
        --> Processing Dependency: /usr/bin/sensors for package: nagios-plugins-sensors-1.4.15-7.el6.i686
        --> Finished Dependency Resolution
        Error: Package: nagios-plugins-sensors-1.4.15-7.el6.i686 (epel)
               Requires: /usr/bin/sensors
        You could try using --skip-broken to work around the problem
        You could try running: rpm -Va --nofiles --nodigest

    OS: RHEL 6.1. Installed packages:

        nagios.i686          3.2.3-3.el6.rf
        nagios-plugins.i686  1.4.15-7.el6
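    On RHEL, /usr/bin/sensors is shipped by the lm_sensors package, so the quickest fix is to ask yum what provides the missing file and install that first (this assumes the base RHEL channel is reachable):

        $ yum provides '*/bin/sensors'      # should point at lm_sensors
        $ sudo yum install lm_sensors
        $ sudo yum install nagios-plugins-all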

    Read the article

  • How to measure disk performance?

    - by Jakub Šturc
    I am going to "fix" a friend's computer this weekend. From the symptoms he describes, it looks like he has a disk performance problem with his 5400 rpm disk. I want to be sure the disk is the problem, so I want to "scientifically" measure its performance. Which tools would you recommend for this job? Is there a standard set of numbers I can compare the measurement results with?
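    If the machine can boot a Linux live environment, two quick baselines are cheap to get (a sketch; /dev/sda is an assumption):

        $ sudo hdparm -t /dev/sda      # buffered sequential-read speed over a ~3 s sample
        $ sudo smartctl -a /dev/sda    # SMART health; reallocated/pending sectors often explain slowness

    As a rough yardstick, a healthy 5400 rpm laptop drive of that era manages somewhere around 50-80 MB/s sequential reads; results far below that, or a non-zero reallocated-sector count, point at the disk.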

    Read the article

  • SVN checkout/export too long to download

    - by user41671
    Hi, my checkout/export session in SVN is kind of weird. The file is just 300 KB in size, but the download keeps going and reaches megabytes in size. The file is in RPM format. I don't know if the file is corrupt or SVN has a bug. I tried to download the file using a web browser, and that download seems to work fine. What could the main problem be here?
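    One thing worth ruling out (a guess, not a diagnosis): if the RPM was committed without a binary MIME type, Subversion can treat it as text and apply keyword/EOL translation, which can mangle and inflate it on checkout. The property is cheap to check and fix (package.rpm is a placeholder name):

        $ svn propget svn:mime-type package.rpm    # empty or text/* is suspicious for an RPM
        $ svn propset svn:mime-type application/octet-stream package.rpm
        $ svn commit -m "mark RPM as binary" package.rpm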

    Read the article

  • Setting up sound on a LTSP server

    - by hfranco
    I've got a Fedora 13 server set up and running successfully. Unfortunately I'm having issues making sure the clients can hear sound. I've installed the necessary ALSA packages, but I'm not having any luck:

        # rpm -qa | grep alsa
        alsa-oss-1.0.17-4.fc12.x86_64
        alsa-lib-1.0.23-1.fc13.x86_64
        alsa-utils-1.0.23-3.fc13.x86_64
        alsa-oss-libs-1.0.17-4.fc12.x86_64
        alsa-plugins-pulseaudio-1.0.22-1.fc13.x86_64

    Any idea what I should be looking at?
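    On LTSP, client sound is normally switched on in lts.conf rather than via the server's own ALSA stack, so that may be the missing piece (a sketch; the chroot path varies by distribution and is an assumption here):

        # /opt/ltsp/i386/etc/lts.conf (path is distro-dependent)
        [Default]
        SOUND = True

    After editing, reboot the thin clients so they pick up the new directive.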

    Read the article

  • Does extra hard drive cache make a difference for streaming video?

    - by johnny
    I am looking at the following two drives for a RAID device, which will be streaming normal things but also a lot of video:

        Seagate Constellation ES.3 ST4000NM0033 - hard drive - 4 TB - SATA-600
        TOSHIBA DT01ACA300 3 TB 7200 RPM 64 MB Cache SATA 6.0 Gb/s 3.5"

    Will the 128 MB cache on the Seagate have an effect in the described scenario, compared to the 64 MB on the Toshiba? If so, what sort of difference can I expect? I'm using a QNAP device, if that matters.

    Read the article

  • Is bigger capacity RAM faster than smaller capacity RAM for the same clock and CL?

    - by didibus
    I know that bigger-capacity hard drives with the same RPM are faster than smaller-capacity hard drives. I was wondering if the same is true for RAM. Given two RAM kits clocked at 1600 MHz and with identical CLs (9-9-9-24), is a 2x8 GB kit going to perform better than a 2x4 GB kit? Note that I am not asking whether having more RAM will improve the performance of my PC; I'm asking whether the bigger-capacity RAM itself performs better. Thank you.

    Read the article

  • Very slow first handshake Apache

    - by Johan Larsson
    Does anyone have any ideas where I should start to fix this issue? The first handshake sometimes takes up to 20 s, but refreshes after that take only 0.9 s. The setup:

        100/10 Mbps connection
        Windows OS, 4 GB RAM
        Intel Core 2 @ 3.0 GHz and 7200 RPM HDD
        Apache 2.4, no SSL
        mod_security, mod_deflate, mod_expires, mod_rewrite enabled
        PHP & MySQL on the same machine

    I have seen much slower machines perform better; therefore I think my problem is only an optimization issue.
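    To separate DNS, TCP connect, and server think-time within those 20 seconds, curl's timing variables are a cheap first probe (a sketch; replace the URL, and use -o NUL on a Windows client):

        $ curl -o /dev/null -s -w "dns:%{time_namelookup} connect:%{time_connect} ttfb:%{time_starttransfer} total:%{time_total}\n" http://your-site.example/

    A first hit that stalls before time_starttransfer while repeats are fast often points at per-request reverse-DNS lookups or expensive first-touch initialization; confirming that HostnameLookups is Off in httpd.conf and temporarily disabling mod_security's rule set are common first steps.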

    Read the article

  • setting up Windows 2008 STD EN on dedicated server

    - by sunny
    Dear experts, I just purchased a dedicated server and I need to set it up to host my website. Please help me set up the server step by step. Server details:

        Windows 2008 STD EN
        SQL Server Web 2008
        Core2 Quad 2.4 GHz
        6 GB RAM, Single-Power
        150 GB VelociRaptor 10K RPM

    Please help me with:
    1. Setting up the server and hosting the website
    2. Email settings
    3. How to set DNS, as the domain is from another host

    Regards, Sunny

    Read the article
