Search Results

Search found 8744 results on 350 pages for 'yann core'.

  • MariaDB doesn't upgrade, 2 versions are installed

    - by zahorak
    I have a server running Debian Wheezy with MariaDB and ownCloud. A few days ago I wanted to update the packages because of the ownCloud updates, but something went wrong. Usually in this case I'd try removing and reinstalling the problematic packages, but on a server used by different people that no longer seems like a valid solution. Here is my console output:

      user@server:~$ sudo apt-get upgrade
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      You might want to run 'apt-get -f install' to correct these.
      The following packages have unmet dependencies:
       libmariadbclient18 : Depends: libmysqlclient18 (= 10.0.4+maria-1~wheezy) but 10.0.5+maria-1~wheezy is installed
       libmysqlclient18 : Depends: libmariadbclient18 (= 10.0.5+maria-1~wheezy) but 10.0.4+maria-1~wheezy is installed
       mariadb-client-10.0 : Depends: libmariadbclient18 (>= 10.0.5+maria-1~wheezy) but 10.0.4+maria-1~wheezy is installed
       mariadb-client-core-10.0 : Depends: libmariadbclient18 (>= 10.0.5+maria-1~wheezy) but 10.0.4+maria-1~wheezy is installed
       mariadb-server : Depends: mariadb-server-10.0 (= 10.0.5+maria-1~wheezy) but 10.0.4+maria-1~wheezy is installed
       mariadb-server-core-10.0 : Depends: libmariadbclient18 (>= 10.0.5+maria-1~wheezy) but 10.0.4+maria-1~wheezy is installed
      E: Unmet dependencies. Try using -f.

      user@server:~$ sudo apt-get upgrade -f
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Correcting dependencies... Done
      The following packages will be upgraded:
        libmariadbclient18 mariadb-server-10.0 owncloud
      3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
      7 not fully installed or removed.
      Need to get 0 B/37.2 MB of archives.
      After this operation, 3,565 kB of additional disk space will be used.
      Do you want to continue [Y/n]? Y
      Preconfiguring packages ...
      (Reading database ... 35901 files and directories currently installed.)
      Preparing to replace libmariadbclient18 10.0.4+maria-1~wheezy (using .../libmariadbclient18_10.0.5+maria-1~wheezy_amd64.deb) ...
      Unpacking replacement libmariadbclient18 ...
      dpkg: error processing /var/cache/apt/archives/libmariadbclient18_10.0.5+maria-1~wheezy_amd64.deb (--unpack):
       trying to overwrite '/usr/lib/mysql/plugin/dialog.so', which is also in package mariadb-server-10.0 10.0.4+maria-1~wheezy
      Errors were encountered while processing:
       /var/cache/apt/archives/libmariadbclient18_10.0.5+maria-1~wheezy_amd64.deb
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    I tried removing the 10.0.4 version of libmariadbclient18, but I wasn't really successful in doing that. So my last hope is here: do you have any ideas how exactly I could fix this issue? Thanks very much.
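
    One workaround people commonly reach for with this kind of dpkg file conflict is to force the overwrite of the contested file and then let apt finish the half-done upgrade. This is only a sketch, not from the original post, reusing the cached .deb path from the error above; forcing dpkg should be a last resort, ideally after backing up the databases:

      # Let dpkg overwrite /usr/lib/mysql/plugin/dialog.so (currently owned by the old
      # mariadb-server-10.0 package), then let apt sort out the remaining upgrades.
      sudo dpkg -i --force-overwrite /var/cache/apt/archives/libmariadbclient18_10.0.5+maria-1~wheezy_amd64.deb
      sudo apt-get -f install
      sudo apt-get upgrade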

  • OSX 10.6.6 SSH md5 break-in check

    - by Alex
    Information: Recently one of the Linux servers that I access was compromised to steal passwords and SSH keys using a modified ssh binary. This led me to question whether the attacker had also compromised my OS X laptop, which had SSH access turned on. A Sophos virus scan turned up nothing, and I did not have rkhunter installed before the attack, so I could not compare hashes of the system binaries to be sure. However, because OS X is relatively standard within each of its major releases, I asked friends for the MD5 hashes of md5 /usr/bin/ssh and md5 /usr/sbin/sshd as a basic first check to see if there was anything different about my machine. A few emails later I have found the following data:

      Version (Arch) [N]        MD5 (/usr/bin/ssh)                MD5 (/usr/sbin/sshd)
      OSX 10.5.8 (PPC)   [3]    1e9fd483eef23464ec61c815f7984d61  9d32a36294565368728c18de466e69f1
      OSX 10.5.8 (intel) [5]    1e9fd483eef23464ec61c815f7984d61  9d32a36294565368728c18de466e69f1
      OSX 10.6.x (intel) [7]    591fbe723011c17b6ce41c537353b059  e781fad4fc86cf652f6df22106e0bf0e
      OSX 10.6.x (intel) [4]    58be068ad5e575c303ec348a1c71d48b  33dafd419194b04a558c8404b484f650
      Mine 10.6.6 (intel)       df344cc00a294c91230c65e8b7332a79  b5094ccf4cd074aaf573d4f5df75906a

    where N is the number of machines with that MD5, and the last row is my laptop. The sample is relatively heterogeneous, spanning a few years of different makes and models of Apples and different versions of 10.6.x. The different hash for my system made me worried that these binaries might have been compromised, so I made sure that my backup for the week was good and dived into formatting my system and reinstalling OS X. After reinstalling OS X from the manufacturer DVD, I found that the MD5 hash did not change for either ssh or sshd.

    Goal: Make sure that my system does not have any malicious software. Should I be worried that this base install of OS X (with no other software installed) has been compromised? I have also updated my system to 10.6.6 and found no change as well.

    Other information: I am not sure if this is helpful, but my laptop is an i7 15 inch MacBook Pro bought in Nov 2010, and here is some output from system_profiler:

      System Software Overview:
        System Version: Mac OS X 10.6.6 (10J567)
        Kernel Version: Darwin 10.6.0
        64-bit Kernel and Extensions: No
        Time since boot: 1:37
      Hardware Overview:
        Model Name: MacBook
        Model Identifier: MacBook6,2
        Processor Name: Intel Core i7
        Processor Speed: 2.66 GHz
        Number Of Processors: 1
        Total Number Of Cores: 2
        L2 Cache (per core): 256 KB
        L3 Cache: 4 MB
        Memory: 4 GB
        Processor Interconnect Speed: 4.8 GT/s
        Boot ROM Version: MBP61.0057.B0C
        SMC Version (system): 1.58f16
        Sudden Motion Sensor:
          State: Enabled

    On the laptop, I find:

      $ codesign -vvv /usr/bin/ssh
      /usr/bin/ssh: valid on disk
      /usr/bin/ssh: satisfies its Designated Requirement
      $ codesign -vvv /usr/sbin/sshd
      /usr/sbin/sshd: valid on disk
      /usr/sbin/sshd: satisfies its Designated Requirement
      $ ls -la /usr/bin/ssh
      -rwxr-xr-x  1 root  wheel  1001520 Feb 11  2010 /usr/bin/ssh
      $ ls -la /usr/sbin/sshd
      -rwxr-xr-x  1 root  wheel  1304800 Feb 11  2010 /usr/sbin/sshd
      $ ls -la /sbin/md5
      -r-xr-xr-x  1 root  wheel  65232 May 18  2009 /sbin/md5

    Update: So far I have not gotten an answer to this question, but if you could help by increasing the number of hashes that I can compare against, that would be great. To get hashes and version numbers, run the following on OS X:

      md5 /usr/bin/ssh
      md5 /usr/sbin/sshd
      ssh -V
      sw_vers
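
    A small wrapper around exactly those commands can make it easier for others to generate and paste back a comparable report; a sketch only, where the report file name is an arbitrary choice:

      #!/bin/sh
      # Collect the hashes and version info requested above into one report.
      # ssh -V prints to stderr, hence the 2>&1.
      {
        md5 /usr/bin/ssh
        md5 /usr/sbin/sshd
        ssh -V
        sw_vers
      } > ssh-hash-report.txt 2>&1
      cat ssh-hash-report.txt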

  • How do I limit the CPU share of TrustedInstaller.exe on a Vista system

    - by Dan Neely
    I'm trying to fix a few low-end single-core desktops running Vista. In normal use they're fast enough not to be a problem. The issue is that because these machines are only on when being used, primarily for school work, whenever Windows Update begins installing patches it launches TrustedInstaller, which in turn hogs 100% of the CPU and renders the machines all but unusable for however long it takes to patch them.

  • 50um vs. 62.5um fiber compatibility

    - by murisonc
    I've heard that there are compatibility problems when using 50um fiber with some fiber converters. After some research I'm thinking this is a legacy issue from slower devices (100BASE-FX) that used LEDs. I was told that the fiber converters are made for a certain size of fiber core and won't work with 50um fiber. Am I right in thinking this is just outdated corporate knowledge when using 1000BASE-SX converters (which should be using lasers instead of LEDs)?

  • Why does limiting my virtual memory to 512MB with ulimit -v crash the JVM?

    - by Narinder Kumar
    I am trying to enforce the maximum memory a program can consume on a Unix system. I thought ulimit -v should do the trick. Here is a sample Java program I have written for testing:

      import java.util.*;
      import java.io.*;

      public class EatMem {
        public static void main(String[] args) throws IOException, InterruptedException {
          System.out.println("Starting up...");
          System.out.println("Allocating 128 MB of Memory");
          List<byte[]> list = new LinkedList<byte[]>();
          list.add(new byte[134217728]); //128 MB
          System.out.println("Done....");
        }
      }

    By default, my ulimit settings are (output of ulimit -a):

      core file size          (blocks, -c) 0
      data seg size           (kbytes, -d) unlimited
      scheduling priority             (-e) 0
      file size               (blocks, -f) unlimited
      pending signals                 (-i) 31398
      max locked memory       (kbytes, -l) 64
      max memory size         (kbytes, -m) unlimited
      open files                      (-n) 1024
      pipe size            (512 bytes, -p) 8
      POSIX message queues     (bytes, -q) 819200
      real-time priority              (-r) 0
      stack size              (kbytes, -s) 8192
      cpu time               (seconds, -t) unlimited
      max user processes              (-u) 31398
      virtual memory          (kbytes, -v) unlimited
      file locks                      (-x) unlimited

    When I execute my Java program (java EatMem), it runs without any problems. Now I try to limit the maximum memory available to any program launched in the current shell to 512 MB by running:

      ulimit -v 524288

    ulimit -a shows the limit to be set correctly (I suppose); the output is the same as above except that the virtual memory line now reads:

      virtual memory          (kbytes, -v) 524288

    If I now try to execute my Java program, it gives me the following error:

      Error occurred during initialization of VM
      Could not reserve enough space for object heap
      Could not create the Java virtual machine.

    Ideally this should not happen, as my Java program only takes around 128 MB of memory, which is well within my specified ulimit parameters. If I change the arguments to my Java program as below:

      java -Xmx256m EatMem

    the program again works fine, while trying to allow more memory than permitted by ulimit, like:

      java -Xmx800m EatMem

    results in the expected error. Why does the program fail to execute in the first case after setting ulimit? I have tried the above test on Ubuntu 11.10 and 12.04 with Java 1.6 and Java 7.
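
    As a side check, the gap between the heap the JVM is asked for and the virtual address space it reserves can be demonstrated directly from the shell; a sketch, run in throwaway subshells so the limits don't stick to the login shell (the exact threshold varies between JVM builds):

      # Even "java -version" has to initialize a VM, so under a tight address-space
      # limit it typically fails the same way; a much larger limit usually lets it pass.
      ( ulimit -v 524288;  java -Xmx128m -version )
      ( ulimit -v 2097152; java -Xmx128m -version )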

  • VMware ESXi: looking for bottlenecks

    - by nextgenneo
    I have a VMware ESXi box: 22 GB RAM, dual quad-core Xeons, 2 SAS drives plus a write-caching RAID controller, etc. Anyway, I have about 30 small XP VMs running on it and I'm starting to see some very slow boot times and other performance issues. I THINK it's I/O, but looking at the graphs I'm not too sure what to look for. Any ideas on what to look for would be appreciated. Here is the data I've got so far (I feel like my I/O is high, but I'm not sure what to benchmark it against):
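
    When the suspicion is storage, the usual first stop on the host itself is esxtop's latency counters; a sketch of the workflow, not from the original post, run from the ESXi console or an SSH session:

      # Start esxtop on the host, then:
      #   press 'd' for the disk-adapter view (or 'u' for the device view)
      #   watch DAVG/cmd (device latency) and KAVG/cmd (kernel/queueing latency)
      # Sustained DAVG in the tens of milliseconds, or a KAVG that isn't near zero,
      # usually points at a storage bottleneck rather than CPU or RAM.
      esxtop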

  • A few questions about imaging the OS

    - by user23950
    How much time will it take to back up 49 GB? Here are the details: OS: Windows 7; processor: dual-core 2.50 GHz; RAM: 2 GB. I'll use the free version of Macrium Reflect and back it up onto a portable hard drive from Seagate. I have installed MS Visual Studio 2008, NetBeans, and some applications from Master Collection CS4. I will only back up one partition.
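
    For a rough estimate, the transfer rate of the external drive dominates; a back-of-the-envelope sketch, assuming roughly 40 MB/s sustained over USB 2.0 (an assumption, not a measured figure) and ignoring the image compression Macrium applies:

      # 49 GB at an assumed 40 MB/s, in minutes:
      echo $(( 49 * 1024 / 40 / 60 ))   # prints 20, so on the order of 20-30 minutes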

  • Banking applications

    - by Rohit
    Is there still scope left for new banking software? Almost all banks now run a core-banking solution, yet I still see new companies coming out with their own banking solutions. Is there still scope left for newcomers in this segment?

  • RAID options for a LAMP web server

    - by jetboy
    I'm due to set up a LAMP web server with four drives and a RAID controller. The drives are 146 GB SAS, and the machine has two quad-core processors and 16 GB RAM. There will be very few write operations to the MySQL database, and I'll be using as much caching as possible to reduce disk I/O. The question is: would I be better off splitting the drives into two RAID 1 arrays, separating sequential and random disk I/O, or would I get better overall performance putting them all in a single RAID 1+0 array?
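
    One way to settle it is to benchmark both layouts with fio before the server goes live; a sketch only, assuming fio is installed and the array under test is mounted at /srv/test (a hypothetical path):

      # Random 4k reads approximate the MySQL side; large sequential reads approximate
      # static file serving. Compare results for the two-array and RAID 1+0 layouts.
      fio --name=randread --directory=/srv/test --rw=randread --bs=4k \
          --size=1G --numjobs=4 --direct=1 --group_reporting
      fio --name=seqread --directory=/srv/test --rw=read --bs=1M \
          --size=1G --numjobs=1 --direct=1 --group_reporting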

  • Do I need to connect the graphics card to the CPU fan?

    - by Mirage
    Initially I had an Inno3D 256 MB Nvidia GTS graphics card. I also had another big fan above the processor (the vendor put in a quad core). On that card there was one cable that was connected to that fan. Now I have changed the card to a 1 GB Nvidia GT9600, but there are no pins to connect the fan to the new card. Is that OK? I don't know why the old card was connected to the fan.

  • Unreal Development Kit Hardware requirements?

    - by gojira666
    I am very interested in trying out the Unreal Development Kit for my own small to medium-sized hobby projects, and I am wondering about the minimum hardware requirements. I have a Vaio Z laptop with a dual-core 2.4 GHz CPU and 2 GB RAM, and the graphics chip is a GeForce 9300M GS. Is it even practical to run UDK on this hardware, or do I need a "real" desktop PC?

  • EC2 Filesystem / Files stored on the wrong partition after launching new instance from AMI

    - by Philip Isaacs
    Today I set up a new EC2 instance from an AMI I created from an older EC2 instance. When I launched the new instance I took the AMI that was on a small instance and launched it on a medium instance. From what I can tell this is pretty standard stuff. But here's the strange part. According to AWS these are the differences:

      Small Instance (Default): 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of local instance storage, 32-bit or 64-bit platform
      Medium Instance: 3.75 GB of memory, 2 EC2 Compute Units (1 virtual core with 2 EC2 Compute Units each), 410 GB of local instance storage, 32-bit or 64-bit platform

    Okay, now here's where I'm having an issue. When I log into the new, bigger instance it still reports having only 1.7 GB of RAM. The other strange part is that all my old partitions are still there in the same configuration. I see a new larger partition /mnt which is essentially empty:

      Filesystem            Size  Used Avail Use% Mounted on
      /dev/sda1             7.9G  5.9G  1.6G  79% /
      none                  846M  120K  846M   1% /dev
      none                  879M     0  879M   0% /dev/shm
      none                  879M   76K  878M   1% /var/run
      none                  879M     0  879M   0% /var/lock
      none                  879M     0  879M   0% /lib/init/rw
      /dev/sda2             335G  195M  318G   1% /mnt
      /dev/sdf               16G  9.9G  5.1G  67% /var2

    This EC2 instance is a web server and I was serving files off the /var2 directory, but for some reason the instance is storing everything on /. Okay, here's what I'd like to do: move all my website files to /mnt and have the web server point to that. Any suggestions? If it helps, here is what my mounts look like as well:

      root@myserver:/var# mount -l
      /dev/sda1 on / type ext3 (rw) [cloudimg-rootfs]
      proc on /proc type proc (rw,noexec,nosuid,nodev)
      none on /sys type sysfs (rw,noexec,nosuid,nodev)
      none on /sys/kernel/debug type debugfs (rw)
      none on /sys/kernel/security type securityfs (rw)
      none on /dev type devtmpfs (rw,mode=0755)
      none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
      none on /dev/shm type tmpfs (rw,nosuid,nodev)
      none on /var/run type tmpfs (rw,nosuid,mode=0755)
      none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
      none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
      /dev/sda2 on /mnt type ext3 (rw)
      /dev/sdf on /var2 type ext4 (rw,noatime)

    I hope this question makes sense. Basically I want my old files on this new partition. Thanks in advance.
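
    A sketch of one way to carry out the move, assuming the site data really lives under /var2 as described (the /mnt/var2 target path is just an illustrative choice); worth noting that /mnt on EC2 is ephemeral instance storage, so anything kept only there is lost when the instance is stopped or fails:

      # Copy the data onto the large partition, then bind-mount the new location
      # over the old path so the web server's existing DocumentRoot keeps working.
      sudo rsync -a /var2/ /mnt/var2/
      sudo mount --bind /mnt/var2 /var2
      # To make the bind mount survive a reboot, add a line like this to /etc/fstab:
      #   /mnt/var2  /var2  none  bind  0  0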

  • The laptop overheats when using Linux (Ubuntu)

    - by Rienna
    I use two operating systems on my laptop: Windows 7 and Ubuntu 12.04. When I use Ubuntu, my laptop often overheats and sometimes turns off suddenly. Why does this happen? Will it cause damage to my hardware, and is it because I am using two OSes? My laptop's specification: Processor: Intel(R) Core(TM)2 Duo CPU T6600 @ 2.20 GHz; RAM: 2 GB; System type: 64-bit operating system.
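
    The usual first diagnostic on Ubuntu is to read the temperatures and see what is actually running hot; a sketch, not from the original post, assuming packages can be installed:

      # Install lm-sensors, detect the sensors, then watch temperatures while working.
      sudo apt-get install lm-sensors
      sudo sensors-detect        # answer the prompts, then load the suggested modules
      watch -n 2 sensors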

  • radvd is not assigning prefix

    - by Samik
    I'm currently trying to set up IPv6 address autoconfiguration with the router advertisement daemon (radvd) on a virtual machine running CentOS 6.5, but the eth0 interface is not obtaining the prefix. I've obtained the ULA prefix from here.

    Contents of /etc/sysctl.conf:

      # Kernel sysctl configuration file for Red Hat Linux
      #
      # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
      # sysctl.conf(5) for more details.

      # Controls IP packet forwarding
      net.ipv4.ip_forward = 0
      net.ipv6.conf.all.forwarding = 1

      # Controls source route verification
      net.ipv4.conf.default.rp_filter = 1

      # Do not accept source routing
      net.ipv4.conf.default.accept_source_route = 0

      # Controls the System Request debugging functionality of the kernel
      kernel.sysrq = 0

      # Controls whether core dumps will append the PID to the core filename.
      # Useful for debugging multi-threaded applications.
      kernel.core_uses_pid = 1

      # Controls the use of TCP syncookies
      net.ipv4.tcp_syncookies = 1

      # Disable netfilter on bridges.
      net.bridge.bridge-nf-call-ip6tables = 0
      net.bridge.bridge-nf-call-iptables = 0
      net.bridge.bridge-nf-call-arptables = 0

      # Controls the default maxmimum size of a mesage queue
      kernel.msgmnb = 65536

      # Controls the maximum size of a message, in bytes
      kernel.msgmax = 65536

      # Controls the maximum shared segment size, in bytes
      kernel.shmmax = 68719476736

      # Controls the maximum number of shared memory segments, in pages
      kernel.shmall = 4294967296

    Contents of /etc/radvd.conf:

      # NOTE: there is no such thing as a working "by-default" configuration file.
      # At least the prefix needs to be specified. Please consult the radvd.conf(5)
      # man page and/or /usr/share/doc/radvd-*/radvd.conf.example for help.
      #
      interface eth0
      {
          AdvSendAdvert on;
          MinRtrAdvInterval 3;
          MaxRtrAdvInterval 10;
          AdvDefaultPreference low;
          AdvHomeAgentFlag off;
          prefix fd8a:8d9d:808f:1::/64
          {
              AdvOnLink on;
              AdvAutonomous on;
              AdvRouterAddr on;
          };
      };

    Contents of /etc/sysconfig/network-scripts/ifcfg-eth0:

      DEVICE=eth0
      HWADDR=52:54:00:74:d7:46
      TYPE=Ethernet
      UUID=af5db1cb-e809-4098-be1a-5a74dbb767b1
      ONBOOT=yes
      NM_CONTROLLED=no
      BOOTPROTO=dhcp
      IPV6INIT=yes
      IPV6_AUTOCONF=yes

    I've also enabled radvd at startup through chkconfig, though I noticed that radvd starts after the interfaces are brought up. I've tried restarting the network service afterwards, but I still get only the following link-local address:

      # ip -6 addr show
      1: lo: mtu 16436
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      2: eth0: mtu 1500 qlen 1000
          inet6 fe80::5054:ff:fe74:d746/64 scope link
             valid_lft forever preferred_lft forever

    Edit: Based on the answer given by Sander Steffann I still need clarification on some points, but I'm posting here what worked.

    Contents of /etc/sysconfig/network:

      NETWORKING=yes
      HOSTNAME=syslog-ng-server
      NETWORKING_IPV6=yes
      IPV6FORWARDING=yes

    Contents of /etc/sysconfig/network-scripts/ifcfg-eth0:

      DEVICE=eth0
      HWADDR=52:54:00:74:d7:46
      TYPE=Ethernet
      UUID=af5db1cb-e809-4098-be1a-5a74dbb767b1
      ONBOOT=yes
      NM_CONTROLLED=no
      BOOTPROTO=dhcp
      IPV6INIT=yes
      IPV6_AUTOCONF=yes
      IPV6FORWARDING=no

    Removed the following line from /etc/sysctl.conf:

      net.ipv6.conf.all.forwarding = 1

    The contents of /etc/radvd.conf are as before.
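
    For reference, a short sketch of the checks that usually explain this symptom: a Linux interface with IPv6 forwarding enabled ignores router advertisements unless its accept_ra sysctl is set to 2, which matches the fix described in the edit above. These commands are illustrative, not from the original post:

      # Is the kernel willing to accept RAs on eth0 while forwarding is on?
      sysctl net.ipv6.conf.eth0.forwarding net.ipv6.conf.eth0.accept_ra
      # accept_ra=2 means "accept RAs even when forwarding is enabled".
      sysctl -w net.ipv6.conf.eth0.accept_ra=2
      # radvdump (shipped with radvd) prints the advertisements actually seen on the wire.
      radvdump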

  • Can I reduce the CPU speed of my MacBook when on battery?

    - by Greg Hewgill
    I've got a MacBook with a Core 2 Duo CPU. I've got CoreDuoTemp installed, which can show the current speed of the CPU. It appears to always show:

      Mini    : 1.0 GHz
      Maxi    : 2.0 GHz
      Current : 2.0 GHz

    I believe my laptop would run longer on battery if it were to run at a maximum of 1 GHz. Is there a way to configure this, or is the CPU speed adjustment completely automatic?

  • Eyefinity resolution on third monitor

    - by Sam2299
    I'm now using three monitors, each capable of 1920x1080. The one monitor connected to the active DisplayPort is (for some reason) limited to 1440x900. Is there a way to increase the resolution of the third monitor so I have all three at 1920x1080? My machine: graphics card - AMD 5770; processor - Intel Core i7-2600; RAM - 8 GB; OS - Windows 7 64-bit. The video graphics controller (VGC) driver is up to date. I got this message:

  • Is there any way to know if your supposedly fully dedicated server is really a virtually resource-sha

    - by siran
    Hi, sometimes I feel my server isn't responding as smoothly as I would expect (it has an Intel(R) Xeon(TM) 2.80 GHz quad-core CPU), even though, for example, the 'top' command reports a low load (< 0.5) and the CPUs are almost completely idle. I may also have internet connectivity issues, so I don't really know whether it's me or the server itself. Is there any kind of benchmarking script (or something analogous) I could run to see the actual performance of the server?
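
    A sketch of quick checks that usually reveal whether the box is really bare metal, plus a crude disk baseline; these commands are illustrative and assume root access and that dmidecode is installed:

      # On a VM, the DMI strings normally name the hypervisor vendor (VMware, Xen, QEMU, ...).
      sudo dmidecode -s system-manufacturer
      sudo dmidecode -s system-product-name
      # Many hypervisors also expose a 'hypervisor' flag in the CPU feature list.
      grep -m1 -o hypervisor /proc/cpuinfo
      # Crude sequential-write baseline (about 1 GB), flushed to disk before timing ends.
      dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync && rm /tmp/ddtest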

  • Clean thermal paste from inside CPU

    - by Karolinger
    I bought a used CPU (Intel Core 2 E4700) on eBay, and it arrived dirty inside with something that appears to be thermal paste. Before I send it back, I wonder whether it could still work without cleaning off this dirt; the seller supposedly "tested" it as fully working. Finally, is there a way to clean this off without damaging the CPU, or is it too risky (or too much work) to do so? This is the CPU:

  • Shouldn't WP8 Emulator Work With an Intel Q9650?

    - by Al Bundy
    My computer is as follows:

      - Windows 8 Pro
      - Visual Studio 12 Pro
      - Asus P5Q Pro Turbo
      - Intel Q9650 (Core 2 Quad, 3.0 GHz)

    As far as I can tell, this setup should support the Windows Phone 8 emulator, but when I installed the Windows Phone 8 SDK it said that my computer doesn't support hardware virtualization. It says here that it does: http://ark.intel.com/products/35428/Intel-Core2-Quad-Processor-Q9650-(12M-Cache-3_00-GHz-1333-MHz-FSB)
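
    It may be worth checking for SLAT rather than plain VT-x: the WP8 emulator runs on Hyper-V and requires Second Level Address Translation (EPT on Intel), which Core 2 Quad parts such as the Q9650 predate even though they support VT-x. Sysinternals Coreinfo can confirm this; a sketch, assuming coreinfo.exe has been downloaded and is run from an elevated prompt:

      coreinfo -v

    In the output, an asterisk on the EPT line means SLAT is available and a dash means it is not; without SLAT the emulator will not start, regardless of what the ark page says about VT-x.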

  • Price drop patterns

    - by doug
    I'm looking to buy a new laptop, and I don't need top-of-the-line hardware because I'll use it for office-type applications, where the CPU and RAM matter most. For example, the Intel i3 CPU was launched in January 2010, so prices for Core Duo technology should drop. Do you know when that happens, or what the signs are? Can we talk about such a pattern?

  • rTorrent, too low memory usage!?

    - by Claudiu
    I want to know from more experienced rTorrent users how to tweak .rtorrent.rc so that rTorrent will cache disk reads and writes (the same way uTorrent does). I have set max_memory_usage = 1GB, but this amount is not used. I run 6 rTorrent instances on a quad-core, 8 GB RAM machine, and the total used memory reported by htop is only ~500 MB. I need to use memory buffers because disk I/O activity is very high.
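
    One thing worth checking before tweaking further: rtorrent does its disk I/O through memory-mapped files, so its caching tends to show up as kernel page cache rather than in the process's own memory figure in htop. A small sketch to confirm where the memory is actually going; these are generic tools, not rtorrent settings:

      # The "cached" column is where mmap'd torrent data is held by the kernel.
      free -m
      # Watch block I/O (bi/bo) and cache growth while the torrents are busy.
      vmstat 5 5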
