Search Results

Search found 10543 results on 422 pages for 'big bang theory'.


  • How do I put back different SCSI hard drives into their original RAID arrays across different servers?

    - by Edgar
    I potentially have a big mess on my hands: today I received a box with several hard drives that used to be connected to different servers, each of them using an unknown (at least as of right now) RAID configuration. Regretfully, the drives are not marked, and I'm not sure how to go about putting them back into their original servers. I don't have much more information: I don't know what type of array was used in each instance, and I have no specifics about the RAID controller originally used in each server (the servers are at a remote location with no easy access). Is there a way to sort through this mess? What would be the consequences of trial and error? This might be a very basic question, but I don't have much experience dealing with RAID arrays.
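
    One starting point, assuming some of the arrays were Linux software RAID or a common BIOS fakeRAID (the post doesn't say): the drives themselves often carry metadata identifying the array they belonged to, which lets you group them before touching any server.

        # read any md superblock on a drive/partition (Linux software RAID);
        # the reported array UUID groups drives that belong together
        mdadm --examine /dev/sdX1      # /dev/sdX1 is a placeholder
        # scan for BIOS/fakeRAID metadata formats (Intel, Promise, NVIDIA, ...)
        dmraid -r

    Hardware-controller formats may not show up in either tool, so treat a clean result as inconclusive rather than as proof there is no metadata.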


  • yum update with shared cache

    - by Sammitch
    We've got a big batch of RHEL6 machines that are due for patching, and for some reason the process here does not involve a local repo. I'm new here; I've asked why ("it just didn't work"), and I don't have enough time to make one work before the already-scheduled window. So the usual method is to install yum-downloadonly and run

        yum update --downloadonly --downloaddir=/mnt/cifs_share

    followed by

        yum update /mnt/cifs_share/*.rpm

    which just does not look right to me, since not all of these machines have the same set of installed packages. The method I tried today was mounting the share at /var/cache/yum/x86_64/6Server/rhel-x86_64-server-6/packages/, which worked, but then yum automatically deleted everything once it finished. I've looked over the yum man page, but I don't see any flag I can feed it to stop it from deleting everything, nor anything like up2date's --tmpdir=/mnt/cifs_share. Can anyone out there help me kludge this together until I can get a local repository working?
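
    One knob worth trying (a minimal sketch, assuming stock yum on RHEL6, where keepcache is a documented yum.conf option): with keepcache enabled, yum leaves downloaded RPMs in its cache directory instead of purging them after the transaction, which would let the mounted-share trick survive the update.

        # /etc/yum.conf
        [main]
        keepcache=1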


  • How can I perform a ping every X minutes and check the response time?

    - by Profete162
    I currently work at a big company and we have serious latency issues. This is happening in a process control system, and it is unacceptable (opening a valve sometimes takes 2 minutes before the command starts). The network team seems very lazy, and I want to check whether they're right when they say "everything is alright on the network". So I want to create a loop that pings the server and writes the result to a text file. I am not a batch expert, but do you think this code is correct to use?

        @ECHO OFF
        :LOOPSTART
        time /T
        ping xxx.xx.x.x -t >> filename.txt
        sleep -m 3000
        GOTO LOOPSTART
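
    For comparison, a minimal sketch of a loop that would behave as described (a rewrite, not the poster's code): ping -t never returns, so the loop body above would never repeat, and sleep is not a stock Windows command, while timeout is on Vista and later. The address stays a placeholder; adjust the interval to taste.

        @echo off
        :loopstart
        rem timestamp each sample, then send a single echo request
        echo %date% %time% >> filename.txt
        ping -n 1 xxx.xx.x.x >> filename.txt
        rem wait 180 seconds between samples
        timeout /t 180 /nobreak > nul
        goto loopstart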


  • Why doesn't rsync use delta-transfer for local files?

    - by o_O Tync
    I have a big ISO image which is currently being downloaded by a torrent client with space-reservation turned on: that means the file size does not change, while some chunks in it (4 MiB each) are constantly changing as they download. At 90% downloaded, I do the initial rsync to save time later:

        $ rsync -Ph DVD.iso /some/target/
        sending incremental file list
        DVD.iso
            2.60G 100%   40.23MB/s    0:01:01 (xfer#1, to-check=0/1)
        sent 2.60G bytes  received 73 bytes  34.59M bytes/sec
        total size is 2.60G  speedup is 1.00

    Then, when the file is fully downloaded, I rsync again:

        total size is 2.60G  speedup is 1.00

    Speedup = 1 says delta-transfer was not used, although 90% of the file has not changed. Why?!
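
    A likely explanation worth testing (an assumption here, though it matches the rsync man page): when both source and destination are local paths, rsync defaults to --whole-file, because the delta algorithm's extra local reads usually cost more than a straight copy. Forcing the delta algorithm looks like this:

        # force delta-transfer even for a local-to-local copy
        rsync -Ph --no-whole-file DVD.iso /some/target/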


  • Deleting Time Machine in Mac OS X 10.6.4

    - by cappuccino
    Does anyone know how to delete Time Machine backups in Mac OS X 10.6.4? Before answering, note what does not work:

    - sudo rm -rf /whateverthetimemachineis does not work.
    - Disabling the ACL permissions first with sudo fsaclctl -p /whatever -d does not work: sudo: fsaclctl: command not found.
    - The "delete all backups" feature in Time Machine is slow as hell; it would take days. I need a command-line solution.
    - No, I don't want to reformat the drive; I have other content on it. And no, don't say I should have separated it onto two partitions or two drives: I did it this way since partitions cannot be dynamically resized, and two drives is annoying (what's the point of having a big drive?). Besides, that has no relation to the issue at hand.

    I have already googled for hours and read everything on Super User; nothing works, and all the suggested solutions are the four above. Any clues?


  • Emacs AUCTeX: turn off auto-fill-mode inside a particular LaTeX environment

    - by Seamus
    I like using auto-fill-mode for hard line wrapping. However, when I'm making a big tabular in a .tex file, I like using align-current to make the table look somewhat like it will when printed. The difficulty is that if a table row is longer than the line width, auto-fill-mode breaks it, and then align-current can't put things to rights and gets confused. Is there a way to tell Emacs that when I'm between the \begin and \end tags of a particular kind of environment (in this case, tabular), it should not word wrap?
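
    One approach, as a sketch (it assumes AUCTeX's LaTeX-current-environment function and ignores nested environments): replace the buffer's auto-fill function with a wrapper that checks the enclosing environment first.

        (defun my-LaTeX-maybe-auto-fill ()
          "Auto-fill unless point is inside an alignment-sensitive environment."
          (unless (member (LaTeX-current-environment)
                          '("tabular" "tabular*" "align" "align*"))
            (do-auto-fill)))

        (add-hook 'LaTeX-mode-hook
                  (lambda ()
                    (auto-fill-mode 1)
                    (setq auto-fill-function #'my-LaTeX-maybe-auto-fill)))

    The environment list is an assumption; extend it with whatever environments you align by hand.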


  • Set 802.1Q tagged port on VLAN1 on Dell PowerConnect switch

    - by Javier
    I'm having big trouble adding this Dell switch to my network. Here we use several VLANs to segment traffic. All switches (3Com and D-Link, mostly) have the same VLANs configured; most ports are 'untagged' and belong to a single VLAN, except for the ports used to join the switches together (in a star topology), which belong to all VLANs and use 802.1Q tags. So far, that works really well. But on this new switch (a Dell PowerConnect 5448) the settings are very different (and confusing). I have configured the same VLANs, and the uplink ports are set to 'general' mode (supposedly fully 802.1Q compliant). I can set the VLAN membership to 'T' (tagged) on these ports for every VLAN except VLAN 1: it always stays 'U' (untagged). Any ideas?
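
    For what it's worth, a sketch in the PowerConnect CLI style (command spellings here are assumptions from the wider PowerConnect family; verify them against the 5448 CLI guide): in general mode the port's PVID decides which VLAN stays untagged, so VLAN 1 can only go tagged once the PVID points elsewhere, e.g. at the 'discard' VLAN 4095 that Dell documents for this purpose.

        console# configure
        console(config)# interface ethernet g48
        console(config-if)# switchport mode general
        console(config-if)# switchport general pvid 4095
        console(config-if)# switchport general allowed vlan add 1 tagged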


  • Azure website restarts and picks up an old DLL version

    - by vipul dumaniya
    One of my sites is hosted on Windows Azure. When Azure restarts the site from the Manage Windows Azure panel, it picks up an old version of a DLL, and the site is down until we force a restart by redeploying global.asax or touching web.config. After that, the site restarts, works perfectly, and uses the latest DLL. If the problem were in my code, the site would not work after that kind of restart either, so I don't think the issue is on the code side. The error looks like: "Could not load type 'DSF.DATA.Repository.RecurringOrderLogResposity' from 'DSF.DATA Version 1.0.0'". I deploy a changed DLL over FTP, restart the site, and the change takes effect successfully. I have already fixed this error once and uploaded the latest DLL, but when the site is restarted from the Azure panel the error comes back, and the site stays down until I restart it by redeploying the global.asax file. Please help; I'm in big trouble, as this is a live site with a lot of traffic. Thanks, Vipul


  • How to correctly install Tomcat 6 on CentOS 5.5

    - by user582862
    Hi guys, I am a bit confused about how to install Tomcat 6 on CentOS 5.5 (final). This is what I am trying to do:

        # cd /etc/yum.repos.d/
        # wget http://jpackage.org/jpackage50.repo
        # yum install tomcat6 tomcat6-webapps tomcat6-admin-webapps

    But when I run the wget command, this is what I get:

        Resolving www.jpackage.org... failed: Temporary failure in name resolution.
        wget: unable to resolve host address `www.jpackage.org'

    Could anyone kindly show me the right way, please? I'm really in trouble at the moment with this. Thanks in advance.
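
    That error is DNS resolution failing, not a Tomcat or jpackage problem. A few quick checks to separate DNS trouble from general connectivity (a sketch; the host command lives in the bind-utils package on CentOS):

        cat /etc/resolv.conf       # is any nameserver configured?
        host jpackage.org          # does name resolution work at all?
        ping -c 1 8.8.8.8          # is there raw IP connectivity?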


  • High Lock Wait ratio in MySQL

    - by FunkyChicken
    On my site I log every pageview (date, IP, referrer, page, etc.) in a simple MySQL table. This table gets very few selects (3 per minute) but a lot of inserts (about 100 per second). Today I changed this table from an InnoDB table to a MEMORY table; this made sense to me to prevent unnecessary hard disk IO. I also prune this table once per minute to make sure it never gets too big. Performance-wise, things are running fine. But while running tuning-primer, I noticed that my current lock wait ratio is quite high: Current Lock Wait ratio = 1 : 561. My question: should I worry about this lock wait ratio? And is there something I can change in my my.cnf to improve things, so that the lock wait ratio isn't so high?
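
    Worth knowing when reading that number (a note, not from the post): MEMORY tables use table-level locking, so every insert briefly blocks everything else on the table, which is usually what drives this ratio up. The raw counters behind tuning-primer's figure are visible directly:

        -- the ratio is derived from Table_locks_waited vs. Table_locks_immediate
        SHOW GLOBAL STATUS LIKE 'Table_locks_%';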


  • Spam mail through SMTP and user spoofing

    - by Josten Moore
    I have noticed that it's possible to telnet into a mail server that I own and send spoofed messages to other users. This only works for the domain that the mail server handles; I cannot do it for other domains. For example, let's say that I own example.com. If I telnet example.com 25, I can successfully send a message to another user without authentication:

        HELO local
        MAIL FROM: [email protected]
        RCPT TO: [email protected]
        DATA
        SUBJECT: Whatever this is spam
        Spam spam spam
        .

    I consider this a big problem; how do I secure this?
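
    Part of the usual mitigation, as a sketch (the post doesn't name the MTA, so specifics are assumptions): accepting mail for local users on port 25 without authentication is how SMTP has to work, but publishing an SPF record lets receivers, including your own server if it checks SPF, reject mail that claims an example.com sender yet arrives from an unauthorized host. Requiring SMTP AUTH on the submission port handles your own roaming users.

        ; DNS zone sketch for example.com: only the domain's MX hosts may
        ; send mail with an example.com sender; everything else hard-fails
        example.com.  IN  TXT  "v=spf1 mx -all"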


  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB disk:

        kaefert@blechmobil:~$ lsusb -s 2:3
        Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC

    As can be seen in this dmesg output, there is a problem that prevents the disk from being mounted:

        kaefert@blechmobil:~$ dmesg | grep sdb
        [  114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [  114.475089] sd 5:0:0:0: [sdb] Write Protect is off
        [  114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00
        [  114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        [  114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [  114.501649] sdb: sdb1
        [  114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [  114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk
        [  116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519)
        [  116.804413] EXT4-fs (sdb1): group descriptors corrupted!

    So I went and fired up my favorite partition manager, gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4, 12-Jun-2012):

        e2fsck -f -y -v /dev/sdb1

    Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bug report: https://bugzilla.gnome.org/show_bug.cgi?id=467925).

    I started this whole thing on Sunday evening (2012-11-04 22:00), so about 48 hours ago. This is what htop says about it now (2012-11-06 19:00):

        PID  USER PRI NI VIRT  RES   SHR S CPU% MEM% TIME+    Command
        3704 root  39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1

    Now I found a few posts on the internet that discuss e2fsck running slowly, for example http://gparted-forum.surf4.info/viewtopic.php?id=13613, where they write that it's a good idea to check whether the disk is just that slow because it may be damaged. I think these outputs tell me that this is not the case here:

        kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb
        /dev/sdb:
         Timing cached reads:   3562 MB in  2.00 seconds = 1783.29 MB/sec
         Timing buffered disk reads:  82 MB in  3.01 seconds = 27.26 MB/sec

        kaefert@blechmobil:~$ sudo hdparm /dev/sdb
        /dev/sdb:
         multcount = 0 (off)
         readonly  = 0 (off)
         readahead = 256 (on)
         geometry  = 364801/255/63, sectors = 5860533160, start = 0

    However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, judging by tools like gkrellm and iotop, or by this:

        kaefert@blechmobil:~$ iostat -x
        Linux 3.2.0-2-amd64 (blechmobil)  2012-11-06  _x86_64_  (2 CPU)
        avg-cpu:  %user %nice %system %iowait %steal %idle
                  14,24 47,81   14,63    0,95   0,00 22,37
        Device: rrqm/s wrqm/s  r/s  w/s  rkB/s  wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
        sda       0,59   8,29 2,42 5,14  43,17 160,17    53,75     0,30 39,80    8,72   54,42  3,95  2,99
        sdb     137,54   5,48 9,23 0,20 587,07  22,73   129,35     0,07  7,70    7,51   16,18  2,17  2,04

    Then I researched a little on how to find out what e2fsck is doing with all that processor time, and found the tool strace, which gives me this:

        kaefert@blechmobil:~$ sudo strace -p3704
        lseek(4, 41026998272, SEEK_SET) = 41026998272
        write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096
        lseek(4, 48404766720, SEEK_SET) = 48404766720
        read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096
        lseek(4, 41027002368, SEEK_SET) = 41027002368
        write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096
        lseek(4, 48404770816, SEEK_SET) = 48404770816
        read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096
        lseek(4, 41027006464, SEEK_SET) = 41027006464
        write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096
        lseek(4, 48404774912, SEEK_SET) = 48404774912
        read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096
        ^CProcess 3704 detached

    I see around 16 of these lines every second, so 4 read and 4 write operations per second, which I don't consider to be a lot.

    And finally, my question: will this process ever finish? If those numbers from lseek (48404774912) represent bytes, that would be something like 45 gigabytes; with this being a 3 terabyte disk, that would give me 134 days to go, if the speed stays constant and e2fsck scans the disk like this completely and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it onto this disk, so I would prefer to get this disk up and running again without formatting it anew. I don't think that the hardware is damaged, since the disk is only a few months old and I can't see any I/O errors in the dmesg output.

    UPDATE: I just looked at the strace output again (2012-11-06 23:00), and now it looks like this:

        lseek(4, 1419860611072, SEEK_SET) = 1419860611072
        read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096
        lseek(4, 43018145792, SEEK_SET) = 43018145792
        write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096
        lseek(4, 1419860615168, SEEK_SET) = 1419860615168
        read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096
        lseek(4, 43018149888, SEEK_SET) = 43018149888
        write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096
        lseek(4, 1419860619264, SEEK_SET) = 1419860619264
        read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096
        lseek(4, 43018153984, SEEK_SET) = 43018153984
        write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096

    The lseek offsets before the reads, like 1419860611072, are already a lot bigger (about 1.29 terabytes, if the numbers are bytes), so the progress doesn't seem to be linear on a big scale; maybe only some areas need work, with big gaps in between them. (Times are in CET.)
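
    A side note (an addition, not from the post): e2fsck can report progress when asked, which would at least answer the "will it ever finish" part on a future run; per the e2fsck man page, -C 0 prints a completion bar to stdout.

        # restart the check with a progress indicator (only useful on a fresh run)
        sudo e2fsck -f -y -v -C 0 /dev/sdb1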


  • How to make a local Apache server public/visible?

    - by George
    Hello. I am running an Apache2 server on Fedora 13. I'd like to make it publicly accessible (visible). For example, I'd like it so that when somebody types http://my.ip.numbers/, they see what I have in my document root folder; this is just for a presentation of coursework at university. Permissions are set to 755. The user owning the document root is apache. SELinux is temporarily disabled. But port 80 is closed. I tried to open it by adding an entry to iptables and restarting it, with no change. I guess I am missing something big here. Help would be greatly appreciated. Note: I have a static (public, real) IP address.
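
    A sketch of the usual Fedora dance: the classic gotcha is appending the ACCEPT rule after the chain's final REJECT rule, where it can never match; -I inserts it at the top instead.

        iptables -I INPUT -p tcp --dport 80 -j ACCEPT   # insert, don't append
        service iptables save        # persists to /etc/sysconfig/iptables
        service iptables restart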


  • Web log files analyzer

    - by Peter Štibraný
    I already use Google Analytics on my page, but I'd like to get additional info from the log files. I've looked at various packages over the last few days, but nothing has impressed me so far. Some requirements:

    - must work at the log-file level (I use Apache combined logs, but can configure Apache to produce other types of logs)
    - can generate static reports (Windows/Linux) or use a GUI (Windows only)
    - should make it easy to add custom user agents and rerun the analysis
    - if it can recognize installations of Eclipse plugins from the log, that would be a big plus
    - understands Google SERP position from the referrer
    - should not require two days to set up (awstats, I am looking at you)
    - should still be under active development (i.e. analog isn't a good answer)
    - preferably free, or at least not very expensive :-)

    Any good analyzer programs out there?


  • Upload large database SQL file

    - by Devy
    I have a database of more than 20 GB in size on my hard disk. What is the best way to upload it with the least possible load (and money) on the server? I'm on Windows 7, and I have FTP and SSH access to the server. I avoid using FTP because my connection cuts off a lot; I can't imagine re-uploading the whole file again after failing at 99%. I found some tools that split the large .sql file into small .sql files, but they didn't mention how to gather these files back into one. Another way would be to archive the big .sql file to .rar with the -v option, upload the parts through FTP, then unpack them. But unpacking will also cost, right? I know it will cost something in any case, but any best practice will be strongly appreciated.
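
    Since SSH access is available, a sketch of one resumable route (names and paths are placeholders): compress first, let rsync resume interrupted transfers, and rejoin split parts with cat if you go the splitting route.

        # compress, then upload with resume support over SSH
        gzip -c dump.sql > dump.sql.gz
        rsync --partial --progress -e ssh dump.sql.gz user@server:/tmp/

        # if splitting instead: split locally, rejoin on the server
        split -b 100m dump.sql.gz part_
        cat part_* > dump.sql.gz     # run this on the server after uploading the parts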


  • Standalone server setup for compute capacity

    - by mikera
    I'm developing an application for my company that will require a lot of compute capacity (running some very big mathematical calculations), and I'm looking for some form of server setup to do this. For various reasons, we want to run this on-site in our office rather than hosting it externally. It's been a while since I last had to set up my own servers, so I thought I would tap into the collective wisdom of Server Fault! My broad requirements are:

    - budget of $30-50k, with the aim of getting as much compute capacity as possible for that money
    - 64-bit servers suitable for running Ubuntu Linux + Java
    - a relatively standalone rack that can be installed in secure office space
    - fast/low-latency network connections between the servers, but no real need for connectivity to the outside world
    - storage capacity shared between the servers: they don't necessarily need their own storage, provided they can boot from a common image
    - downtime can be tolerated (the calculations run in batch mode)
    - the software itself is fault-tolerant, so no extra resiliency is needed in the server setup (cheap, replaceable commodity parts are fine in general)

    Given these requirements, what kind of setup would you recommend, and why?


  • monitoring TCP/IP performance on Solaris

    - by Andy Faibishenko
    I am trying to tune a high-message-traffic system running on Solaris. The architecture is a large number (600) of clients which connect via TCP to a big Solaris server and then send/receive relatively small messages (0.5 to 1 KB payload) at high rates. The goal is to minimize the latency of each message processed. I suspect that the TCP stack of the server is getting overwhelmed by all the traffic. What are some commands/metrics that I can use to confirm this, and if it is true, what is the best way to alleviate this bottleneck?
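
    A first pass with stock Solaris tools, as a sketch (counter names vary a bit across Solaris releases): tcpListenDrop counts connections dropped because the listen backlog overflowed, a classic sign of an overwhelmed listener.

        netstat -s -P tcp     # per-protocol counters: retransmits, drops, tcpListenDrop
        kstat -m tcp          # the same counters through the kstat interface
        mpstat 5              # or is the bottleneck CPU/interrupt handling?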


  • Installing VMware Tools in Windows Server 2008 breaks system startup

    - by Hoghweed
    I recently created a VMware virtual machine with Windows Server 2008 Enterprise as the guest. My host is Ubuntu 10.04 on my Lenovo laptop. I ran into big trouble that makes the VM unusable after installing VMware Tools: since installing the Tools, I'm able to boot the system only in safe mode. After some Event Viewer analysis, I found the issue is with drivers installed by VMware Tools. Has anyone had the same issue? Is there a good practice for doing this? The configuration of the VM is the following:

    - CPU: 1
    - RAM: 1020 MB
    - HD: 40 GB, split files, SCSI
    - CD: IDE

    Thanks in advance.


  • What are the essential considerations for setting up systems in a location with unreliable power?

    - by dunxd
    I deal with a lot of remote offices located in parts of the world where the local grid power supply is unreliable:

    - power can go off at any time with no warning, with outages ranging from minutes to days
    - power fluctuation is wild, with spikes and brownouts

    Currently the offices have some or all of the following:

    - a generator with an inverter, or some sort of manual switch
    - a big UPS or battery array connecting a number of devices
    - several smaller APC UPSes with computers attached
    - low-cost voltage regulators, sometimes connected between the mains and a UPS or device

    I know that each of these needs to be appropriately rated for the equipment to which it is connected (although I am not sure how to calculate the correct rating; see the rough sketch below). The offices generally have the following equipment, in varying quantities:

    - some sort of internet connection device (VSAT router, ADSL modem, WiMAX router)
    - a Cisco ASA 5505 firewall
    - a bunch of PCs
    - printers
    - one server

    I don't seek to replace the advice of an electrician, but in some of these locations they only answer the questions you ask them, so I need to make sure I have enough understanding of the essentials to protect equipment from damage, and possibly get through some power cuts.
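
    On the rating question, a rule-of-thumb sketch (the wattages below are invented for illustration, and an electrician should check the real nameplate figures): sum the connected watts, convert to VA with a conservative power factor, then leave headroom.

        load ~ 5 PCs x 300 W + server 400 W + network gear 100 W = 2000 W
        VA   ~ 2000 W / 0.7 (conservative power factor)          ~ 2860 VA
        + 25% headroom -> ~3600 VA, i.e. the next standard UPS size up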


  • MariaDB, Galera, xtrabackup - do I need the binary log?

    - by bernhardrusch
    We are using a MariaDB Galera cluster with 3 nodes. For state transfer we are using xtrabackup. We have had problems with the binary logs: they got too big and crashed the server. We can remove them manually with the PURGE BINARY LOGS command; another way would be to set expire_logs_days so they expire on their own. I know that we could use xtrabackup to back up the DB and use the binlog to restore to some point in time. But do we really need the binlog for Galera to work?
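
    For reference, a my.cnf sketch (verify against the Galera documentation for your version): Galera replicates through its own write-sets rather than the binary log, so log-bin can normally stay off unless you need point-in-time recovery or an asynchronous slave. If you do keep it, these caps stop unbounded growth.

        [mysqld]
        expire_logs_days = 7      # auto-purge binlogs older than a week
        max_binlog_size  = 100M   # rotate before individual files get huge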


  • After making a boot disk using Rufus in a USB 3.0 port, it doesn't work in other ports

    - by sin
    I have a big problem! After making a boot disk using Rufus in a USB 3.0 port, the disk doesn't work in other ports. I have to install Windows on another PC which has USB 2.0 ports only, so I made the USB boot disk with Rufus while the stick was in a 3.0 port. But after that, it never worked anywhere except in the 3.0 port, and I couldn't restore my USB stick. I have already formatted it in cmd and with other tools in the USB 3.0 port, but there was no change, and the stick isn't detected at all in a USB 2.0 port. Please help me.
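
    When plain formatting doesn't help, the usual next step is wiping the stick's partition table entirely with diskpart (a sketch: "disk 1" below is a placeholder, so triple-check the number from list disk, because clean erases whichever disk is selected).

        diskpart
        list disk
        select disk 1
        clean
        create partition primary
        active
        format fs=fat32 quick
        assign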


  • New Windows 7 Install Crashing

    - by bobber205
    One big reboot crash and one smaller crash already, 15 minutes in. I did a basic install of Windows 7, then installed Chrome and Firefox. I had just finished loading up my Gmail account in Chrome and Firefox to show the speed difference, and we thought it would be hilarious to see how slow IE8 was. :P Just as IE8 finished opening, the computer's screen went black. After a restart and a couple of minutes, Explorer crashed as well. What is going on? This install is only 15-20 minutes old. :P


  • Can irssi ignore the 24h DSL reconnect?

    - by mcnesium
    A couple of weeks ago I had to switch my ISP from cable to DSL. Now I have this ridiculous disconnect and reconnect every 24 hours. It's no big deal insofar as having a new IP address every day goes, with one exception: since I host my irssi screen on a machine inside the LAN, my history gets cluttered by the reconnect, in the form of a topic announcement, the list of users in each channel, the creation date, and so on. It's about 10 lines of redundant content every day. This is annoying especially in channels with very little traffic, because you hardly see the actual content between the everyday junk. So I was wondering if I can tell irssi to silently ignore the reconnection details, so that my only meta-content in each channel goes back to "Day changed to ...", like back in the days of cable internet.
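
    irssi's ignore system can silence whole message classes, which covers most of the rejoin noise (a sketch; the trade-off, worth weighing, is that you also lose these events when they happen for real):

        /ignore * JOINS PARTS QUITS TOPICS
        /save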


  • In Outlook 2010, can you add "Categories" to the "New Email" Ribbon?

    - by Jeff
    I couldn't figure out how to do this in Outlook 2007, and I was hoping I could do it in Outlook 2010. I want the ability to quickly apply a category when composing a new email (typically a "Waiting For..." category) for things that need a response. It is possible to apply a category by clicking the "Options" ribbon and then the little arrow under the More Options section, but why can't I get the nice big "Categories" drop-down that's available in the "Tags" section of the main Outlook window? There are about a kabillion commands in the "Customize Ribbon" dialog box for the New Mail window, but I couldn't find anything about categories. Should I just give up?
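
    If the gallery really isn't exposed there, a workaround sketch using the standard Outlook object model (the category name is an assumption; substitute your own): a tiny VBA macro that can be pinned to the Quick Access Toolbar of the compose window.

        ' Tags the message currently being composed with a fixed category
        Sub TagWaitingFor()
            Dim itm As Outlook.MailItem
            Set itm = Application.ActiveInspector.CurrentItem
            itm.Categories = "Waiting For..."   ' assumed category name
            itm.Save
        End Sub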


  • Convert old videos to have smaller sizes

    - by Tim
    I have some videos from a few years ago in various formats, such as avi, mpg, wmv, rm, rmvb, and so on. Their sizes are huge (more than 500 MB, sometimes 1 GB). Given the likely advances in video compression since then, I would like to know which file formats and compression methods are recommended these days for achieving a big size reduction without obvious quality loss. How can I perform the format conversion and compression on Ubuntu 12.04? Command-line and batch ways would be the most convenient, although GUI ways are also appreciated. Thanks and regards!
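
    A batch sketch (file names are placeholders, and flag spellings shifted across the ffmpeg/avconv split that landed around Ubuntu 12.04, so verify with ffmpeg -h or substitute avconv): re-encoding to H.264 typically shrinks old MPEG/WMV/AVI files substantially at comparable quality, and crf 23 is a sane default to start from.

        shopt -s nullglob
        for f in *.avi *.mpg *.wmv; do
            # H.264 video + MP3 audio in a Matroska container
            ffmpeg -i "$f" -c:v libx264 -crf 23 -preset slow \
                   -c:a libmp3lame -b:a 128k "${f%.*}.mkv"
        done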

