Search Results

Search found 11025 results on 441 pages for 'f4r 20'.

Page 204/441 | < Previous Page | 200 201 202 203 204 205 206 207 208 209 210 211  | Next Page >

  • How do I get around restrictive email policies by ISP?

    - by Peter Turner
    Apparently we've been restricted (through packet filtering) to some arbitrarily small and untenable number of emails a day by some bankrupt ISP (and they say that's how it's always been, chortle). We've been using our own mail server for the last 15 years, and only recently have they been giving us guff. Is there a way for a legitimate business to email their clients, who really want to receive these emails, while bypassing the ISP? The way we've been doing it is by breaking the send up into batches of 20 or 30 emails, but that gets complicated and requires a lot of manual labor by the receptionist, and unless she's really careful we wind up emailing lots of people twice. So what are my options (Hosted Email, Lithuanian Proxy Server, Different ISP, not writing awful PHP that sends out zillions of emails and gets us blacklisted)?
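    One common route is to keep the existing mail server but hand outbound mail to an authenticated smarthost on port 587 (a hosted relay service), which sidesteps the ISP's port-25 filtering. If the server happens to run Postfix, a minimal sketch might look like this; the relay hostname and credentials are placeholders, not a recommendation of any particular provider:

      # /etc/postfix/main.cf  (sketch; relay host and port are examples)
      relayhost = [smtp.relay-provider.example]:587
      smtp_sasl_auth_enable = yes
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
      smtp_sasl_security_options = noanonymous
      smtp_tls_security_level = encrypt

      # /etc/postfix/sasl_passwd  (then run: postmap /etc/postfix/sasl_passwd)
      [smtp.relay-provider.example]:587 username:password

    Exchange and most other MTAs have an equivalent smarthost/relay setting.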

    Read the article

  • rsyslog server - Can you split up and organize logs?

    - by Jakobud
    I recently set up one of our servers as an rsyslog server. I now have our firewall set up to log everything to that rsyslog server, but there doesn't seem to be any organization to the logs: all the firewall logs are just being dumped into /var/log/messages on the rsyslog server. I guess I was expecting them to at least be in a machine-specific log file or directory. How can I organize the incoming logging? If I set up 20 servers to all log everything to a central rsyslog server, I really don't want everything being dumped into one big file or a few files. How can I tell rsyslog where to log what? Like if all the logs for a specific server were in its own directory/file, etc. Is this possible?
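    Dynamic file-name templates are the usual rsyslog answer to this. A minimal sketch for the central server, in legacy rsyslog.conf syntax (the directory layout and the "everything not from localhost" filter are examples to adapt):

      # /etc/rsyslog.conf on the central log server
      $ModLoad imudp
      $UDPServerRun 514

      # one file per sending host and program
      $template PerHostLog,"/var/log/remote/%HOSTNAME%/%PROGRAMNAME%.log"

      # anything that did not originate locally goes to the per-host file,
      # then gets discarded so it never reaches /var/log/messages
      :fromhost-ip, !isequal, "127.0.0.1" -?PerHostLog
      & ~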

    Read the article

  • Optimal Instance Size for EC2 Sharepoint Server

    - by Rob Wilkerson
    I'm surprised that I can't find any info about this, but I'm not a Windows admin and just a novice EC2 user. I have a client who wants to stand up a SharePoint server on EC2 for internal use. The team is small (10-20 folks) and traffic will be light. Mostly, the client is looking for one place to store documents (and revisions of documents) while making access easy for authenticated users anywhere in the world. They've settled on SharePoint and have other EC2 instances, so that seems like the natural fit, but I'm trying to figure out what to recommend for them. I'm currently thinking about a Medium instance. I'm afraid to go smaller because I think Windows would need a fair amount of memory just to run, but I'm very open to suggestions. Any advice would be much appreciated. I expect that the storage itself would happen on an EBS volume, but again, suggestions welcome. Thanks for your input.

    Read the article

  • Cross-update Word Fields

    - by Brent Arias
    I want to change a date in a field within my word document, and have it update a couple other fields automatically within the same document. The behavior I'm seeking is basically the same as what a spreadsheet can do. Is this possible? More specifically, if the first page of the document has the date Jan 20 2012, I want to be able to change it, and then watch a couple other dates elsewhere automatically change to either the same date or the same date plus six days. I would also "settle" for having all three fields updated from a central document property (though I don't know how to create one of those properties). Regardless of which approach is used, I want one of the dates to be <value> plus six days such as Jan 26 2012 based on the earlier example I gave.

    Read the article

  • UTF-8 bit representation

    - by Yanick Rochon
    I'm learning about the UTF-8 standard and this is what I'm learning (definition and bytes used):

      UTF-8 binary representation          Meaning
      0xxxxxxx                             1 byte, for 1 to 7 bit chars
      110xxxxx 10xxxxxx                    2 bytes, for 8 to 11 bit chars
      1110xxxx 10xxxxxx 10xxxxxx           3 bytes, for 12 to 16 bit chars
      11110xxx 10xxxxxx 10xxxxxx 10xxxxxx  4 bytes, for 17 to 21 bit chars

    And I'm wondering: why isn't the 2-byte UTF-8 code 10xxxxxx instead, thus gaining 1 bit all the way up to 22 bits with a 4-byte UTF-8 code? The way it is right now, 64 possible values are lost (from 10000000 to 10111111). I'm not trying to argue with the standard, but I'm wondering why this is so?

    EDIT: Even, why isn't it

      UTF-8 binary representation          Meaning
      0xxxxxxx                             1 byte, for 1 to 7 bit chars
      110xxxxx xxxxxxxx                    2 bytes, for 8 to 13 bit chars
      1110xxxx xxxxxxxx xxxxxxxx           3 bytes, for 14 to 20 bit chars
      11110xxx xxxxxxxx xxxxxxxx xxxxxxxx  4 bytes, for 21 to 27 bit chars

    ...? Thanks!
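    For a concrete view of the layout described above, you can dump a multi-byte character in binary from a shell (a quick sketch; xxd's exact column layout may vary slightly between versions, and the terminal must be in a UTF-8 locale). 'é' is U+00E9, an 8-bit code point, so it takes the 2-byte form; the 10 prefix on the continuation byte is what lets a decoder tell lead bytes from continuation bytes:

      $ printf 'é' | xxd -b
      00000000: 11000011 10101001   ..
      #         110xxxxx 10xxxxxx   -> payload bits 00011 101001 = 0xE9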

    Read the article

  • pdflush hanging on Amazon EBS drives when using multi-GB files - any workaround?

    - by rhh
    Hello. When I run gunzip on a 1.7GB file (which generates an 8GB file) on an EBS volume, pdflush freezes after gunzip runs and the CPU hangs indefinitely at 100% I/O wait. Here's the output from 'ps aux | grep pdflush' (note the D status):

      root  87  0.0  0.0  0  0  ?  D  06:18  0:00  pdflush
      root  88  0.0  0.0  0  0  ?  D  06:18  0:00  pdflush

    The only solution is to kill the pdflush processes, and the processes don't die immediately either. This problem is repeatable and happens with new instances. I'm running 2xlarge instances and I have way more RAM free than is being used (i.e. /proc/meminfo shows 20+GB MemFree). Has anyone found a workaround to this problem in the past? Thanks for any thoughts. Robert
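    One mitigation often suggested for this pattern (a huge sequential write whose writeback stalls against a slow or remote block device) is to make the kernel start flushing dirty pages earlier and in smaller amounts, so pdflush never has gigabytes queued up against the EBS volume. A sketch with illustrative, not tuned, values:

      # flush sooner and cap the amount of dirty page cache
      sysctl -w vm.dirty_background_ratio=5
      sysctl -w vm.dirty_ratio=10
      # on kernels >= 2.6.29 you can cap by absolute size instead
      sysctl -w vm.dirty_background_bytes=268435456   # 256 MB, example value
      sysctl -w vm.dirty_bytes=1073741824             # 1 GB, example value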

    Read the article

  • Excel SUM From Different Sheets IF Date Found

    - by user329005
    I have a workbook with separate sheets for each product (about 20 sheets, adding more on a regular basis). Each product is only available for a certain time frame, and has daily sales data recorded on that product's sheet. I want an overall snapshot across all products from any given date to be consolidated on a new sheet. This would sum from a particular column on each of the other sheets if a corresponding date exists. I have a moderately passable function right now that has a separate VLOOKUP for each product sheet like SUM(IF(ISERROR(VLOOKUP(DATECELL,SHEETNAME!ARRAY,COLUMN... next VLOOKUP, next VLOOKUP etc., but it's incredibly cumbersome to update each function when a new product is added. I'm thinking there's a much easier way utilizing a named group (sheet names), SUMIF, VLOOKUP etc. Then when a new product sheet is added, I can simply add the sheet name to the named group rather than editing all the functions. Any help would be much appreciated!
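    A hedged sketch of the named-range approach, assuming the sheet names are listed in a range named SheetList, the dates sit in column A and the sales figures in column B of every product sheet, and the lookup date is in A2 of the summary sheet (all of those names and cell references are examples):

      =SUMPRODUCT(SUMIF(INDIRECT("'"&SheetList&"'!A:A"),A2,INDIRECT("'"&SheetList&"'!B:B")))

    SUMIF returns 0 for sheets where the date doesn't appear, so missing dates simply don't contribute, and adding a product only means adding its sheet name to SheetList.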

    Read the article

  • How to reduce the time spent pulling from git each time you do a make world on the Xen source

    - by Registered User
    I am compiling Xen from source, and each time I do a make world it gives some error or other. My problem is not those errors (I am trying to debug them); the problem is that each time I do a make world, Xen pulls things from the git repository:

      + rm -rf linux-2.6-pvops.git linux-2.6-pvops.git.tmp
      + mkdir linux-2.6-pvops.git.tmp
      + rmdir linux-2.6-pvops.git.tmp
      + git clone -o xen -n git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git linux-2.6-pvops.git.tmp
      Initialized empty Git repository in /usr/src/xen-4.0.1/linux-2.6-pvops.git.tmp/.git/
      remote: Counting objects: 1941611, done.
      remote: Compressing objects: 100% (319127/319127), done.
      remote: Total 1941611 (delta 1614302), reused 1930655 (delta 1604595)
      Receiving objects: 20% (1941611/1941611), 98.17 MiB | 87 KiB/s, done.

    If you notice the last line, it is still consuming my bandwidth pulling things from the internet. How can I stop this step each time and use the existing git repository?
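    One workaround that relies only on stock git features (no Xen build variables involved, so treat it as a sketch): clone the kernel tree once into a local bare mirror, then tell git to rewrite that remote URL to the mirror, so the build's repeated git clone reads from local disk instead of the network. The mirror path below is just an example:

      # one-time: mirror the repository locally
      git clone --mirror git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git \
          /usr/src/mirrors/xen-pvops.git

      # from now on, any clone of that URL is served from the local mirror
      git config --global url."/usr/src/mirrors/xen-pvops.git".insteadOf \
          "git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git"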

    Read the article

  • Symantec (Veritas) Storage Foundation for Windows [closed]

    - by SvrGuy
    Does anyone out there have rough (I don't need exact) pricing for Symantec (used to be Veritas) Storage Foundation for Windows? It's for Windows Server 2008 R2. Ideally, I would love to know the cost of Storage Foundation for Windows, and also the price of the options (like VRR, HA, etc.) if you happen to know them. Getting the information out of a reseller is like pulling teeth: they want to meet with us and discuss our needs, etc. My needs are just to know whether it's $100, $500, $1,000 or $10,000 per server in small quantities (i.e. fewer than 20 licences). Arghh. Anyone know the rough prices?

    Read the article

  • Reclaiming deleted disk space from file vault

    - by cbrulak
    I have my main user account encrypted with FileVault. After deleting some data (like 20 GB), my free space on the hard drive hasn't changed (yes, I emptied the trash, confirmed that the files are actually gone, etc., etc.). I also tried "erasing free space" in the Disk Utility app. I logged off and rebooted, and so far that space hasn't been reclaimed. I'm assuming FileVault or Disk Utility has some method of reclaiming it, but I can't find it. Any ideas?
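    With the pre-Lion FileVault, the home directory lives inside a sparse bundle disk image, and deleted space stays allocated until that image is compacted (the OS normally offers to do this at logout when it thinks enough space is recoverable). A hedged sketch, run from a different admin account while the FileVault user is logged out; the path below is a guess at the usual location, so verify it first:

      # locate the encrypted home image, then compact it
      sudo ls /Users/.username/
      sudo hdiutil compact /Users/.username/username.sparsebundle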

    Read the article

  • Windows VPS running Apache and MySQL, PHP scripts running slow, but CPU usage is 1-3%

    - by Roeland
    So every night I run some cron jobs. It takes probably about 20 minutes to process all the records; I gather the script does something like 10,000 SQL queries. I figured this task was just that intense and needs time to complete, but I looked at CPU and memory usage, and it is super low. CPU usage is between 1-3% and once in a while will bounce to 50ish for 2-3 seconds. This VPS is running Windows Server 2003 with Apache and MySQL. Does this sound right?
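    Low CPU with a long wall-clock time usually means the job is waiting on something (disk, network round trips, or individual slow queries) rather than computing. One cheap way to see whether the database is the bottleneck is MySQL's slow query log; a sketch of the relevant my.ini/my.cnf lines, using the 5.1+ syntax (older 5.0 servers use log-slow-queries instead, and the file path is only an example):

      [mysqld]
      slow_query_log      = 1
      slow_query_log_file = C:/mysql-slow.log
      long_query_time     = 1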

    Read the article

  • How do you expand a RAID disk array in a Dell 2850?

    - by johnny
    Hi, I have a Dell 2850 and I want to install Windows Server 2008. The problem is that my C drive only has 16GB of space, and the requirements say I need at least 20. I have an open bay for a drive. If I put in another drive, how can I add that to the array and then make it only for the C drive? What do I do? Thank you. Edit: I don't want to remove any drives. I just want to add a new one to the existing array. Can I do that and make sure the new drive is for the logical C drive?

    Read the article

  • Amazon EC2 hostnames

    - by Firefly
    I'm currently trying to set up a Tigase cluster on Amazon EC2 instances in a VPC, and I'm having trouble getting it to work because the hostnames of the instances are not "full DNS names". According to the Tigase documentation: "Please note the proper DNS configuration is critical for the cluster to work correctly. Make sure the 'hostname' command returns a full DNS name on each cluster node." Can anyone explain what a full DNS name is and how I can set my instances to use one? Currently my instances get a default hostname of the form "ip-10-0-0-20".
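    A "full DNS name" here just means a fully qualified domain name (host plus domain), and on Linux the quickest way to satisfy a check like Tigase's is to make hostname -f resolve via /etc/hosts. A sketch using made-up names and the instance's private IP:

      # set the short and full hostname (persist it in /etc/hostname or
      # /etc/sysconfig/network depending on the distribution)
      hostname node1.tigase.internal

      # map the private IP to the FQDN so 'hostname -f' can resolve it
      echo "10.0.0.20  node1.tigase.internal node1" >> /etc/hosts

      hostname -f   # should now print node1.tigase.internal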

    Read the article

  • What could cause a flurry of Microsoft-Windows-Servicing events?

    - by MattUebel
    I have a Windows 2k8 machine that generated almost 40,000 WinEventLog:System events in the period of about 20 minutes. The breakdown of these events by event code was approximately:

      4373  46%
      4371  46%
      4383   7%
      4372   1%

    Microsoft-Windows-Servicing seemed to go crazy for a short time, looking at updates, changing the state of updates, etc. What could have caused this? UPDATE: Many of the events seem to come in pairs of: "Windows Servicing started a process of changing package KBfoobar state from Installed(Installed) to Installed(Installed)" and "Windows Servicing successfully set package KBfoobar state to Installed(Installed)"

    Read the article

  • Can I use CAT 6a connectors with 7a cable (and get 6a performance)?

    - by Mr. Flibble
    I'm re-wiring a building and want to get the best cable possible laid: it required re-plastering to make a change to the cables, and the cables will be there for the next 10-20 years. Currently there appears to be Cat 7a cable available, but not too much in the way of Cat 7a connectors. Also, I won't be using 40Gig hardware in the near future. So, my question: is it possible to use Cat 6a connectors / patch panels with Cat 7a cable and get the same performance as I would had I used Cat 6a cable? Are there any gotchas in trying to do this?

    Read the article

  • Corosync - stopping the service crashes the server

    - by Antipop
    I am trying to set up a test cluster on a Xen server with 2 paravirtualized CentOS 5.4 machines. I am using Pacemaker+Corosync, and following the instructions found at http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf and other sites. Anyway, when I try to manually stop the corosync service, about 80% of the time the whole VM locks up with the message "Waiting for corosync services to unload" and I am forced to shut the machine down manually. The remaining 20% of the time, the VM keeps responding and adds dots to the above message, but it won't actually stop the service. There aren't many resources on the internet about this particular error. Any ideas about this? Thanks in advance.

    Read the article

  • Extracting httpdocs from Plesk Panel 9.5.4 Webserver backup file

    - by Paddington
    Good day, I am having problems manually extracting domains from a Plesk 9.5 backup that was FTPed onto my backup server. I have followed the article http://kb.parallels.com/en/1757 using method 2. The problem is at this step: zcat DUMP_FILE.gz > DUMP_FILE. My backup file CP_1204131759.tar is a tar archive, so zcat does not work with it. So I proceed to run the command cat CP_1204131759.tar > CP_1204131759 instead. But when I try # cat CP_1204131759 | munpack I get an error that munpack did not find anything to read from standard input. I went on to extract the tar backup file using the xvf flags and got a lot of files (20) similar to these:

      CP_sapp-distrib.7686-0_1204131759.tgz
      CP_sapp-distrib.7686-35_1204131759.tgz
      CP_sapp-distrib.7686-6_1204131759.tgz

    How best can I extract the httpdocs of a domain from this server-wide Plesk 9.5.4 backup?

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB-Disk: kaefert@blechmobil:~$ lsusb -s 2:3 Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC As can be seen in this dmesg output, there are some problems that prevents that disk from beeing mounted: kaefert@blechmobil:~$ dmesg | grep sdb [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.501649] sdb: sdb1 [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519) [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted! So I went and fired up my favorite partition manager - gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)) e2fsck -f -y -v /dev/sdb1 Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bugreport https://bugzilla.gnome.org/show_bug.cgi?id=467925 ) I started this whole thing on Sunday (2012-11-04_2200) evening, so about 48 hours ago, this is what htop says about it now (2012-11-06-1900): PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1 Now I found a few posts on the internet that discuss e2fsck running slow, for example: http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that its a good idea to see if the disk is just that slow because maybe its damaged, and I think these outputs tell me that this is not the case in my case: kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec kaefert@blechmobil:~$ sudo hdparm /dev/sdb /dev/sdb: multcount = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 364801/255/63, sectors = 5860533160, start = 0 However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this: kaefert@blechmobil:~$ iostat -x Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 14,24 47,81 14,63 0,95 0,00 22,37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99 sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04 Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this: kaefert@blechmobil:~$ sudo strace -p3704 lseek(4, 41026998272, SEEK_SET) = 41026998272 write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096 lseek(4, 48404766720, SEEK_SET) = 48404766720 read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096 lseek(4, 41027002368, SEEK_SET) = 41027002368 write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096 lseek(4, 
48404770816, SEEK_SET) = 48404770816 read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096 lseek(4, 41027006464, SEEK_SET) = 41027006464 write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096 lseek(4, 48404774912, SEEK_SET) = 48404774912 read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096 ^CProcess 3704 detached around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot.. And finally, my question: Will this process ever finish? If those numbers from fseek (48404774912) represent bytes, that would be something like 45 gigabytes, with this beeing a 3 terrabyte disk, which would give me 134 days to go, if the speed stays constant, and he scans the disk like this completly and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it to this disk, so I would prefer to getting this disk up and running again, without formatting it anew. I don't think that the hardware is damaged since the disk is only a few months and since I can't see any I/O errors in the dmesg output. UPDATE: I just looked at the strace output again (2012-11-06_2300), now it looks like this: lseek(4, 1419860611072, SEEK_SET) = 1419860611072 read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096 lseek(4, 43018145792, SEEK_SET) = 43018145792 write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096 lseek(4, 1419860615168, SEEK_SET) = 1419860615168 read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096 lseek(4, 43018149888, SEEK_SET) = 43018149888 write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096 lseek(4, 1419860619264, SEEK_SET) = 1419860619264 read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096 lseek(4, 43018153984, SEEK_SET) = 43018153984 write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096 So this number of the lseeks before the reads, like 1419860619264 are already a lot bigger, standing for 1.29 terabytes if the numbers are bytes, so it doesn't seem to be a linear progress on a big scale, maybe there are only some areas that need work, that have big gaps in between them. (times are in CET)
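    One small thing that helps with the "will it ever finish" question: e2fsck can display a progress bar if you pass -C 0 (it writes completion information to the given file descriptor, and 0 means show a bar on the terminal). A sketch, assuming you are willing to restart the check by hand instead of through gparted:

      # same check, but with a visible progress indicator
      e2fsck -f -y -C 0 /dev/sdb1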

    Read the article

  • Good Slideshow DVD programs?

    - by AlexMax
    I want to burn a bunch of pictures to a DVD slideshow. However, Google reveals that there are tons of software programs that claim to do this. Can anyone recommend one of them, without me having to download 20 of them only to discover half don't work and the other half are free trials of $29 software that doesn't work either? The only one I have tried out so far is DVD Slideshow GUI, which is simply a mess of programs. It was free, but it crashed a whole bunch, spat errors at me when I tried to preview, and never worked when I tried to export the slideshow to MPG.

    Read the article

  • Bad IIS 7.5 performance on webserver

    - by Robert P.
    I have a web page (ASP.NET 4.0 / MVC 4). On my development machine (i5-2500 3.3GHz, 8GB, Win7, VS2010 SP1, Fujitsu Esprimo P700) the page does 160 requests/sec on the devenv web server and 250 requests/sec on my local IIS 7.5 (uncompiled web). The same page does only 20 requests/sec on a 16-core, 32GB RAM production server (Fujitsu RX-300, W2K8 R2, IIS 7.5) (compiled web). Why? I think it's the IIS configuration, but I can't figure out what the problem is. The page runs with 1 worker process on both machines. A web garden is not an option (it helps, but the app isn't compatible with it).
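    One classic culprit worth ruling out before digging into IIS itself: a production web.config that still has debugging enabled, which turns off batch compilation and several caching optimizations and can easily cost an order of magnitude in throughput. A sketch of the setting to check (the targetFramework attribute is shown only as an example):

      <!-- web.config on the production server -->
      <system.web>
        <compilation debug="false" targetFramework="4.0" />
      </system.web>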

    Read the article

  • Outlook 2010 - Downloads same messages multiple times when opening

    - by AaronM
    I run multiple POP mailboxes and have the 'Leave a copy of messages on the server' option set for 20 days. I have been working like this for several years. However, I have just this week upgraded to Outlook 2010, and for some reason a couple of the mailboxes are downloading all the messages again each time I open Outlook. I have gone through all the settings but I can't see anything obvious, and apart from the upgrade to 2010 I haven't touched any settings (2010 imported everything from 2007). I use webmail when away from my PC, hence why I leave a copy of emails on the server. Any ideas? It only downloads when Outlook first opens; thankfully not every time I do a send/receive.

    Read the article

  • LXC Container Networking

    - by digitaladdictions
    I just started to experiment with LXC containers. I was able to create a container and start it up but I cannot get dhcp to assign the container an IP address. If I assign a static address the container can ping the host IP but not outside the host IP. The host is CentOS 6.5 and the guest is Ubuntu 14.04LTS. I used the template downloaded by lxc-create -t download -n cn-01 command. If I am trying to get an IP address on the same subnet as the host I don't believe I should need the IP tables rule for masquerading but I added it anyways. Same with IP forwarding. I compiled LXC by hand from the following source https://linuxcontainers.org/downloads/lxc-1.0.4.tar.gz Host Operating System Version #> cat /etc/redhat-release CentOS release 6.5 (Final) #> uname -a Linux localhost.localdomain 2.6.32-431.20.3.el6.x86_64 #1 SMP Thu Jun 19 21:14:45 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux Container Config #> cat /usr/local/var/lib/lxc/cn-01/config # Template used to create this container: /usr/local/share/lxc/templates/lxc-download # Parameters passed to the template: # For additional config options, please look at lxc.container.conf(5) # Distribution configuration lxc.include = /usr/local/share/lxc/config/ubuntu.common.conf lxc.arch = x86_64 # Container specific configuration lxc.rootfs = /usr/local/var/lib/lxc/cn-01/rootfs lxc.utsname = cn-01 # Network configuration lxc.network.type = veth lxc.network.flags = up lxc.network.link = br0 LXC default.confu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:12:30:f2 brd ff:ff:ff:ff:f #> cat /usr/local/etc/lxc/default.conf lxc.network.type = veth lxc.network.link = br0 lxc.network.flags = up #> lxc-checkconfig Kernel configuration not found at /proc/config.gz; searching... Kernel configuration found at /boot/config-2.6.32-431.20.3.el6.x86_64 --- Namespaces --- Namespaces: enabled Utsname namespace: enabled Ipc namespace: enabled Pid namespace: enabled User namespace: enabled Network namespace: enabled Multiple /dev/pts instances: enabled --- Control groups --- Cgroup: enabled Cgroup namespace: enabled Cgroup device: enabled Cgroup sched: enabled Cgroup cpu account: enabled Cgroup memory controller: /usr/local/bin/lxc-checkconfig: line 103: [: too many arguments enabled Cgroup cpuset: enabled --- Misc --- Veth pair device: enabled Macvlan: enabled Vlan: enabled File capabilities: /usr/local/bin/lxc-checkconfig: line 118: [: -gt: unary operator expected Note : Before booting a new kernel, you can check its configuration usage : CONFIG=/path/to/config /usr/local/bin/lxc-checkconfig Network Config (HOST) #> cat /etc/sysconfig/network-scripts/ifcfg-br0 DEVICE=br0 TYPE=Bridge BOOTPROTO=dhcp ONBOOT=yes #> cat /etc/sysconfig/network-scripts/ifcfg-eth0 DEVICE=eth0 ONBOOT=yes TYPE=Ethernet IPV6INIT=no USERCTL=no BRIDGE=br0 #> cat /etc/networks default 0.0.0.0 loopback 127.0.0.0 link-local 169.254.0.0 #> ip a s 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:12:30:f2 brd ff:ff:ff:ff:ff:ff inet6 fe80::20c:29ff:fe12:30f2/64 scope link valid_lft forever preferred_lft forever 3: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether 42:7e:43:b3:61:c5 brd ff:ff:ff:ff:ff:ff 4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 
00:0c:29:12:30:f2 brd ff:ff:ff:ff:ff:ff inet 10.60.70.121/24 brd 10.60.70.255 scope global br0 inet6 fe80::20c:29ff:fe12:30f2/64 scope link valid_lft forever preferred_lft forever 12: vethT6BGL2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether fe:a1:69:af:50:17 brd ff:ff:ff:ff:ff:ff inet6 fe80::fca1:69ff:feaf:5017/64 scope link valid_lft forever preferred_lft forever #> brctl show bridge name bridge id STP enabled interfaces br0 8000.000c291230f2 no eth0 vethT6BGL2 pan0 8000.000000000000 no #> cat /proc/sys/net/ipv4/ip_forward 1 # Generated by iptables-save v1.4.7 on Fri Jul 11 15:11:36 2014 *nat :PREROUTING ACCEPT [34:6287] :POSTROUTING ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A POSTROUTING -o eth0 -j MASQUERADE COMMIT # Completed on Fri Jul 11 15:11:36 2014 Network Config (Container) #> cat /etc/network/interfaces # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp #> ip a s 11: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 02:69:fb:42:ee:d7 brd ff:ff:ff:ff:ff:ff inet6 fe80::69:fbff:fe42:eed7/64 scope link valid_lft forever preferred_lft forever 13: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever
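    A diagnostic sketch for this kind of bridged-DHCP problem (commands assume the container's interface is eth0 and the host bridge is br0, as in the configs above): ask for a lease inside the container while watching on the host whether the DHCP broadcasts actually reach the bridge, and check whether bridged frames are being run through iptables at all:

      # inside the container
      dhclient eth0

      # on the host: do the DISCOVER/OFFER packets show up on the bridge?
      tcpdump -n -e -i br0 port 67 or port 68

      # if requests appear on br0 but no offer ever comes back, check whether
      # bridged traffic is being passed through iptables (1 = yes)
      cat /proc/sys/net/bridge/bridge-nf-call-iptables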

    Read the article

  • How can I make Excel correlate data from two data sets into a single graph?

    - by Tom Ritter
    I have two datasets, one being sparser than the other. They look like this:

      Data Set 1:
      4  50
      5  55
      6  60
      7  70
      8  80

      Data Set 2:
      4  10
      6  20
      8  30

    I have several hundred points instead of this few. I want them in the same graph, the X axis being 4-8, the Y axis being 0-100ish, and two lines, one for each data set. What I get is two lines, not correlated at all along the X axis, and the X axis being labeled from one of the two datasets, with the labels being wrong for the other. The smaller data set is one point per tick on the X axis, when I need it to skip ticks and actually line up with the other data set. Not married to Excel; willing to try this in something else if it's free.

    Read the article

  • Recurring Apache 2.0.52 error on CentOS 4 - 'could not create `rewrite_log_lock`'

    - by warren
    I have been seeing a recurring issue on my web server:

      [Sun May 16 03:10:19 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock Configuration Failed
      [Sun May 16 04:10:05 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock Configuration Failed
      [Sun May 16 05:10:04 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock Configuration Failed
      [Sun May 16 05:17:13 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock Configuration Failed

    So far, the only fix I have found to this when it happens is to reboot my server. This is non-ideal :-\ Restarting httpd does not clear the error. df indicates I have 20+ gigs free, and top and free both report 800+ megs (or 1.2 gigs) free:

      > df -h
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/simfs             40G   18G   23G  44% /

      > free
                   total       used       free     shared    buffers     cached
      Mem:       1474560     300832    1173728          0          0          0
      -/+ buffers/cache:     300832    1173728

    Any ideas on why this would recur, and how to prevent/fix it?
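    Despite the wording, "(28)No space left on device" from Apache at startup usually refers to SysV IPC semaphores being exhausted rather than disk space, which would also explain why df looks fine and only a reboot clears it. A hedged sketch of the usual non-reboot fix (inspect the ipcs output before removing anything; the kernel.sem values are examples):

      # list SysV semaphore arrays and their owners
      ipcs -s

      # remove stale arrays left behind by the apache user (column 2 is the
      # semid, column 3 the owner in ipcs output)
      for id in $(ipcs -s | awk '$3 == "apache" {print $2}'); do ipcrm -s "$id"; done

      # then restart httpd; if it keeps recurring, raise the semaphore limits
      echo "kernel.sem = 250 32000 32 256" >> /etc/sysctl.conf
      sysctl -p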

    Read the article

  • How do I get a 300Mbps connection over 802.11n?

    - by Daniel Schaffer
    I just bought a new wireless setup consisting of a Cisco E2000 router, an Edimax 7718un USB adapter for my laptop and an Edimax 7728in PCI adapter for my HTPC (which isn't in a location I can run Cat5 to). I have to stay in the 2.4GHz band because I have an iPhone and a Wii that will need to connect. I'm aware that 11g devices will drop speeds for 11n devices, but they aren't connected yet. The fastest connection I've been able to get, with the router within 5 feet of either the laptop or the HTPC, is 144Mbps. The router has settings for "20MHz" and "Auto (20MHz or 40MHz)" channel width, which I've set to "Auto". I haven't been able to find anything similar for either of the Edimax adapters. This is the first time I've dealt with 11n, so I'm not even sure what else could be causing a problem. How do I get up to 300Mbps, or at least a fair bit closer?

    Read the article
