Search Results

Search found 9715 results on 389 pages for 'bad passwords'.


  • I cut-to-move my DCIM folder to external SD when an automatic Android OS update popped up before I could choose the target - cannot recover 200+ photos

    - by ZeroG
    I was moving my Exhibit II's DCIM camera folder (with months' worth of photos inside) to its external SD card, in order to transfer them to my laptop. In my overconfidence, I hurriedly chose cut-to-move (rather than copy-to-move) when KABOOM! An automatic Android OS update popped up before I could choose the target!!! I figured everything was in cache and calmly tried to go through with the update. But that was not a typically seamless event. It showed the downloading icon, but hmm... since I had rooted the phone, it brought up the command line and the recovery sequence. But neither Android nor I had yet downloaded any alternate custom ROM files to internal SD to update from! So were they trying to make me unroot my phone by giving me some bogus update on the fly, or just giving me a hard time by trying to hand me an unrooted ROM that I'd have to figure out how to root again? Yes, I know there was that blurb about overwriting a file of the same name, but I was trying to shake the darn stubborn update being forced on my phone during this precarious moment. I thought I had frozen or turned off all those auto-updates previously. Anyway, phones are small and fingers are big (sigh)...

    I tried to reboot into safe mode, but the resulting photo folder was partially overwritten: 200 files had names but zero bytes in them. I thought the data might still be hung in cache or deposited somewhere else, but I have searched everywhere with file managers. Since I did not have Titanium backing up the camera folder, photo folder, or gallery, I cannot recover the 200+ photos. Dumb. You can understand my dilemma, as I am involved in the arts, and although these are just camera-phone shots, most of these photos were historic and aesthetic, or at least as to subject matter. Photo ops don't reoccur.

    I have tried a couple of recovery apps from the market, like Search Duplicates & Recover, to no avail. I was only able to salvage stuff I'd sent out in messages. I've got several decades in computers, and this is such a miserable beginner's piece of bad luck I can't believe it happened to me. They were precious photos! Yes, I have turned on Titanium since, and yes, I even tried USB-to-laptop recoveries. Being on a MacBook Pro, I'm trying androidfiletransfer.dmg, but I'd have to upgrade to Peach Sunrise to get above Android 3.0 for that app to recognize the phone via USB, and the programmer says installation zeros your data, so that pretty much toasts any secret hidden place where these photos may have been deposited. I don't want to do that and am still trying to find them. They certainly didn't make it to my external SD card. If any of you techies out there know anything, please help, and thanks. Despite decades of being in computing, unfamiliar and ever-changing hardware or software can humble even the most seasoned veterans.
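
    If the zero-byte files are just truncated directory entries, the underlying JPEG data may still be sitting on the card or internal storage, and a signature-scanning carver such as PhotoRec (part of the free TestDisk package, which runs on OS X and Linux) can often pull images back without the filesystem being intact. A minimal sketch; the device name /dev/disk2 is an assumption, so check diskutil list (OS X) or dmesg first, and always recover onto a different disk than the one being scanned:

        # Carve recoverable JPEGs from the raw card into a folder on the laptop
        sudo photorec /log /d ~/recovered_photos /dev/disk2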

    Read the article

  • Server Admin is not allowing me to configure DNS

    - by Clinton Blackmore
    We have a Mac OS X 10.5.8 Server running DNS (and a few other services). When I connect to it (using Server Admin 10.5.3 [which comes from the Server Admin 10.5.7 tools]) and click to look at the DNS settings, all appears normal -- it shows many reverse entries and two top-level domains. However, when I select one of our domains and open the disclosure triangle, the list is empty! [There should be over a dozen entries, and the reverse entries do show up.] If I then tell it I want to add, say, an A record to the domain, almost everything disappears -- and I am left with a list showing our two domains, one with a disclosure triangle underneath it showing a single entry, and one reverse entry to correspond to the new A record.

    named appears to be working fine and DNS names resolve. It appears to simply be that Server Admin is having problems with the data on the computer. No one here would have manually created a DNS entry. Now, while I think I've backed up the DNS (I backed up /var/named/, /etc/named.conf, and /etc/dns/, as mentioned here), I'm really not sure whether just replacing the files would restore the DNS settings we have if things go south.

    I am contemplating going to Settings and changing the log level from "Information" to "Debug", but 1) I am just a little concerned that it might write a bad configuration to the disk, and 2) I think it would only affect named and not Server Admin, and, so far as I can tell, named is not having a problem. (Nothing looks strange in /Library/Logs/named.log when I open it via Console/Terminal. Oddly, though, when I click on the 'log' button for DNS in Server Admin, I see no text at all, just a fully white window. When I look at one of our secondary DNS servers, I am able to see the log file through Server Admin.)

    This entry appears in the system log when I run Server Admin on the server:

        Jun 17 09:02:08 od1 Server Admin[3892]: Unexpected call to doMarkConfigurationAsDirty by 'DNS' plugin during updateConfigurationViewFromDescription

    It seems to occur after I've looked at DNS, looked at another service, and then clicked back on DNS. Thinking that the most likely cause is a corrupt configuration file, I glanced through all the files that I backed up, and none of them is obviously gobbledygook.

    Here are some oddities I find when running Server Admin from a remote computer to manage the DNS. When I click to see the log file for DNS, the server starts writing messages like these to its system.log:

        Jun 17 09:59:04 od1 kernel[0]: Limiting open port RST response from 252 to 250 packets per second
        Jun 17 09:59:06 od1 kernel[0]: Limiting open port RST response from 258 to 250 packets per second

    This stops when I click on a different service. Also, the indeterminate progress indicator (the spinning wheel that appears beside the "Revert" and "Save" buttons in the bottom-right corner of Server Admin) looks really strange. As far as I can tell, instead of just spinning and waiting, it is being told to start spinning repeatedly, resulting in a jerky animation.

    Here are some of the messages being logged on the computer running Server Admin. At startup:

        *** ERROR: -[GRAxes computeLayout]:1124 - plotRect height = 0.000000 <= 0.0 ***
        *** ERROR: -[GRChartView computeLayout]:1194 - Layout for overlay axes (0x18758f50) failed. ***

    (These messages don't concern me too much, as they go away for a while if you delete ~/Library/Preferences/com.apple.ServerAdmin.plist.) At shutdown:

        2010-06-17 10:02:17.202 Server Admin[7770:10b] *** -[GroupTextField windowDidResignKey:]: unrecognized selector sent to instance 0x16e12490

    More concerning are these messages:

        2010-06-17 09:59:47.269 Server Admin[7770:10b] Unexpected call to doMarkConfigurationAsDirty by 'DNS' plugin during updateConfigurationViewFromDescription
        Server Admin(7770,0xb0453000) malloc: *** error for object 0x1c115390: double free
        *** set a breakpoint in malloc_error_break to debug
        2010-06-17 10:01:00.795 Server Admin[7770:10b] *** -[ServiceEntry sessionHost]: unrecognized selector sent to instance 0x2af500

    Any thoughts on what the problem is, how I can troubleshoot it, or how to fix it? If I do need to wipe out DNS and restart, is there a good way to do this?
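
    Whatever the fix turns out to be, it is cheap to snapshot everything named and Server Admin read before experimenting with log levels, so a bad write can be rolled back. A minimal sketch using the same paths already backed up above:

        # Timestamped tarball of the current DNS state
        sudo tar czf ~/dns-backup-$(date +%Y%m%d-%H%M).tar.gz \
            /var/named /etc/named.conf /etc/dns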

    Read the article

  • Ubuntu 12 crashed and took down network

    - by Leopd
    We recently set up a new Ubuntu 12.04 LTS server on our network. It's not fully configured, so it's not doing much beyond sshd and a default apache2 install. But this evening it appears to have crashed: it wasn't responding to the network or the keyboard. The worst part is, it took down the entire network. My knowledge of the network stack below OSI layer 3 is very limited, so the rest confuses me.

    When this machine was physically connected to the network, no other machine could connect to the outside internet. While things were broken, running arp showed that our gateway's IP address (10.0.1.1) was listed as "invalid." Unplugging the server from the network fixed the problem, and plugging it back in broke it again. So the crashed server was advertising itself as owning the gateway's IP address? There's nothing at all in syslog during the time when it was causing problems. Any ideas about how to figure out what went wrong, or what we can do to prevent it from happening again? I'm hesitant to even put the machine back on the network right now.

    Update: It crashed again, and I ran tcpdump -penn arp (thanks bahamat!) for several minutes and got this (timestamps and duplicate lines removed):

        00:1e:65:f8:dc:24 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 10.0.1.1 tell 10.0.2.191, length 46
        00:1e:65:f8:dc:24 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 10.0.1.44 tell 10.0.2.191, length 46
        60:d8:19:d4:71:d6 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 10.0.1.1 tell 10.0.2.125, length 46
        d4:9a:20:04:e9:78 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 192.168.1.1 tell 192.168.1.100, length 28

    Update 2: When the network is functioning properly, arping -c4 10.0.1.1 returns this:

        ARPING 10.0.1.1
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=0 time=267.982 usec
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=1 time=422.955 usec
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=2 time=299.215 usec
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=3 time=366.926 usec
        --- 10.0.1.1 statistics ---
        4 packets transmitted, 4 packets received, 0% unanswered (0 extra)

    When the bad server is plugged in, arping -c4 10.0.1.1 returns:

        ARPING 10.0.1.1
        --- 10.0.1.1 statistics ---
        4 packets transmitted, 0 packets received, 100% unanswered (0 extra)

    Context: 10.0.x.x is the main subnet; 10.0.1.1 is the main internet gateway; 10.0.1.44 is a printer; 10.0.2.* devices are all laptops and workstations. I have no idea what's using the 192.168.x.x subnet; your guesses are at least as good as mine. A VM on a workstation? A misconfigured WAP? Somebody re-sharing wifi? A machine that failed to DHCP? The offending Ubuntu server's MAC address ends in cd:80, so it isn't listed in the dump. It should DHCP to 10.0.3.3.

    Thanks for any help. This ARP stuff is all voodoo to me. Packets just go to IP addresses, right? ;)
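
    One stopgap while the root cause is hunted down: pin the gateway's ARP entry on the machines you care about, so a bogus ARP reply cannot overwrite it. A minimal sketch, using the gateway MAC c0:c1:c0:77:25:8e from the arping output above (the entry does not survive a reboot unless added to a startup script):

        # Pin the gateway's IP-to-MAC mapping so rogue ARP replies are ignored
        sudo arp -s 10.0.1.1 c0:c1:c0:77:25:8e

        # Verify the entry is now flagged permanent (PERM)
        arp -n 10.0.1.1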

    Read the article

  • Degraded RAID5 and no md superblock on one of the remaining drives

    - by ark1214
    This is actually on a QNAP TS-509 NAS; the RAID is basically a Linux RAID. The NAS was configured with RAID 5 across 5 drives (/dev/md0 with /dev/sd[abcde]3). At some point, /dev/sde failed and the drive was replaced. While rebuilding (and before it completed), the NAS rebooted itself and /dev/sdc dropped out of the array. Now the array can't start because essentially two drives have dropped out. I disconnected /dev/sde and hoped that /dev/md0 could resume in degraded mode, but no luck. Further investigation shows that /dev/sdc3 has no md superblock.

    The data should be good, since the array was unable to assemble after /dev/sdc dropped off. All the searches I've done show how to reassemble the array assuming one bad drive. But I think I just need to restore the superblock on /dev/sdc3, and that should bring the array up in degraded mode, which will allow me to back up data and then proceed with rebuilding after adding /dev/sde. Any help would be greatly appreciated.

    mdstat does not show /dev/md0:

        # cat /proc/mdstat
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
        md5 : active raid1 sdd2[2](S) sdc2[3](S) sdb2[1] sda2[0]
              530048 blocks [2/2] [UU]
        md13 : active raid1 sdd4[3] sdc4[2] sdb4[1] sda4[0]
              458880 blocks [5/4] [UUUU_]
              bitmap: 40/57 pages [160KB], 4KB chunk
        md9 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
              530048 blocks [5/4] [UUUU_]
              bitmap: 33/65 pages [132KB], 4KB chunk

    mdadm shows /dev/md0 is still there:

        # mdadm --examine --scan
        ARRAY /dev/md9 level=raid1 num-devices=5 UUID=271bf0f7:faf1f2c2:967631a4:3c0fa888
        ARRAY /dev/md5 level=raid1 num-devices=2 UUID=0d75de26:0759d153:5524b8ea:86a3ee0d spares=2
        ARRAY /dev/md0 level=raid5 num-devices=5 UUID=ce3e369b:4ff9ddd2:3639798a:e3889841
        ARRAY /dev/md13 level=raid1 num-devices=5 UUID=7384c159:ea48a152:a1cdc8f2:c8d79a9c

    With /dev/sde removed, here is the mdadm examine output showing that sdc3 has no md superblock:

        # mdadm --examine /dev/sda3
        /dev/sda3:
                  Magic : a92b4efc
                Version : 00.90.00
                   UUID : ce3e369b:4ff9ddd2:3639798a:e3889841
          Creation Time : Sat Dec  8 15:01:19 2012
             Raid Level : raid5
          Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
             Array Size : 5854278400 (5583.08 GiB 5994.78 GB)
           Raid Devices : 5
          Total Devices : 4
        Preferred Minor : 0
            Update Time : Sat Dec  8 15:06:17 2012
                  State : active
         Active Devices : 4
        Working Devices : 4
         Failed Devices : 1
          Spare Devices : 0
               Checksum : d9e9ff0e - correct
                 Events : 0.394
                 Layout : left-symmetric
             Chunk Size : 64K

              Number   Major   Minor   RaidDevice State
        this     0       8        3        0      active sync   /dev/sda3
           0     0       8        3        0      active sync   /dev/sda3
           1     1       8       19        1      active sync   /dev/sdb3
           2     2       8       35        2      active sync   /dev/sdc3
           3     3       8       51        3      active sync   /dev/sdd3
           4     4       0        0        4      faulty removed

        [~] # mdadm --examine /dev/sdb3
        /dev/sdb3:
                  Magic : a92b4efc
                Version : 00.90.00
                   UUID : ce3e369b:4ff9ddd2:3639798a:e3889841
          Creation Time : Sat Dec  8 15:01:19 2012
             Raid Level : raid5
          Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
             Array Size : 5854278400 (5583.08 GiB 5994.78 GB)
           Raid Devices : 5
          Total Devices : 4
        Preferred Minor : 0
            Update Time : Sat Dec  8 15:06:17 2012
                  State : active
         Active Devices : 4
        Working Devices : 4
         Failed Devices : 1
          Spare Devices : 0
               Checksum : d9e9ff20 - correct
                 Events : 0.394
                 Layout : left-symmetric
             Chunk Size : 64K

              Number   Major   Minor   RaidDevice State
        this     1       8       19        1      active sync   /dev/sdb3
           0     0       8        3        0      active sync   /dev/sda3
           1     1       8       19        1      active sync   /dev/sdb3
           2     2       8       35        2      active sync   /dev/sdc3
           3     3       8       51        3      active sync   /dev/sdd3
           4     4       0        0        4      faulty removed

        [~] # mdadm --examine /dev/sdc3
        mdadm: No md superblock detected on /dev/sdc3.

        [~] # mdadm --examine /dev/sdd3
        /dev/sdd3:
                  Magic : a92b4efc
                Version : 00.90.00
                   UUID : ce3e369b:4ff9ddd2:3639798a:e3889841
          Creation Time : Sat Dec  8 15:01:19 2012
             Raid Level : raid5
          Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
             Array Size : 5854278400 (5583.08 GiB 5994.78 GB)
           Raid Devices : 5
          Total Devices : 4
        Preferred Minor : 0
            Update Time : Sat Dec  8 15:06:17 2012
                  State : active
         Active Devices : 4
        Working Devices : 4
         Failed Devices : 1
          Spare Devices : 0
               Checksum : d9e9ff44 - correct
                 Events : 0.394
                 Layout : left-symmetric
             Chunk Size : 64K

              Number   Major   Minor   RaidDevice State
        this     3       8       51        3      active sync   /dev/sdd3
           0     0       8        3        0      active sync   /dev/sda3
           1     1       8       19        1      active sync   /dev/sdb3
           2     2       8       35        2      active sync   /dev/sdc3
           3     3       8       51        3      active sync   /dev/sdd3
           4     4       0        0        4      faulty removed

    fdisk output shows the /dev/sdc3 partition is still there:

        [~] # fdisk -l

        Disk /dev/sdx: 128 MB, 128057344 bytes
        8 heads, 32 sectors/track, 977 cylinders
        Units = cylinders of 256 * 512 = 131072 bytes

           Device Boot  Start    End  Blocks  Id  System
        /dev/sdx1           1      8    1008  83  Linux
        /dev/sdx2           9    440   55296  83  Linux
        /dev/sdx3         441    872   55296  83  Linux
        /dev/sdx4         873    977   13440   5  Extended
        /dev/sdx5         873    913    5232  83  Linux
        /dev/sdx6         914    977    8176  83  Linux

        Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot  Start     End      Blocks  Id  System
        /dev/sda1   *       1      66     530113+  83  Linux
        /dev/sda2          67     132     530145   82  Linux swap / Solaris
        /dev/sda3         133  182338  1463569695  83  Linux
        /dev/sda4      182339  182400     498015   83  Linux

        Disk /dev/sda4: 469 MB, 469893120 bytes
        2 heads, 4 sectors/track, 114720 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Disk /dev/sda4 doesn't contain a valid partition table

        Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot  Start     End      Blocks  Id  System
        /dev/sdb1   *       1      66     530113+  83  Linux
        /dev/sdb2          67     132     530145   82  Linux swap / Solaris
        /dev/sdb3         133  182338  1463569695  83  Linux
        /dev/sdb4      182339  182400     498015   83  Linux

        Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot  Start     End      Blocks  Id  System
        /dev/sdc1           1      66     530125   83  Linux
        /dev/sdc2          67     132     530142   83  Linux
        /dev/sdc3         133  182338  1463569693  83  Linux
        /dev/sdc4      182339  182400     498012   83  Linux

        Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot  Start     End      Blocks  Id  System
        /dev/sdd1           1      66     530125   83  Linux
        /dev/sdd2          67     132     530142   83  Linux
        /dev/sdd3         133  243138  1951945693  83  Linux
        /dev/sdd4      243139  243200     498012   83  Linux

        Disk /dev/md9: 542 MB, 542769152 bytes
        2 heads, 4 sectors/track, 132512 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Disk /dev/md9 doesn't contain a valid partition table

        Disk /dev/md5: 542 MB, 542769152 bytes
        2 heads, 4 sectors/track, 132512 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Disk /dev/md5 doesn't contain a valid partition table
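
    For completeness, the usual last resort when one member has lost only its superblock is to rewrite the array metadata in place with --assume-clean, copying every parameter exactly from the surviving superblocks above (0.90 metadata, left-symmetric layout, 64K chunks, five devices, same device order, slot 4 missing). This is a sketch, not a recipe: getting any parameter or the device order wrong scrambles the array, so verify read-only before trusting the result:

        # Recreate metadata only -- device order must match the RaidDevice
        # slots in the --examine output (sda3=0, sdb3=1, sdc3=2, sdd3=3)
        mdadm --create /dev/md0 --assume-clean --metadata=0.90 \
              --level=5 --chunk=64 --layout=left-symmetric \
              --raid-devices=5 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 missing

        # Check the result read-only before anything writes to the array
        mdadm --detail /dev/md0
        mount -o ro /dev/md0 /mnt/check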

    Read the article

  • Issues with Verizon's "Network Extender" device talking on my home network.

    - by Logan
    I recently switched my phone service to Verizon from ATT, and I get somewhat spotty service in my house. I called them and they sent me a "network extender" device for free. It's a femtocell that connects to my home network. The directions that come with it are very dumbed down; they basically just say to connect it to your router and put it near a window (so it can get a GPS signal -- it has to make sure it's within the correct area before operating).

    The problem I'm having is that the network light on it stays red. The troubleshooting information that came with it tells me this means there is a bad network connection. It's connected through an ASUS router running DD-WRT. No other devices on my network have a problem with it, including a Western Digital WDLIVE device, mine and my wife's cell phones (via wifi), a Wii, and an Xbox. If I connect the device directly to my cable modem, the light goes blue (which means good) and it starts working. So this tells me that it's definitely a configuration issue with my router. Verizon basically washed their hands of me when I connected it to my cable modem, and told me that it's a router issue and to try a different router. Because normal people just have extra routers lying around their houses...

    When I connect it to the router, I can watch the DHCP Clients list on the status page, and the MAC of the network extender quickly fills up the clients list, grabbing every available DHCP address. It's like it grabs an address, can't connect to the internet, releases it, grabs another, then another, then another. So in the DHCP server settings I assigned a static IP to its MAC. This made it quit doing what it was doing before, but it's still not working. I found the ports I needed to open on Verizon's website, and opened them in the port forwarding config on my router. This still didn't help. So I tried setting the network extender device's IP as the DMZ IP on the router. This still did no good.

    I called Verizon back and got the tech to write up a report, which he passed on to a "senior network tech" who I got a call back from a few hours ago. This guy told me that while an ASUS router isn't listed as a supported device, he's not really sure why it's not working. He suggested restoring the firmware to stock ASUS firmware and trying again. I have a very hard time believing it's DD-WRT doing this, since every other device is working just fine with it. But it's also not the network extender, since it works just fine when connected directly to the modem. At this point I'm out of ideas, and the next step is to restore the stock firmware on my router and then go to Walmart and get a Linksys WRT-54G to try. Is there anything else I could try before going that drastic?

    Cliffs:
    - Network extender won't work behind the router; works when connected directly to the cable modem.
    - Extender goes nuts when allowed to pick its own DHCP address; I had to assign it a static IP.
    - Won't work when the correct ports are forwarded to it.
    - Won't work with a DMZ address.
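
    On the DD-WRT side, the static IP is usually better expressed as a DHCP reservation, so dnsmasq hands the extender the same lease every time instead of racing it through the pool. A sketch for the "Additional DNSMasq options" box; the extender's MAC is written hypothetically (the post only says it ends in a Verizon-assigned value), paired with the address you want it to keep:

        # Reserve a fixed address for the network extender, infinite lease
        dhcp-host=xx:xx:xx:xx:xx:xx,192.168.1.50,infinite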

    Read the article

  • How to set up a centralized backup server with lots of offsite workstations, intermittent internet connectivity, and stubborn users?

    - by Zac B
    This might be an impossible question.

    Context: We have a bunch of computers across around 1000 users. We have a centralized office where 900 of the users work, most of the time. Most of the computers are laptops. They are very frequently coming on and off the network for hours at a time. Users often take their computers home and do lots of work from home. In addition, there are a handful of users who work elsewhere in the country, who are offline (no internet connection whatsoever) for more than half of the time they use their machines. All of the machines are Windows 7/XP.

    Problem: People are always losing data. One day someone accidentally deletes a bunch of files. The next day someone else installs a bad driver or tries to mess with something in system32 and needs a personal data backup/reinstall of Windows. Because of how many of our business operations are done without an internet connection, and how frequently computers come on- and offline, it's unfeasible to make users use network storage for all of their data. We tried giving them Dropboxes, and they stored their files elsewhere. We bought and deployed Altiris, and they uninstalled it and blamed us when they couldn't get back files they had accidentally deleted while they were offline, not having taken a backup in months. We tried teaching them backup best practices and using scheduled sync tools to upload things to the network drives, and they turned them off because they "looked like viruses". It doesn't help that many of these users are pretty high up in the business and are not amenable to any sort of "you need to do something regularly because we say so" solution.

    Question: Other than finding another job where IT is treated differently and users are willing to follow best practices, how would people recommend I implement a file backup solution that supports the following:

    - Backs up to a centralized server over LAN or WAN whenever a network link becomes available, or on a schedule.
    - Supports interrupted/resumed backups (and hopefully file-delta-only backups), since connections to the network (WAN or LAN) are often slow and only open for half an hour or so.
    - Supports relatively rapid, "I accidentally deleted the TPS reports! Oh no!" single-file recovery, ideally administered from the central backup server rather than the client PC.
    - Supports local-to-local file-delta backup on a schedule, so that users without a network connection for a few days can still retrieve accidental deletions or whatnot. Ideally, the locally stored backups would be pushed up to the server whenever a network link is available.
    - Isn't configurable on the clients without certain credentials, because the CFOs (who won't give up their admin rights on the domain) will disable it if they can.
    - Backs up the entire hard drive. There are people who are self-righteous about storing things in C:\, or in the recycle bin, or in the C:\Windows dir (yes, I know).

    I'm fine integrating multiple products/solutions, or scripting different programs together myself (I'm a somewhat competent programmer), but I've been drawing a blank on where to start. Dropbox is folder-specific, Altiris doesn't cope with LAN outages or interrupted/resumed backups, and Volume Shadow Copy is awesome for a local-to-local solution, but I don't know how to push days of stored shadow copies up to a server in a 2-hour window of network access. The company is fine with spending decent money on this: thousands (USD) on a server, and hundreds on clients, if necessary.

    I want to emphasize that this isn't a shopping-list request. While I wish there was a program out there that did what I want, I've looked pretty hard and not found anything that fits the bill. Instead, I'm hoping for ideas on where to start hacking things together from scratch/from different technologies to make something stable that works. Cheers!
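
    As one possible starting point for the hacked-together route: rsync already covers the resumable, delta-only, schedule-driven upload half of that list (cwRsync and DeltaCopy package it for Windows, though treat that as an assumption to verify against the XP/7 mix). A minimal client-side sketch that pushes a local staging area whenever it runs and gives up quietly when the server is unreachable; the server name and paths are hypothetical:

        #!/bin/sh
        # Push the local backup staging area to the central server.
        # --partial makes interrupted transfers resumable; rsync's delta
        # algorithm keeps repeat uploads small; --timeout bails out fast
        # on a dead link so a scheduled run never hangs.
        SERVER=backup.example.com
        rsync -az --partial --timeout=60 --bwlimit=2000 \
              /cygdrive/c/BackupStaging/ \
              "backupuser@${SERVER}:backups/$(hostname)/" || exit 0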

    Read the article

  • Dual head setup for Ubuntu 10.04.1 and Windows XP Pro with same hardware configuration

    - by mejpark
    I have a Dell OptiPlex 360 workstation at work, with 2 x ATI RV280 [Radeon 9200 PRO] graphics cards installed, which are attached to two identical 19" HII flat-panel monitors. I'm using the open-source Radeon driver with Ubuntu, and the proprietary drivers with Windows. The good news is that dual-head configuration works for both OSes. The bad news is, I have to use a different hardware configuration for each OS to achieve this.

    Hardware config #1 -- dual monitors work for Windows XP Pro like this:
    - First display -> external VGA port
    - Second display -> DVI input on gfx card

    Hardware config #2 -- dual monitors work for Ubuntu 10.04.1 like this:
    - First display -> VGA port on gfx card
    - Second display -> DVI input on gfx card

    I connected up the displays according to config #2 and booted up Windows, which resulted in a mirror image on both screens. I was unable to log in, as the login box was not visible. I unplugged the VGA lead from the gfx card and plugged it into the external VGA port (config #1) -- Windows dual head works again, but the VGA-connected screen is not recognised by Ubuntu and remains in standby mode. Is it possible to configure a dual-head setup for Ubuntu using config #1, or am I missing something?

    I tried setting up dual monitors using config #1 this morning, which didn't work. By default, there is no xorg.conf file in Ubuntu 10.04.1, so I generated one using:

        $ sudo X :2 -configure

        X.Org X Server 1.7.6
        Release Date: 2010-03-17
        X Protocol Version 11, Revision 0
        Build Operating System: Linux 2.6.24-27-server i686 Ubuntu
        Current Operating System: Linux harrier 2.6.32-24-generic #42-Ubuntu SMP Fri Aug 20 14:24:04 UTC 2010 i686
        Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-24-generic root=UUID=a34c1931-98d4-4a34-880c-c227a2936c4a ro quiet splash
        Build Date: 21 July 2010 12:47:34PM
        xorg-server 2:1.7.6-2ubuntu7.3 (For technical support please see http://www.ubuntu.com/support)
        Current version of pixman: 0.16.4
            Before reporting problems, check http://wiki.x.org
            to make sure that you have the latest version.
        Markers: (--) probed, (**) from config file, (==) default setting,
            (++) from command line, (!!) notice, (II) informational,
            (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        (==) Log file: "/var/log/Xorg.2.log", Time: Mon Sep 13 10:02:02 2010
        List of video drivers:
            apm ark intel mach64 s3virge trident mga tseng ati nouveau
            neomagic i740 openchrome voodoo s3 i128 radeon siliconmotion
            nv ztv vmware v4l chips rendition savage sisusb tdfx geode
            sis r128 cirrus fbdev vesa
        (++) Using config file: "/home/michael/xorg.conf.new"
        (==) Using config directory: "/usr/lib/X11/xorg.conf.d"
        (II) [KMS] No DRICreatePCIBusID symbol, no kernel modesetting.

        Xorg detected your mouse at device /dev/input/mice.
        Please check your config if the mouse is still not operational,
        as by default Xorg tries to autodetect the protocol.

        Xorg has configured a multihead system, please check your config.

        Your xorg.conf file is /home/michael/xorg.conf.new
        To test the server, run 'X -config /home/michael/xorg.conf.new'

        ddxSigGiveUp: Closing log

        $ sudo X -config /home/michael/xorg.conf.new

        Fatal server error:
        Server is already active for display 0
            If this server is no longer running, remove /tmp/.X0-lock
            and start again.

        Please consult the The X.Org Foundation support at http://wiki.x.org for help.
        ddxSigGiveUp: Closing log

    I then booted Ubuntu in failsafe mode, dropped into a root shell, and executed X -config /home/michael/xorg.conf.new again. The screen went blank and turned off, so
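
    Before fighting the generated xorg.conf any further, it may be worth checking whether the radeon driver will light up both connectors at runtime via RandR; if that works, the layout can be written into xorg.conf afterwards. A sketch, where the output names VGA-0 and DVI-0 are assumptions (read the real names off the xrandr listing first):

        # List the connectors and modes the driver can actually see
        xrandr -q

        # Try extending the desktop across both heads
        xrandr --output VGA-0 --auto --output DVI-0 --auto --right-of VGA-0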

    Read the article

  • Intel Rapid Storage / Smart Response SSD caching issue

    - by goober
    Background: Recently built my own PC. It works! Almost. It's been a while since getting into the guts of these things, so I'm familiar but may be missing something simple. FYI, I don't care about blowing the OS away -- it's brand new and we can go back from scratch as many times as necessary.

    Goal / issue:
    - I'd like to use the SSD to take advantage of Intel's Smart Response technology (which allows the SSD to act as a cache for HDDs).
    - I would like the SSD cache to act as a cache for my HDDs, which I would like to be in a RAID1 array (so I get the speed from the SSD and the redundancy from the RAID1).
    - However, Windows only sees the drive in Device Manager (not as a drive), so I'm unsure what to do about that.
    - Related: as far as I know, for this to work, the drives all have to be in a single RAID array (i.e. a RAID0 pairing of the SSD and the RAID1 HDD array). However, when attempting this at the BIOS level, I am told there is not enough space for an array.

    Steps so far:
    - Moved the SSD onto the Intel controller. (I'd had it on the Marvell 6.0 controller instead of the Intel controller, so the BIOS was only seeing it in a strange way.)
    - Updated the BIOS of the motherboard to the latest version.
    - Reinstalled Intel's RST (iRST?) software several times, as some forums reported it working after reinstalling 3 times (which does not inspire confidence).
    - Checked Intel storage: it does see the SSD as a physical, non-RAID device. However, it says no space exists if I try to create an array.
    - Checked the BIOS: the SSD does not show up in the boot order, but is an option that can be selected under boot options.
    - Tried the firmware update for that model. Issue: the firmware CD doesn't detect a drive; maybe the Intel storage controller is making it difficult?
    - Moved the SSD to the Marvell controller. The firmware update CD appeared to hang while searching for drives.
    - Swapped out the SATA cable for the manufacturer's and moved back to the Intel storage controller. Noticed at this point that in the Intel RST software, a device DOES show up in addition to the RAID set -- but only as a "60 GB internal disk". Windows doesn't appear to see it as a drive, but it does still show in Device Manager.
    - Moved the SSD to a port from 0-3 on the mobo and set SATA mode to IDE (after disconnecting the RAID1 config) to allow the firmware update to work. The firmware was already at the latest version.

    Next steps: ?

    Components involved:
    - ASUS P8Z68-V PRO motherboard (Intel Z68 chipset)
    - Intel i7 2600K processor
    - 2 x 1TB 7200 RPM HDDs
    - 64 GB Crucial M4 SSD (M4-CT064M4SSD2)

    For reference -- storage configuration:

        Intel 3 gbps Intel 3gbps Intel 6gbps Marvel 6gbps +----------+ +----------+ +----------+ +----------+ | | <----+ | | +-+ | | | |----------| | |----------| |-|--------| |----------| | | | | + | | | | | | +----------+ | +--|-------+ +-|--------+ +----------+ | | | + v v | 1 TB HDD 64 GB SSD + +> 1 TB HDD

    For reference -- Intel RST (v10.8.0.1003) screenshot: don't mind the "rebuilding" -- knocked a power cable out at one point; it's doing its job, not an indicator of a bad HDD.

    Any thoughts? Thanks in advance for any help!

    Read the article

  • phpMyAdmin login redirect fails with custom SSL port

    - by baraboom
    The server is running Ubuntu 10.10, Apache 2.2.16, PHP 5.3.3-1ubuntu9.3, phpMyAdmin 3.3.7deb5build0.10.10.1. Since this same server is also running Zimbra on port 443, I've configured Apache to serve SSL on port 81. So far, I have one CMS script running on this virtual host successfully.

    However, when I access /phpmyadmin (set up with the default alias) on my custom SSL port and submit the login form, I am redirected to http://vhost.domain.com:81/index.php?TOKEN=foo (note the http:// instead of the https:// that the login URL was using). This generates an Error 400 Bad Request complaining about "speaking plain HTTP to an SSL-enabled server port." I can then manually change the http:// to https:// in the URL and use phpMyAdmin as expected. I was annoyed enough to spend an hour trying to fix it, and now even more annoyed that I cannot figure it out. I've tried various things, including:

    - Adding $cfg['PmaAbsoluteUri'] = 'https://vhost.domain.com:81/phpmyadmin/'; to the /usr/share/phpmyadmin/config.inc.php file, but this did not correct the problem (even though /usr/share/phpmyadmin/libraries/auth/cookie.auth.lib.php looks like it should honor it and use it as the redirect).
    - Adding $cfg['ForceSSL'] = 1; to the same config.inc.php, but then Apache spirals into an infinite redirect.
    - Adding a rewrite rule to the vhost-ssl conf file in Apache, but I was unable to figure out the condition to use when http:// was present along with the correct SSL port of :81.
    - Lots of googling.

    Here are the relevant Apache configuration pieces.

    /etc/apache2/ports.conf:

        <IfModule mod_ssl.c>
            NameVirtualHost *:81
            Listen 81
        </IfModule>

    /etc/apache2/sites-enabled/vhost-nonssl:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName vhost.domain.com
            DocumentRoot /home/xxx/sites/vhost/html
            RewriteEngine On
            RewriteCond %{HTTPS} off
            RewriteRule (.*) https://%{HTTP_HOST}:81%{REQUEST_URI}
        </Virtualhost>

    /etc/apache2/sites-enabled/vhost-ssl:

        <VirtualHost *:81>
            ServerAdmin webmaster@localhost
            ServerName vhost.domain.com
            DocumentRoot /home/xxx/sites/vhost/html
            <Directory />
                Options FollowSymLinks
                AllowOverride None
                AuthType Basic
                AuthName "Restricted Vhost"
                AuthUserFile /home/xxx/sites/vhost/.users
                Require valid-user
            </Directory>
            <Directory /home/xxx/sites/vhost/html/>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    /etc/apache2/conf.d/phpmyadmin.conf:

        Alias /phpmyadmin /usr/share/phpmyadmin

    (The rest of the default .conf truncated.)

    Everything in the Apache config seems to work OK -- the rewrite from non-SSL to SSL, and the HTTP authentication. The problem only happens when I am submitting the login form for phpMyAdmin from https://vhost.domain.com:81/index.php. Other configs: the phpMyAdmin config is completely default, and php.ini has only had some minor changes to memory and timeout limits. These seem to work fine; as mentioned, another PHP script runs with no problem, and phpMyAdmin works great once I manually enter the correct scheme after login.

    I'm looking for either a band-aid I can add to save me the trouble of manually entering the https:// after login, a real fix that will make phpMyAdmin behave as I think it should, or some greater understanding of why my desired config is not possible.
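
    One band-aid that doesn't touch phpMyAdmin at all: Apache 2.2.4+'s mod_headers can rewrite the scheme in the outgoing Location header, so the bad http:// redirect never reaches the browser. A sketch, assuming mod_headers is enabled, placed inside the *:81 vhost:

        # Rewrite any plain-http redirect issued on the SSL vhost back to https
        Header edit Location ^http://(vhost\.domain\.com:81/.*)$ https://$1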

    Read the article

  • IP Micro-outages, telephone micro-outages, and CATV micro-outages

    - by Michael Graff
    This is a long and complicated question, mostly because it has been going on for 2.5 years without a solution in sight. It is also only one-third computer-related; the other two-thirds are cable TV and cable-phone related.

    Background: I have COX Communications as a cable provider, and we get Internet, digital cable TV, and digital phone service through them. The Internet is an SB5101 right now, and has been a DPC2100 and SB5120 in the past, with the same results. The phone service is provided through a telephone interface mounted on the outside of the house (not classic VoIP), and the CATV is through a Scientific Atlanta receiver without DVR. I do have a TiVo connected to the CATV box.

    Symptoms:
    - The CATV shows "blocking" -- sometimes of very short duration, where a few blocks appear on the screen. Sometimes it lasts long enough that the video "pauses" for 2-5 seconds, and rarely (but not unseen) the audio also fails. The CATV decoder box shows no correctable (FEC) or uncorrectable errors; that is, all BER counters are zero for the video stream.
    - The Internet shows "micro-outages" where it appears that sent packets are not making it out, but I continue to receive packets from local modems. That is, pings stop coming back, but I continue to see modems broadcast for DHCP, and sometimes they ask more than once. The cable modem shows no errors during this time, but cable modems lie like you would not believe: it is actually possible to unplug the coax from the modem for 20 seconds and have it report NO ERRORS to the provider's tools.
    - The phone service cuts out for 1-3 seconds, infrequently. When this happens, I hear NOTHING (not even comfort noise) and the remote side hears a "click" as if I were getting a call-waiting message. However, there is no incoming call, other than the one I'm currently on, of course.

    Things SEEM to happen more frequently when the temperature outside swings from cold to warm, so fall/spring seems worse than summer/winter. All micro-outages occur between once or twice a day (which I could ignore) and 10 times per hour. All SNR, signal levels, noise levels, etc. show very close to optimal when measured.

    COX's diagnosis: This is a continual pain for me. Over the last 2.5 years, they have opened tickets, "fixed" something, and closed the tickets. They close them without confirming that things are indeed better, and when I reopen, they cannot do that; instead they open a new ticket and send yet another low-level tech out to do the same signal tests and report that all is OK. I've finally gotten a line tech who has a clue and is motivated enough to pursue this with me. We have tried things like switching the local nodes over to UPS and generator power, but this does not trigger the noise. We have tried replacing all cabling, the tap outside my house, the modem, the CATV decoder -- all without resolution. Recently they have decided it is my computer or switch, my TiVo, and my phone that are all broken and causing this issue.

    My debugging steps: I spent the worst day of my TV-watching life yesterday and part of today. I watched live TV without the TiVo. I witnessed blocking, but it did "feel different" and was actually more severe. Some days it is better, some days it is worse, so perhaps this was just a very bad day. Today, I connected the TiVo to my DVD player and ran two very long movies through it. I saw no blocking at all during nearly 6 hours of video.

    Suggestions? Does anyone have any suggestions on what to do next? I understand perhaps only the IP side can be addressed here, but it is one of the more limiting debugging options.
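
    Since the provider keeps closing tickets without proof, it can help to gather independent timestamps of the IP-side micro-outages and correlate them later with the CATV blocking and phone dropouts. A minimal sketch that logs every second in which a one-second ping goes unanswered; the target address is a placeholder (the first hop or any stable outside IP works):

        #!/bin/sh
        # Log timestamped ping failures to build outage evidence.
        TARGET=10.0.0.1    # hypothetical: substitute your first hop
        while true; do
            ping -c 1 -W 1 "$TARGET" > /dev/null 2>&1 \
                || echo "$(date '+%F %T') no reply from $TARGET" >> outage.log
            sleep 1
        done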

    Read the article

  • Raid 1 array won't assemble after power outage. How do I fix this ext4 mirror?

    - by Forkrul Assail
    Two ext4 drives in RAID 1 with mdadm won't reassemble after the power went out for an extended period (the UPS drained). After turning the machine back on, mdadm said that the array was degraded, after which it took about 2 days for a full resync, which completed without problems. On trying to remount the array I get:

        mount: you must specify the filesystem type

    The line in /etc/fstab relevant to this setup:

        /dev/md127 /media/mediapool ext4 defaults 0 0

    dmesg | tail (on trying to mount) says:

        [ 1050.818782] EXT3-fs (md127): error: can't find ext3 filesystem on dev md127.
        [ 1050.849214] EXT4-fs (md127): VFS: Can't find ext4 filesystem
        [ 1050.944781] FAT-fs (md127): invalid media value (0x00)
        [ 1050.944782] FAT-fs (md127): Can't find a valid FAT filesystem
        [ 1058.272787] EXT2-fs (md127): error: can't find an ext2 filesystem on dev md127.

    cat /proc/mdstat says:

        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md127 : active (auto-read-only) raid1 sdj[2] sdi[0]
              2930135360 blocks super 1.2 [2/2] [UU]

        unused devices: <none>

    fsck /dev/md127 says:

        fsck from util-linux 2.20.1
        e2fsck 1.42 (29-Nov-2011)
        fsck.ext2: Superblock invalid, trying backup blocks...
        fsck.ext2: Bad magic number in super-block while trying to open /dev/md127

        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>

    mdadm -E /dev/sdi gives me:

        /dev/sdi:
                  Magic : a92b4efc
                Version : 1.2
            Feature Map : 0x0
             Array UUID : 37ac1824:eb8a21f6:bd5afd6d:96da6394
                   Name : sojourn:33
          Creation Time : Sat Nov 10 10:43:52 2012
             Raid Level : raid1
           Raid Devices : 2
         Avail Dev Size : 5860271016 (2794.40 GiB 3000.46 GB)
             Array Size : 2930135360 (2794.39 GiB 3000.46 GB)
          Used Dev Size : 5860270720 (2794.39 GiB 3000.46 GB)
            Data Offset : 262144 sectors
           Super Offset : 8 sectors
                  State : clean
            Device UUID : 3e6e9a4f:6c07ab3d:22d47fce:13cecfd0
            Update Time : Tue Nov 13 20:34:18 2012
               Checksum : f7d10db9 - correct
                 Events : 27
            Device Role : Active device 0
            Array State : AA ('A' == active, '.' == missing)

        boot@boot ~ $ sudo mdadm -E /dev/sdj
        /dev/sdj:
                  Magic : a92b4efc
                Version : 1.2
            Feature Map : 0x0
             Array UUID : 37ac1824:eb8a21f6:bd5afd6d:96da6394
                   Name : sojourn:33
          Creation Time : Sat Nov 10 10:43:52 2012
             Raid Level : raid1
           Raid Devices : 2
         Avail Dev Size : 5860271016 (2794.40 GiB 3000.46 GB)
             Array Size : 2930135360 (2794.39 GiB 3000.46 GB)
          Used Dev Size : 5860270720 (2794.39 GiB 3000.46 GB)
            Data Offset : 262144 sectors
           Super Offset : 8 sectors
                  State : clean
            Device UUID : 7fb84af4:e9295f7b:ede61f27:bec0cb57
            Update Time : Tue Nov 13 20:34:18 2012
               Checksum : b9d17fef - correct
                 Events : 27
            Device Role : Active device 1
            Array State : AA ('A' == active, '.' == missing)

    dmesg | tail also showed this earlier:

        [   61.785866] init: alsa-restore main process (2736) terminated with status 99
        [   68.433548] eth0: no IPv6 routers present
        [  534.142511] EXT4-fs (sdi): ext4_check_descriptors: Block bitmap for group 0 not in group (block 2838187772)!
        [  534.142518] EXT4-fs (sdi): group descriptors corrupted!
        [  546.418780] EXT2-fs (sdi): error: couldn't mount because of unsupported optional features (240)
        [  549.654127] EXT3-fs (sdi): error: couldn't mount because of unsupported optional features (240)

    Since this is RAID 1, it was suggested that I try to mount or fsck the drives separately. After a long fsck on one drive, it ended with this as the tail:

        Illegal double indirect block (2298566437) in inode 39717736.  CLEARED.
        Illegal block #4231180 (2611866932) in inode 39717736.  CLEARED.
        Error storing directory block information (inode=39717736, block=0, num=1092368): Memory allocation failed

        Recreate journal? yes

        Creating journal (32768 blocks):  Done.

        *** journal has been re-created - filesystem is now ext3 again ***

    The drive however still doesn't want to mount; dmesg | tail:

        [  170.674659] md: export_rdev(sdc)
        [  170.675152] md: export_rdev(sdc)
        [  195.275288] md: export_rdev(sdc)
        [  195.275876] md: export_rdev(sdc)
        [ 1338.540092] CE: hpet increased min_delta_ns to 30169 nsec
        [26125.734105] EXT4-fs (sdc): ext4_check_descriptors: Checksum for group 0 failed (43502!=37987)
        [26125.734115] EXT4-fs (sdc): group descriptors corrupted!
        [26182.325371] EXT3-fs (sdc): error: couldn't mount because of unsupported optional features (240)
        [27083.316519] EXT4-fs (sdc): ext4_check_descriptors: Checksum for group 0 failed (43502!=37987)
        [27083.316530] EXT4-fs (sdc): group descriptors corrupted!

    Please help me fix this. I never in my wildest nightmares thought a complete mirror would die this badly. Am I missing something? Suggestions on fixing this? Could someone explain why it would resync after the power outage, only to seemingly nuke the drive?

    Thanks for reading. Any help much appreciated. I've tried everything I can think of, including booting and filesystem-checking with SystemRescue and Ubuntu live-boot discs.
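
    On the alternate-superblock route: the backup superblock locations depend on the filesystem's block size, and mke2fs -n prints where they would be without writing anything (if the filesystem was created with non-default options, pass those same options along with -n, or the printed locations may not line up). A sketch; note it targets the assembled array /dev/md127, never a bare member: fsck'ing /dev/sdc directly ignores the 262144-sector md data offset shown in the --examine output, which is one way a member gets mangled:

        # Dry run: print the block-group/backup-superblock layout, writes nothing
        mke2fs -n /dev/md127

        # Then point e2fsck at one of the printed backup superblocks,
        # e.g. 32768 is typical for a 4 KiB-block filesystem
        e2fsck -b 32768 /dev/md127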

    Read the article

  • Backing up a hosted SharePoint

    - by David Mackintosh
    One of my customers has outsourced their SharePoint and Exchange services to a hosted services provider. I believe it is a SharePoint 2007 service. It is a shared hosting solution, so we do not have any kind of access to the server itself; we only have user-level and sharepoint-administrator-level access to the SharePoint application. They have come to the point where they would like to have a copy of everything that is on the SharePoint server.

    I have downloaded the Office SharePoint Designer 2007, and it features three (!) ways to back up a SharePoint server, none (!) of which work for me:

    - File -> Export -> Personal Web Package: when selecting everything, it calculates a negative size, then barfs with a No "content-type" in CGI environment error.
    - File -> Export -> SharePoint Template: barfs with a A World Wide Web browser, such as Windows Internet Explorer, is required to use this feature error.
    - Site -> Administration -> Backup Web Site: wants to create the backup .cmp file on the SharePoint server itself. I don't have access to any servers on the same network, so I can't redirect it to any form of the suggested \\server\place. Barfs with a The Web application at $URL could not be found. [...] error. Possibly moot, because Google tells me that bad things happen when using OSD to back up sites larger than 24MB (which this site most definitely is).

    So I called the helpdesk of the outsource provider and got told that they recommend using OSD, but no, they don't actually provide any application support for OSD (not that I blame them for that), but they could do an stsadm.exe backup and provide us with that, and OSD should be able to read the resulting cmp file. Then, for authorization reasons, they had my customer call them directly (since I can't authorize such an operation), and they told him that he didn't want an stsadm.exe backup, he wanted to get into an 'explorer view' and deal with things that way (they were vague). Google hasn't been much help in figuring out what an 'explorer view' is, let alone how I bring one up.

    The end goal of this operation is to have a backup of the site as it exists (hopefully today, but shortly anyways) in such a format that we don't need another SharePoint server to restore it to. I.e., we'd like to be able to pick individual content directly out of this backup. We are not excessively concerned with things like formatting; we just want the documents. This is a fairly complex site with multiple subsites and multiple folders per subsite, so sitting there and manually downloading each file isn't really going to happen if there is a better, easier way.

    So, my questions:
    - Is the stsadm.exe backup what I want? If not, what do I want?
    - If I manage to convince them that I do want the stsadm.exe backup, can I pick files out of the resulting backup file with OSD?
    - If OSD isn't going to let me extract individual files, is there a tool I can use that can?
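
    For reference, the two stsadm operations the provider could run look like the sketch below (stsadm runs on the SharePoint server itself, which is why only they can do it); the site URL here is hypothetical. A -o backup file can only be restored to another SharePoint farm, while the 'explorer view' they alluded to is most likely the document library's Explorer View, which exposes a library over WebDAV so files can be copied out with plain Windows Explorer -- often the fastest road to "just the documents":

        :: Full-fidelity site-collection backup (restore needs a SharePoint farm)
        stsadm.exe -o backup -url http://hosted.example.com/sites/customer -filename customer.bak

        :: Content export (also needs SharePoint to re-import)
        stsadm.exe -o export -url http://hosted.example.com/sites/customer -filename customer.cmp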

    Read the article

  • Outlook 2007 Does Not Accept Login Credentials, OWA Webmail Does. Troubleshooting Advice?

    - by Chris
    I am trying to connect Outlook 2007 to Exchange (hosted Exchange from Rackspace). Soon, I will need to roll this out for our entire office.

    With the Exchange account added to Outlook, Outlook starts up and asks for the user's username and password. Unfortunately, it doesn't like the password I use for it. I can confirm this username (email address) and password combo works by using Outlook WebMail, and another user (in another network/office) confirmed the Exchange account does work within his Outlook client. In my network/office, I can confirm that an Outlook 2007 client (under Windows 7) can connect to the hosted Exchange server from Rackspace. However, I have not been able to get Outlook 2007 (under Windows XP SP3) to connect to the very same Exchange server that Outlook 2007 (under Windows 7) can connect to. Outlook continuously prompts me for the username and password and does not accept the correct combination.

    Now, regarding the Outlook client that cannot connect/login to Exchange:

    - The user has full admin rights on the workstation.
    - We do not run a domain controller/LDAP.
    - The firewall on the workstation has been disabled.
    - Real-time file scanning in Microsoft Security Essentials has been disabled.
    - There are no virus-scanning applications that would interface with Outlook or an email server.
    - The Exchange account is set up in a newly created Outlook profile.
    - The network firewall does not log any blocked attempts.
    - A packet capture at the router reveals communication between the workstation and the Exchange server or proxy (though this is SSL-encrypted, so I don't know what the computers are saying).
    - I have applied a fix (added a DWORD value of 0 for DefConnectOpts under HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Outlook\RPC) that was recommended to make RPC function when the workstation does not have a default gateway set. The workstation is configured as DHCP. This fix did nothing, and it may be worth noting the RPC subkey was not present until I added it.
    - The RPC service is running on the workstation.
    - The program is not running under any compatibility mode. Side note: Outlook 2007 installs with compatibility mode for XP enabled by default in Windows 7, and Outlook 2007 will not even try to connect to Exchange if this compatibility mode is checked. In Windows XP, I tried checking compatibility mode for Windows 2000 and was unable to connect to Exchange as well.

    Here is the specific configuration I've used in a blank Outlook profile:

        Microsoft Exchange Server: ##MASKED##-MBX-C18.mex07a.mlsrvr.com
        Username: (full email address: [email protected])
        Password: ##MASKED##
        Outlook Anywhere: Connect to Microsoft Exchange using HTTP
        Exchange Proxy Settings:
            Proxy Server: mex07a.emailsrvr.com
            Check "Connect using SSL only"
            Under "Only connect to proxy servers...", enter: msstd:mex07a.emailsrvr.com
            Check "On fast networks, connect using HTTP first, then connect using TCP/IP"
            Check "On slow networks, connect using HTTP first, then connect using TCP/IP"
        Proxy authentication settings: Basic Authentication

    Notes: mex07a.mlsrvr.com and mex07a.emailsrvr.com may look incorrect at first glance, but this is not a typo -- these instructions were handed down from Rackspace and are confirmed to be working, just not on this workstation. I have tried to use the RpcPing utility, but must have been using it wrong; I got as far as "Bad Interface Descriptor".

    It would seem to me getting Outlook and Exchange to work together would be a breeze, especially since everything is done over port 80 with web services. Unfortunately, the user is stuck with WebMail access only, because Outlook won't accept the Exchange credentials. Do you have any ideas of other things I could try to debug this issue further? Any and all help is greatly appreciated. Thank you! -Chris
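
    One low-level check that takes Outlook out of the equation entirely: verify the XP machine can complete an SSL handshake with the RPC proxy at all, since a certificate-chain or proxy problem affecting just this machine would produce exactly this endless credential prompt. A sketch (openssl is available in Windows builds, or run it from any box on the same LAN); /rpc/rpcproxy.dll is the standard Outlook Anywhere endpoint:

        # Inspect the certificate chain the proxy presents
        openssl s_client -connect mex07a.emailsrvr.com:443

        # Probe the RPC proxy endpoint; an HTTP 401 here is normal (it wants
        # credentials), while a connect failure or bad chain is the real clue
        printf 'GET /rpc/rpcproxy.dll HTTP/1.0\r\n\r\n' | \
            openssl s_client -quiet -connect mex07a.emailsrvr.com:443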

    Read the article

  • How do I change the BIOS boot splash screen?

    - by YumYumYum
    I have a Dell PC which shows a very ugly, bad-luck-looking Alien face on every boot. I want to change it or disable it forever, but the BIOS has no option for this. How can I change it from Linux (Fedora or Arch Linux, which is what the machine runs now)?

    I tried the following (from http://www.pixelbeat.org/docs/bios/), which does not work:

        ./flashrom -r firmware.old    # save current flash ROM just in case
        ./flashrom -wv firmware.new   # write and verify new flash ROM image

    I also tried this program:

        $ cat c.c
        #include <stdio.h>
        #include <inttypes.h>
        #include <netinet/in.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/stat.h>
        #include <fcntl.h>
        #include <unistd.h>

        #define lengthof(x) (sizeof(x)/sizeof(x[0]))

        uint16_t checksum(const uint8_t* data, int len)
        {
            uint16_t sum = 0;
            int i;
            for (i=0; i<len; i++)
                sum+=*(data+i);
            return htons(sum);
        }

        void usage(void)
        {
            fprintf(stderr,"Usage: therm_limit [0,50,53,56,60,63,66,70]\n");
            fprintf(stderr,"Report therm limit of terminal in BIOS\n");
            fprintf(stderr,"If temp specifed, it is changed if required.\n");
            exit(EXIT_FAILURE);
        }

        #define CHKSUM_START 51
        #define CHKSUM_END 109
        #define THERM_OFFSET 67
        #define THERM_SHIFT 0
        #define THERM_MASK (0x7 << THERM_SHIFT)
        #define THERM_OFF 0

        uint8_t thermal_limits[]={0,50,53,56,60,63,66,70};
        #define THERM_MAX (lengthof(thermal_limits)-1)

        #define DEV_NVRAM "/dev/nvram"
        #define NVRAM_MAX 114
        uint8_t nvram[NVRAM_MAX];

        int main(int argc, char* argv[])
        {
            int therm_request = -1;
            if (argc>2)
                usage();
            if (argc==2) {
                if (*argv[1]=='-')
                    usage();
                therm_request=atoi(argv[1]);
                int i;
                for (i=0; i<lengthof(thermal_limits); i++)
                    if (thermal_limits[i]==therm_request)
                        break;
                if (i==lengthof(thermal_limits))
                    usage();
                else
                    therm_request=i;
            }

            int fd_nvram=open(DEV_NVRAM, O_RDWR);
            if (fd_nvram < 0) {
                fprintf(stderr,"Error opening %s [%m]\n", DEV_NVRAM);
                exit(EXIT_FAILURE);
            }
            if (read(fd_nvram, nvram, sizeof(nvram))==-1) {
                fprintf(stderr,"Error reading %s [%m]\n", DEV_NVRAM);
                close(fd_nvram);
                exit(EXIT_FAILURE);
            }

            uint16_t chksum = *(uint16_t*)(nvram+CHKSUM_END);
            printf("%04X\n",chksum);
            exit(0);  /* NOTE: this early exit makes everything below
                         unreachable -- likely leftover debugging code */

            if (chksum == checksum(nvram+CHKSUM_START, CHKSUM_END-CHKSUM_START)) {
                uint8_t therm_byte = *(uint16_t*)(nvram+THERM_OFFSET);
                uint8_t therm_status=(therm_byte & THERM_MASK) >> THERM_SHIFT;
                printf("Current thermal limit: %d°C\n", thermal_limits[therm_status]);
                if ( (therm_status == therm_request) )
                    therm_request=-1;
                if (therm_request != -1) {
                    if (therm_status != therm_request)
                        printf("Setting thermal limit to %d°C\n", thermal_limits[therm_request]);
                    uint8_t new_therm_byte = (therm_byte & ~THERM_MASK) | (therm_request << THERM_SHIFT);
                    *(uint8_t*)(nvram+THERM_OFFSET) = new_therm_byte;
                    *(uint16_t*)(nvram+CHKSUM_END) = checksum(nvram+CHKSUM_START, CHKSUM_END-CHKSUM_START);
                    (void) lseek(fd_nvram,0,SEEK_SET);
                    if (write(fd_nvram, nvram, sizeof(nvram))!=sizeof(nvram)) {
                        fprintf(stderr,"Error writing %s [%m]\n", DEV_NVRAM);
                        close(fd_nvram);
                        exit(EXIT_FAILURE);
                    }
                }
            } else {
                fprintf(stderr,"checksum failed. Aborting\n");
                close(fd_nvram);
                exit(EXIT_FAILURE);
            }
            return EXIT_SUCCESS;
        }

        $ gcc c.c -o bios
        # ./bios
        16DB

    Read the article

  • UDP through NAT

    - by youllknow
    Hi everyone! I have two private networks, each of them behind a typical DSL router. The routers are connected to the WWW, and the external interface of each router has one dynamic IP address. I want to stream data via UDP directly between one client in private network A and one client in private network B. I've already tried a lot of things (see: http://en.wikipedia.org/wiki/UDP_hole_punching, or STUN), but it wasn't possible for me to transfer data between the two clients.

    It is possible to use a server (located in the WWW, with a static IP) to exchange the external IPs (and external ports) of the routers between the clients. So imagine client A knows client B's external IP and the external port assigned by B's router. I simply tried sending UDP packets to the receiver's external IP/port combination, but without any result. So does anyone know what to do to communicate via UDP through the two NAT routers? It must be possible??? Skype, for example, communicates directly between the clients when they call each other (voice over IP), doesn't it?

    I am sorry for my bad English! If something is confusing, don't mind asking me! Thanks for your help in advance.

    EDIT: I can't get pwnat or chownat working. I tried it with my own DSL gateway -- didn't work. Then I set up a complete virtual environment using VMware:

        C1 (client 1, WinXP Prof SP3): 172.16.16.100/24, GW 172.16.16.1
        C2 (client 2, WinXP Prof SP3): 10.0.0.100/24, GW 10.0.0.1
        C3 (client 3, WinXP Prof SP3): 3.0.0.2/24, GW 3.0.0.1
        S1 (Ubuntu 10.04 x64 server): eth0: 172.16.16.1/24, eth1: 1.0.0.2/24, GW 1.0.0.1
        S2 (Ubuntu 10.04 x64 server): eth0: 10.0.0.1/24, eth1: 2.0.0.2/24, GW 2.0.0.1
        S3 (Ubuntu 10.04 x64 server): eth0: 1.0.0.1/24, eth1: 2.0.0.1/24, eth2: 3.0.0.1/24

        +--+     +--+     +--+     +--+     +--+
        |C1|-----|S1|-----|S3|-----|S2|-----|C2|
        +--+     +--+     +--+     +--+     +--+
                            |
                          +--+
                          |C3|
                          +--+

    Servers S1 and S2 provide the NAT functionality: they have routing enabled and a firewall which allows traffic from the internal net and provides the NAT. Server S3 just has routing enabled. The client firewalls are turned off. C1 and C2 are able to ping C3 and visit C3's webserver, and they are also able to send UDP packets to C3 (C3 successfully receives them). C1 and C2 also have webservers running, for test reasons.

    I ran "chownat -s 80 2.0.0.2" on C1, and "chownat -c 8000 1.0.0.2" on C2. Then I tried to access the webpage from C1 via web browser at localhost, port 8000. It didn't work. Can anybody help me? Any suggestions? If you have any questions about my question, please ask!
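
    For a quick sanity check of the hole-punching idea itself, both clients can send UDP from a fixed source port at roughly the same moment; with full-cone or address-restricted NATs this opens a mapping on each side that the other's packet can then traverse (it fails on symmetric NATs, which pick a fresh source port per destination, which is one reason real-world attempts often fail where Skype's relay fallback still works). A rough sketch with netcat -- flag syntax varies between netcat variants, and the addresses below are the lab externals from the diagram:

        # On C1 (behind S1), toward C2's external address:
        echo hello | nc -u -p 4000 2.0.0.2 4000

        # On C2 (behind S2), at roughly the same moment, toward C1's external:
        echo hello | nc -u -p 4000 1.0.0.2 4000

        # Once one direction gets through, keep a listener bound to that port:
        nc -u -l -p 4000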

    Read the article

  • Server Cabinet/Room Cooling

    - by user37226
    Hello all. I currently have two desktops and three servers in my office sitting on the floor (I know this is bad). With that many servers, the ambient temperature in the room goes up quickly. I am located in Dallas, TX, so during the winter, if the heat is kept low, it is not a problem, but during the summer it easily jumps the room +10 degrees.

    I have decided on, and found for free, a 42U server cabinet that a hosting company was throwing away, to house all of these systems. One server is in a rack-mount case while the other four machines are housed in mid-tower cases. I have purchased shelves for each computer and plan to lay the towers sideways on these shelves (as replacing the cases costs a heck of a lot of money). I like the idea of housing all of these systems in the cabinet because it will save a lot of room and clean up all of the cabling currently lying all over the office floor. When putting this setup together over the next couple of weeks, I want to address issues with dust and cooling. The server cabinet has a fan on top, a front plexiglass door, and a rear metal door with vent holes on the bottom.

    First, the cooling issues. I know I am going to want to have cool air enter the bottom of the cabinet and exit the top. I do not want the room heating up, though, as this will make my work area hot and then make the servers warmer as the air eventually re-enters the cabinet. I had an idea to fix this problem, but am unsure if it will work. I was thinking of taking flexible piping and adapting it to the back fans of the computers, with the other end of the pipe at the top, close to the cabinet's top-mounted fan. I was then thinking of creating a duct around the top fan into the attic. Now, I am very concerned that the attic will cause issues with this type of setup, because during the July/August time frame the attic is easily 120 degrees F. I could also use the flexible pipe to take it to an attic exhaust vent, if it would be better to vent it into the 100-degree air outside (at least there may be wind). The other option would be to buy a small portable air conditioner. This may be a possibility, but do I want to spend the extra money on power? I bet this increases the noise, too. Plus, they are around $250 on Amazon. What would you all recommend?

    Depending on the solution I end up running with above, I would also like to limit the dust that gets into the cabinet. If I were to cut a hole and mount a second cabinet fan on the bottom of the rear door, could I mount a standard home air filter on the other side of that hole?

    Thanks in advance for your recommendations. I look forward to reading your interesting ideas.

    Read the article

  • CPU Utilization LAMP stack

    - by Max
    We've got an ec2 m2.4xlarge running Magento (centos 5.6, httpd 2.2, php 5.2.17 with eaccelerator 0.9.5.3, mysql 5.1.52). Right now we're getting a large traffic spike, and our top looks like this:

        top - 09:41:29 up 31 days, 1:12, 1 user, load average: 120.01, 129.03, 113.23
        Tasks: 1190 total, 18 running, 1172 sleeping, 0 stopped, 0 zombie
        Cpu(s): 97.3%us, 1.8%sy, 0.0%ni, 0.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.4%st
        Mem: 71687720k total, 36898928k used, 34788792k free, 49692k buffers
        Swap: 880737784k total, 0k used, 880737784k free, 1586524k cached

          PID USER   PR NI  VIRT  RES  SHR S  %CPU %MEM    TIME+ COMMAND
         2433 mysql  15  0 23.6g 4.5g 7112 S 564.7  6.6 33607:34 mysqld
        24046 apache 16  0  411m  65m  28m S  26.4  0.1  0:09.05 httpd
        24360 apache 15  0  410m  60m  25m S  26.4  0.1  0:03.65 httpd
        24993 apache 16  0  410m  57m  21m S  26.1  0.1  0:01.41 httpd
        24838 apache 16  0  428m  74m  20m S  24.8  0.1  0:02.37 httpd
        24359 apache 16  0  411m  62m  26m R  22.3  0.1  0:08.12 httpd
        23850 apache 15  0  411m  64m  27m S  16.8  0.1  0:14.54 httpd
        25229 apache 16  0  404m  46m  17m R  10.2  0.1  0:00.71 httpd
        14594 apache 15  0  404m  63m  34m S   8.4  0.1  1:10.26 httpd
        24955 apache 16  0  404m  50m  21m R   8.4  0.1  0:01.66 httpd
        24313 apache 16  0  399m  46m  22m R   8.1  0.1  0:02.30 httpd
        25119 apache 16  0  411m  59m  23m S   6.8  0.1  0:01.45 httpd

    Questions: Would giving mysqld more memory help it cache queries and react faster? If so, how? Other than splitting mysql and php to separate servers (which we're about to do), is there anything else we could/should be doing? Thanks!

    UPDATE: Here's our my.cnf along with the output of mysqltuner. It looks like a cache problem. Thanks again!

        # cat /etc/my.cnf
        [client]
        port = ****
        socket = /var/lib/mysql/mysql.sock
        [mysqld]
        datadir=/mnt/persistent/mysql
        port=****
        socket=/var/lib/mysql/mysql.sock
        key_buffer = 512M
        max_allowed_packet = 64M
        table_cache = 1024
        sort_buffer_size = 8M
        read_buffer_size = 4M
        read_rnd_buffer_size = 2M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 128M
        tmp_table_size = 128M
        join_buffer_size = 1M
        query_cache_limit = 2M
        query_cache_size= 64M
        query_cache_type = 1
        max_connections = 1000
        thread_stack = 128K
        thread_concurrency = 48
        log-bin=mysql-bin
        server-id = 1
        wait_timeout = 300
        innodb_data_home_dir = /mnt/persistent/mysql/
        innodb_data_file_path = ibdata1:10M:autoextend
        innodb_buffer_pool_size = 20G
        innodb_additional_mem_pool_size = 20M
        innodb_log_file_size = 64M
        innodb_log_buffer_size = 8M
        innodb_flush_log_at_trx_commit = 1
        innodb_lock_wait_timeout = 50
        innodb_thread_concurrency = 48
        ft_min_word_len=3
        [myisamchk]
        ft_min_word_len=3
        key_buffer = 128M
        sort_buffer_size = 128M
        read_buffer = 2M
        write_buffer = 2M

        # ./mysqltuner.pl
        >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.52-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB +Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 2G (Tables: 26)
        [--] Data in InnoDB tables: 749M (Tables: 250)
        [!!] Total fragmented tables: 262
        -------- Security Recommendations -------------------------------------------
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 31d 2h 30m 38s (680M q [253.371 qps], 2M conn, TX: 4825B, RX: 236B)
        [--] Reads / Writes: 89% / 11%
        [--] Total buffers: 20.6G global + 15.1M per thread (1000 max threads)
        [OK] Maximum possible memory usage: 35.4G (51% of installed RAM)
        [OK] Slow queries: 0% (35K/680M)
        [OK] Highest usage of available connections: 53% (537/1000)
        [OK] Key buffer size / total MyISAM indexes: 512.0M/457.2M
        [OK] Key buffer hit rate: 100.0% (9B cached / 264K reads)
        [OK] Query cache efficiency: 42.3% (260M cached / 615M selects)
        [!!] Query cache prunes per day: 4384652
        [OK] Sorts requiring temporary tables: 0% (1K temp sorts / 38M sorts)
        [!!] Joins performed without indexes: 100404
        [OK] Temporary tables created on disk: 17% (7M on disk / 45M total)
        [OK] Thread cache hit rate: 99% (537 created / 2M connections)
        [!!] Table cache hit rate: 0% (1K open / 946K opened)
        [OK] Open file limit used: 9% (453/5K)
        [OK] Table locks acquired immediately: 99% (758M immediate / 758M locks)
        [OK] InnoDB data size / buffer pool: 749.3M/20.0G
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Enable the slow query log to troubleshoot bad queries
            Adjust your join queries to always utilize indexes
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            query_cache_size (> 64M)
            join_buffer_size (> 1.0M, or always use indexes with joins)
            table_cache (> 1024)
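
    A hedged sketch of the my.cnf adjustments that tuner output is pointing at (the values are illustrative starting points, not a definitive tuning; raise them gradually and watch free memory and the open-file limit):

        # possible my.cnf changes suggested by the mysqltuner output above
        query_cache_size = 128M   # ~4.3M prunes/day says the 64M cache is thrashing
        join_buffer_size = 2M     # stopgap; the real fix is indexing the 100k index-less joins
        table_cache      = 4096   # 0% table cache hit rate; raise in steps, mind file descriptors

    Running OPTIMIZE TABLE on the 262 fragmented tables during a quiet window and enabling the slow query log would also follow directly from the tuner's own recommendations.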

    Read the article

  • FTP Upload works from local command line / remote GUI client but not from PHP script

    - by MrOodles
    I originally posted this question at StackOverflow, but I'm beginning to think it's more of a server question. I have installed ProFTPd on an EC2 instance running Ubuntu 10.10. I have managed my proftpd.conf file as well as my server permissions to be able to connect and upload/move files using FTP, both remotely using Filezilla and on the server itself when connecting to 127.0.0.1. The problem I'm running into is when I try to upload/install a file using Joomla's interface. I give Joomla the same login information that I give to Filezilla, and the connection is made in the same fashion. The ftp.log file actually shows that Joomla is able to log in to the server:

        localhost UNKNOWN nobody [17/Jan/2011:14:09:17 +0000] "USER ftpuser" 331 -
        localhost UNKNOWN ftpuser [17/Jan/2011:14:09:17 +0000] "PASS (hidden)" 230 -
        localhost UNKNOWN ftpuser [17/Jan/2011:14:09:17 +0000] "PASV" 227 -
        localhost UNKNOWN ftpuser [17/Jan/2011:14:09:17 +0000] "TYPE I" 200 -
        localhost UNKNOWN ftpuser [17/Jan/2011:14:09:17 +0000] "STOR /directory/store/location/file.zip" 550 -

    But it fails when attempting the STOR command. I have traced the problem in the Joomla code to the PHP FTP module. The code (with my trace statements added):

        if (@ftp_put($this->_conn, $remote, $local, $mode) === false) {
            echo "\n FTP PUT failed.";
            echo "\n Remote: $remote ; Local: $local ; Mode: $mode - Either ASCII: ".FTP_ASCII." or Binary: ".FTP_BINARY;
            echo "\n The user: ".exec("whoami");
            JError::raiseWarning('35', 'JFTP::store: Bad response' );
            return false;
        }

    Trace outputs:

        FTP PUT failed.
        Remote: /directory/store/location/file.zip ; Local: /tmp/phpwuccp4 ; Mode: 2 - Either ASCII: 1 or Binary: 2
        The user: www-data

    And in case you were curious, here is an example of the FTP log when using Filezilla:

        my_client_ip UNKNOWN nobody [17/Jan/2011:16:45:55 +0000] "USER ftpuser" 331 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "PASS (hidden)" 230 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "OPTS UTF8 ON" - -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "PWD" 257 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "TYPE I" 200 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "PASV" 227 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "MLSD" 226 3405
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:06 +0000] "CWD location" 250 3405
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:06 +0000] "PWD" 257 3405
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:06 +0000] "PASV" 227 3405
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:07 +0000] "MLSD" 226 3757
        my_client_ip UNKNOWN nobody [17/Jan/2011:16:46:37 +0000] "USER ftpuser" 331 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "PASS (hidden)" 230 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "OPTS UTF8 ON" - -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "CWD /location" 250 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "PWD" 257 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "TYPE I" 200 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "PASV" 227 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:39 +0000] "STOR file.zip" 226 125317
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:39 +0000] "PASV" 227 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:39 +0000] "MLSD" 226 497
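
    Worth noting: Filezilla's successful STOR uses a relative path after a CWD, while Joomla sends an absolute "/directory/store/location/file.zip", which ProFTPd resolves against ftpuser's FTP root (DefaultRoot), not the filesystem root. One way to narrow this down is to replay exactly what Joomla does, outside Joomla, as the same www-data user. A minimal sketch follows (credentials and paths are placeholders); if this also returns 550 on STOR, the problem is on the ProFTPd side rather than in Joomla:

        <?php
        // Standalone repro of Joomla's FTP store step (hypothetical credentials/paths).
        $conn = ftp_connect('127.0.0.1', 21) or die("connect failed\n");
        ftp_login($conn, 'ftpuser', 'secret') or die("login failed\n");
        ftp_pasv($conn, true);  // Joomla's log shows PASV, so match it

        $local = tempnam(sys_get_temp_dir(), 'ftp');  // mimics Joomla's /tmp/phpXXXXXX file
        file_put_contents($local, 'test payload');

        // The absolute path is resolved against ftpuser's FTP root (DefaultRoot),
        // which is not necessarily the filesystem path Joomla has configured.
        if (!ftp_put($conn, '/directory/store/location/file.zip', $local, FTP_BINARY)) {
            echo "STOR failed: check ftpuser's DefaultRoot and write permission on the target dir\n";
        }
        ftp_close($conn);

    Running it as the web user (e.g. sudo -u www-data php repro.php) keeps the comparison honest.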

    Read the article

  • nginx : backend https, proxy_pass shows ip

    - by Vulpo
    I am using nginx as a reverse proxy listening at port 80 (http). I am using proxy_pass to forward requests to backend http and https servers. Everything works fine for my http server, but when I try to reach the https server through the nginx reverse proxy, the IP of the https server is shown in the client's web browser. I want the URI of the nginx server to be shown instead of the https backend server's IP (once again, this works fine with the http server but not for the https server). See this post on the forum. Here is my configuration file:

        server {
            listen 80;
            server_name domain1.com;
            access_log off;
            root /var/www;
            if ($request_method !~ ^(GET|HEAD|POST)$ ) {
                return 444;
            }
            location / {
                proxy_pass http://ipOfHttpServer:port/;
            }
        }

        server {
            listen 80;
            server_name domain2.com;
            access_log off;
            root /var/www;
            if ($request_method !~ ^(GET|HEAD|POST)$ ) {
                return 444;
            }
            location / {
                proxy_pass http://ipOfHttpsServer:port/;
                proxy_set_header X_FORWARDED_PROTO https;
                #proxy_set_header Host $http_host;
            }
        }

    When I try the "proxy_set_header Host $http_host" directive and "proxy_set_header Host $host", the web page can't be reached (page not found). But when I comment it out, the IP of the https server is shown in the browser (which is bad). Does anyone have an idea? My other config files are:

        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #proxy_hide_header X-Powered-By;
        proxy_intercept_errors on;
        proxy_buffering on;
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m;

        user www-data;
        worker_processes 2;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            access_log /var/log/nginx/access.log;
            server_names_hash_bucket_size 64;
            sendfile off;
            tcp_nopush on;
            #keepalive_timeout 0;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            gzip_comp_level 5;
            gzip_http_version 1.0;
            gzip_min_length 0;
            gzip_types text/plain text/html text/css image/x-icon application/x-javascript;
            gzip_vary on;
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    Thanks for your help!
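
    What usually makes the backend address appear in the browser is a redirect (a Location: header) issued by the backend itself, which nginx passes through untouched once proxy_redirect is off. A hedged sketch of the second server block, reusing the names from the config above; the proxy_redirect line maps the backend's self-referencing redirects back onto the proxy's own name:

        location / {
            proxy_pass https://ipOfHttpsServer:port/;   # https:// here if the backend only speaks TLS
            proxy_set_header X_FORWARDED_PROTO https;
            # Rewrite Location:/Refresh: headers from the backend onto our public name:
            proxy_redirect https://ipOfHttpsServer:port/ http://domain2.com/;
        }

    The "page not found" when Host is set suggests the backend's virtual host simply does not answer to domain2.com; if so, set Host to whatever name the backend expects rather than $host.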

    Read the article

  • How to delete files and folders that cannot be deleted?

    - by glenneroo
    I have a backup copy of a previous Windows Documents and Settings folder which only contains my original user and, within it, 2 more directories: Favorites and Local Settings. When I try to delete Local Settings I get this error: When I try to delete Favorites, I get this error: I ran this in a cmd shell:

        attrib *.* -r -a -s -h /s

    ...but it did not help, nor did it return any errors/warnings. I used Unlocker v1.8.5 and LockHunter repeatedly at multiple levels to see if any files are in use, but both always say: No Files Locked.

    Update #1: I was able to rename the directory, which now gives me this warning before (trying to) delete: If I press Yes (or Yes to All) then I get this error:

    Update #2: I let chkdsk /f run, which required a reboot since it's on my primary system partition. During Stage 2 scanning, I received about 40 of these:

        Deleting an index entry from index $0 of file 25.

    ...followed by:

        Deleting index entry cookies in index $I30 of file 37576.

    ...but I still get the first error dialog above when trying to delete. I ran chkdsk again, this time chkdsk /f /r. It produced no messages. Same result when deleting.

    Update #3: Digging deeper, the 99 is the name of one of many directories located deep in here:

        C:\Documents and Settings.OLD\User\Local Settings\Application Data\Microsoft\Messenger\[email protected]\SharingMetadata\[email protected]\DFSR\Staging\CS{D4E4AE55-B5E2-F03B-5189-6C4DA6E41788}\

    Inside each of those directories were files with names such as:

        2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-Downloaded.frx

    I noticed that, unlike all the directories, I couldn't rename any of these files. I also noticed that the file + dir names were extremely long: the original directory is 194 characters and the filenames are 100+ characters, so together the length exceeds the 255-char limit, which is bad and would explain the error message I posted in Update #1.

    Partial solution: Rename all directories until the total path length is less than 100. Afterwards I was able to rename the .frx files, not to mention delete everything inside the Local Settings directory. This is only a partial solution because these (empty) directories are still not deletable:

        C:\1\2\Favorites\Wien\What To Do..
        C:\1\2\Favorites\Photography\FIRE

    Same error as above. Here is what Explorer properties shows for both folders:

    Update #4 (another partial solution): Using harrymc's answer combined with thoroughly reading through this amazing MS-KB article which contains nearly everyone's idea and then some, inconspicuously titled "You cannot delete a file or a folder on an NTFS file system volume", I was able to delete the second folder, C:\1\2\Favorites\Photography\FIRE - the problem being that there was an invisible trailing space at the end. I got lucky when I did an auto-complete whilst playing around with the del "\\?\<path>" command which he suggested. NOTE: a normal del did NOT work, nor did deleting from Explorer. Now all that is left is the first directory, C:\1\2\Favorites\Wien\What To Do.. (yes, I tried endlessly with multiple combinations of the above solution). Keep 'em coming! =)
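
    For what it's worth, the same trick that removed the FIRE folder should apply to the remaining one: the "\\?\" prefix bypasses Win32 path normalization, so trailing dots and spaces stop being stripped from the name. A sketch in cmd (the exact trailing characters are invisible, so they have to be captured by tab-completing the path rather than typing it):

        rem "\\?\" disables name normalization; trailing dots/spaces become addressable
        rd /s /q "\\?\C:\1\2\Favorites\Wien\What To Do.. "

    The trailing space inside the quotes above is illustrative; tab-completion will fill in whatever invisible characters are really there.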

    Read the article

  • Wear and tear on server hard drive from filesystem polling by PHP script

    - by jackie
    So I'm working on a discussion platform, and various clients will visit http://host/thread.php, which will render the discussion thread to date in addition to a form to submit a new post. When a new post is submitted, I would like it to appear in near-real-time for all of the other clients with browser windows open. One of the constraints of my script is that it may not use a DBMS and must stay in the filesystem. Additionally, I can't use any PECL/PEAR extensions like inotify or anything like that for IPC. The flow will look like this:

    1. Client A requests thread.php; the thread is so far empty, but it nonetheless opens a Server-Side Event stream at eventPusher.php.
    2. Client B does the same.
    3. Client A fills out a post in the form and submits (POSTs) it to subHandler.php.
    4. ??? (subHandler stores the new submission into the main thread store file, which gets read when a fresh, new client requests thread.php, in addition to somehow signalling to the continually-running eventPusher event source that a new comment was posted and that it should echo the event JSON to the client. How, exactly, it will send this signal I'm as yet unsure of, but there are a few options that I've thought of; this is the crux of the question, see below for more clarification.)
    5. eventPusher.php happily pushes the new event to the client, and it shows up soon after it was originally submitted on the screens of all clients who have the page open.

    Now for the #4 missing-link mystery step, I see a few problems. Either way, eventPusher is going to be doing a while loop of some sort; it's going to be polling something, I think that much is clear. (If that's a bad assumption, please do let me know.)

    * Idea #1: The simplest way would be for subHandler to get invoked on the form submission, write the post to the main store in addition to newComments.xml, then exit without doing anything else. eventPusher then checks newComments.xml every X seconds (by the way, what would be a reasonable time interval here?) and, if it finds something, emits an event to the client. Now, my fear with this is that the server's hard drive will have to constantly spin up. Maybe this isn't the case; perhaps it would just get cached in RAM and the Linux kernel would take care of this transparently, such that filesystem access doesn't actually engage the device, because the kernel knows that that particular file hasn't changed since the last read.

    * Idea #2: I have no idea how to go about this, but perhaps there is a variable scope that gets stored in general RAM on the system and can be read by any process. Like if we mega-exported a bash variable so that $new_post is normally false but gets toggled to true by subHandler, and then back to false once it's pushed to the client. I doubt there's such a variable scope in PHP directly, but I struggle with the concept of variable scope; I just can't seem to understand it no matter what I read on it.

    * Idea #3: eventPusher queries ps in its while loop for another instance of itself. If there's no other eventPusher active, then it's highly unlikely that new comments will be getting submitted. It's okay if this only works 90% of the time; it doesn't need to be completely foolproof.

    * Idea #4: eventPusher queries dmesg to see if that file's been written to recently.

    So to sum everything up, I need inter-PHP-script communication in near-real-time that will work on a standard mod_php shared hosting setup without any elevated privileges, PHP add-on modules, or other system adjustments that can't be done from the PHP script itself at runtime. Without spinning up the drive more than a few times. No SQL servers either. Apologies if my English isn't the best; I'm still trying to improve it.
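
    A minimal sketch of Idea #1's eventPusher loop, with the store-file path and poll interval as placeholders. Polling filemtime() on a small, frequently read file should be answered from the kernel's caches rather than the platter, which addresses the wear concern; something between 0.5 and 2 seconds is a common compromise between latency and CPU:

        <?php
        // eventPusher.php: Server-Sent Events by polling the store file's mtime.
        set_time_limit(0);                       // shared hosts may still enforce a cap
        header('Content-Type: text/event-stream');
        header('Cache-Control: no-cache');

        $store = __DIR__ . '/newComments.xml';   // hypothetical path
        $last  = @filemtime($store);

        while (true) {
            clearstatcache(true, $store);        // PHP caches stat() results per file
            $now = @filemtime($store);
            if ($now !== false && $now !== $last) {
                $last = $now;
                echo 'data: ' . json_encode(array('changed' => $now)) . "\n\n";
                @ob_flush();
                flush();                         // push the event to the client now
            }
            if (connection_aborted()) {
                exit;                            // browser window was closed
            }
            usleep(500000);                      // poll every 0.5 s
        }

    The clearstatcache() call matters: without it, PHP would keep returning the first mtime it saw and the loop would never fire.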

    Read the article

  • e2fsck / resize2fs problems

    - by BlakBat
    I've got 6 drives (each 1.5T, all the same model and firmware revision) that are part of a RAID5 array. The RAID5 makes up an LVM volume group and a logical volume. The latter contains only one ext3 partition. I recently ran:

        e2fsck -f /dev/vg03/lv01 && resize2fs -M /dev/vg03/lv01

    which exited without an error. Now when I try to mount /dev/vg03/lv01 I get:

        EXT3-fs error (device dm-0): ext3_check_descriptors: Block bitmap for group 30533 not in group (block 1000532368)!
        EXT3-fs: group descriptors corrupted!

    How do I get out of this predicament? This is all the info I can currently give you: fdisk -l /dev/sd[cdefgh] shows (correctly) that they are "Linux raid autodetect", but fdisk now shows:

        fdisk -l /dev/md0
        Disk /dev/md0: 7501.5 GB, 7501495664640 bytes
        ...
        Disk identifier: 0x00000000
        Disk /dev/md0 doesn't contain a valid partition table

    (instead of an LVM type partition)

        fdisk -l /dev/vg03/lv01
        Disk /dev/vg03/lv01: 7501.5 GB, 7501491732480 bytes
        ...
        Disk identifier: 0x00000000
        Disk /dev/vg03/lv01 doesn't contain a valid partition table

    (instead of an ext3 type partition)

    I've tried:

        e2fsck -fy /dev/vg03/lv01
        e2fsck 1.41.12 (17-May-2010)
        e2fsck: Group descriptors look bad... trying backup blocks...
        Block bitmap for group 30533 is not in group. (block 1000532368)
        Relocate? yes
        Inode bitmap for group 30533 is not in group. (block 1000532369)
        Relocate? yes
        Pass 1: Checking inodes, blocks, and sizes
        Relocating group 30533's block bitmap to 1000524246...
        Error allocating 1 contiguous block(s) in block group 30533 for inode bitmap: Could not allocate block in ext2 filesystem
        e2fsck: aborted

    Extra information I can give you:

        cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active (auto-read-only) raid5 sdg1[0] sdh1[5] sdf1[4] sde1[3] sdc1[2] sdd1[1]
              7325679360 blocks level 5, 128k chunk, algorithm 2 [6/6] [UUUUUU]
              bitmap: 1/175 pages [4KB], 4096KB chunk
        unused devices:

    Lastly, all smartctl tests (short and extended) showed no errors on any of the disks. Should I try resize2fs to grow /dev/vg03/lv01 and redo the e2fsck? Should I cfdisk /dev/md0 and /dev/vg03/lv01 back to their real types? Thanks in advance for any and all help.

    2011-09-20 UPDATE: I issued the following commands and was able to remount the partition, but by comparing the size (df) before and after, it seems that 1 TB of data has gone missing. By checking the MD5SUMs (from an old backup) of some files against the "same" files from the remounted partition, some errors have been detected. The commands issued to remount the partition were:

        dumpe2fs /dev/vg03/lv01
            Block count: 1000491435
            Block size: 4096
        tune2fs -O ^has_journal /dev/vg03/lv01
        resize2fs -p /dev/vg03/lv01
        dumpe2fs /dev/vg03/lv01
            Block count: 1831418880
            Block size: 4096
        mount -o ro,noatime /dev/vg03/lv01 /mnt/raid

    OK... but files have been damaged / gone missing.
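
    Before experimenting further (growing, repeated fsck passes), it is usually worth freezing the current state so every recovery attempt is repeatable. A sketch, assuming somewhere with roughly 7 TB free exists to hold the image (in practice that may mean a second array or an LVM snapshot instead of a flat file):

        # image the logical volume read-only before any further repair attempts
        dd if=/dev/vg03/lv01 of=/mnt/backup/lv01.img bs=64M conv=noerror,sync

        # list the backup superblock locations without writing anything (-n = dry run);
        # only valid if the defaults match the original mkfs parameters
        mke2fs -n /dev/vg03/lv01

        # then try e2fsck against a backup superblock on the IMAGE, not the volume
        e2fsck -b 32768 -B 4096 /mnt/backup/lv01.img

    This way a destructive e2fsck pass costs a re-copy of the image rather than the last intact copy of the data.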

    Read the article

  • What would cause Memcached to Hang for 2+ seconds?

    - by Brad Dwyer
    I'm going nuts trying to scale memcached. From their site:

        Memcached operations are almost all O(1). Connecting to it and issuing a get or stat command should never lag. If connecting lags, you may be hitting the max connections limit. See ServerMaint for details on stats to monitor. If issuing commands lags, you can have a number of tuning problems. Most common are hardware problems, not enough RAM (swapping), network problems (bandwidth, dropped packets, half-duplex connections). On rare occasion OS bugs or memcached bugs can contribute.

    Well... it is most certainly not performing like an O(1) operation for me. Under low to normal load on our site, memcached response times for get and set ops are about 0.001 seconds. Not bad. But if we triple the load, we get outliers that take 100x (or in rare cases 1000x!) that long. I even had one instance where it took 2.2442 seconds for memcached to store a value. Obviously this is killing our site. Here's the output of Memcached::getStats during one of the slow periods:

        [pid] => 18079
        [uptime] => 8903
        [threads] => 4
        [time] => 1332795759
        [pointer_size] => 32
        [rusage_user_seconds] => 26
        [rusage_user_microseconds] => 503872
        [rusage_system_seconds] => 125
        [rusage_system_microseconds] => 477008
        [curr_items] => 42099
        [total_items] => 422500
        [limit_maxbytes] => 943718400
        [curr_connections] => 84
        [total_connections] => 4946
        [connection_structures] => 178
        [bytes] => 7259957
        [cmd_get] => 1679091
        [cmd_set] => 351809
        [get_hits] => 1662048
        [get_misses] => 17043
        [evictions] => 0
        [bytes_read] => 109388476
        [bytes_written] => 3187646458
        [version] => 1.4.13

    So things that I have ruled out so far are:

    - Hitting the max connections limit (curr_connections of 84 is well below the default max of 1024)
    - Swapping: the machine has 900M out of 1024M of memory dedicated to memcached on a dedicated machine. It only appears to be using about 7MB of data, as per the bytes stat.

    How would I diagnose the other hardware problems? prstat doesn't really show a whole lot going on in terms of CPU or memory usage. Not sure how to figure out the network problems, but as this is a dedicated server on the same private network as the web box, I don't think it's a connectivity issue (ping is less than a millisecond between the boxes). Is there something else I'm missing here? It's driving me nuts.

    Edit: Also forgot to mention that I've tried both persistent and non-persistent connections with minimal-to-no impact.
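
    One way to pin down when the outliers happen is to time every operation in the application and log only the slow ones, with a timestamp and the client's result code, so they can be correlated with traffic and network graphs afterwards. A sketch using the pecl memcached client (server address and the 50 ms threshold are placeholders):

        <?php
        // Log any memcached get slower than 50 ms, with the result code for diagnosis.
        $m = new Memcached();
        $m->addServer('10.0.0.5', 11211);   // hypothetical cache box

        function timed_get(Memcached $m, $key)
        {
            $t0  = microtime(true);
            $val = $m->get($key);
            $dt  = microtime(true) - $t0;
            if ($dt > 0.05) {
                error_log(sprintf('[%s] slow memcached get %s: %.4fs rc=%d',
                    date('c'), $key, $dt, $m->getResultCode()));
            }
            return $val;
        }

    If even tiny gets stall, the problem is likely below memcached (network path or scheduler on the cache box) rather than the cache itself; if only large sets stall, look at the network first.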

    Read the article

  • Email from my new vps is marked as spam

    - by Chriswede
    I got a new VPS from x10vps (x10hosting) and set up the domain via CloudFlare. This is what the email looks like:

        Delivered-To: [email protected]
        Received: by 10.64.19.240 with SMTP id i16csp357708iee; Tue, 9 Oct 2012 01:29:48 -0700 (PDT)
        Received: by 10.50.57.130 with SMTP id i2mr908846igq.56.1349771387599; Tue, 09 Oct 2012 01:29:47 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from power.SOURCEAPE.COM ([198.91.90.116]) by mx.google.com with ESMTPS id v8si25630942ica.46.2012.10.09.01.29.46 (version=TLSv1/SSLv3 cipher=OTHER); Tue, 09 Oct 2012 01:29:47 -0700 (PDT)
        Received-SPF: temperror (google.com: error in processing during lookup of [email protected]: DNS timeout) client-ip=198.91.90.116;
        Authentication-Results: mx.google.com; spf=temperror (google.com: error in processing during lookup of [email protected]: DNS timeout) [email protected]
        Received: from nk11p03mm-asmtp010.mac.com ([17.158.232.169]:54276) by power.SOURCEAPE.COM with esmtp (Exim 4.80) (envelope-from <[email protected]>) id 1TLVBD-0004Ig-1Y for [email protected]; Tue, 09 Oct 2012 12:28:43 +0400

    I then tried to enable SPF and DKIM and got the following message:

        In order to ensure that SPF or DKIM takes effect, you must confirm that this server is an authoritative nameserver for chvw.de. If you need help, contact your hosting provider.
        Status: Enabled
        Warning: cPanel is unable to verify that this server is an authoritative nameserver for chvw.de. [?]

    and the email header now looks like this:

        Delivered-To: [email protected]
        Received: by 10.50.183.227 with SMTP id ep3csp14506igc; Tue, 9 Oct 2012 01:55:23 -0700 (PDT)
        Received: by 10.50.40.133 with SMTP id x5mr992934igk.32.1349772923717; Tue, 09 Oct 2012 01:55:23 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from power.SOURCEAPE.COM ([198.91.90.116]) by mx.google.com with ESMTPS id ng8si25688859icb.42.2012.10.09.01.55.23 (version=TLSv1/SSLv3 cipher=OTHER); Tue, 09 Oct 2012 01:55:23 -0700 (PDT)
        Received-SPF: temperror (google.com: error in processing during lookup of [email protected]: DNS timeout) client-ip=198.91.90.116;
        Authentication-Results: mx.google.com; spf=temperror (google.com: error in processing during lookup of [email protected]: DNS timeout) [email protected]; dkim=neutral (bad format) [email protected]
        DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=chvw.de; s=default; h=Message-ID:Subject:To:From:Date:Content-Transfer-Encoding:Content-Type:MIME-Version; bh=iugsx3Lx0KnqjR7dj3wyQHnJ9pe/z3ntYEVk80k8rx4=; b=IrYsCtHdoPubXVOvLqxd7sLE/TyQTS5P3OrEg5SSUSKnQQcQ/fWWyBrmsrgkFSsw6jCmmRWMDR09vH5bQRpFPMA57B7pf8QRKhwXOWFBV+GnVUqICsfRjnNPvhx/lNp5;
        Received: from localhost ([127.0.0.1]:46539 helo=direct.chvw.de) by power.SOURCEAPE.COM with esmtpa (Exim 4.80) (envelope-from <[email protected]>) id 1TLVb0-0004dZ-Kd for [email protected]; Tue, 09 Oct 2012 12:55:22 +0400
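
    The "temperror ... DNS timeout" in the headers means Google never got an answer to the SPF lookup at all, which matches the cPanel warning: with the domain delegated to CloudFlare, this VPS is not an authoritative nameserver for chvw.de, so records published only in its local zone are invisible to the world. A sketch of the TXT records to add at the authoritative DNS (CloudFlare) instead; the selector "default" is an assumption based on the s=default in the DKIM-Signature above, and the key is whatever cPanel generated:

        ; publish at the authoritative DNS, not in the VPS's local zone
        chvw.de.                     IN TXT "v=spf1 a mx ip4:198.91.90.116 ~all"
        default._domainkey.chvw.de.  IN TXT "v=DKIM1; k=rsa; p=<public key from cPanel>"

    Once those resolve publicly, the temperror should become pass, and dkim=neutral should clear up as well.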

    Read the article

< Previous Page | 365 366 367 368 369 370 371 372 373 374 375 376  | Next Page >