Search Results

Search found 6301 results on 253 pages for 'cd man'.


  • md/raid:md2: cannot start dirty degraded array, kernel panic

    - by nl-x
    After having made use of a remote power switch, my server did not come back online. When I went to the datacenter and rebooted the computer on the spot, I saw the server booting (the CentOS progress bar running almost all the way to the end) and eventually giving the following messages:

        md/raid:md2: cannot start dirty degraded array.
        md/raid:md2: failed to run raid set.
        md: pers->run() failed ...
        md/raid:md2: cannot start dirty degraded array.
        md/raid:md2: failed to run raid set.
        md: pers->run() failed ...
        Kernel panic - not syncing: Attempted to kill init!
        Pid: 1, comm: init not tainted 2.6.32-279.1.1.el6.i686 #1
        Call Trace:
        [<c083bfbc>] ? panic+0x68/0x11c
        [<c045a501>] ? do_exit+0x741/0x750
        [<c045a54c>] ? do_group_exit+0x3c/0xa0
        [<c045a5c1>] ? sys_exit_group+0x11/0x20
        [<c083eba4>] ? syscall_call+0x7/0xb
        [<c083007b>] ? cmos_wake_setup+0x62/0x112

    The server runs CentOS and has software RAID, and I don't have backups of the RAID settings. The only backup I have is of /home and the database dumps. (Glad to at least have those, though.) Since the server is an old Dell PowerEdge 1750 with no CD-ROM drive, I have no way of booting the machine from a boot disk. I also remember from the past that the server wouldn't boot from a bootable USB disk either. So the only way I know how to boot the server is to go to the datacenter, pick up the server, take it to the office, screw open the case, attach a CD-ROM drive to an IDE slot on the motherboard, and boot it from there. I am hoping you guys can help me avoid this.

    I have looked a bit through the boot options. When CentOS is about to boot and I interrupt the boot countdown, I find the following entries:

        CentOS (2.6.32-279.1.1.el6.i686)
        CentOS Linux (2.6.32-71.29.1.el6.i686)
        centos (2.6.32-71.el6.i686)

    I think the first entry is the default one, because choosing it gets me to the above-mentioned kernel panic. The other ones end with something like "Sleeping forever". I can press 'e' to edit boot commands, 'a' to modify kernel arguments, and 'c' for a grub command line. The command line gives a grub prompt, but I have no idea how to get the system to boot without (trying to) access the dirty partitions. What I want to do is of course:

    - boot the machine
    - check the hard drive for errors
    - mark the drive as clean
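
    A hedged sketch of one possible recovery path from the grub menu, assuming the array members themselves are intact; the member device names below are placeholders, not taken from the question:

        # At the grub menu, press 'a' and append this to the kernel line so the
        # md driver is allowed to start the dirty degraded array:
        md-mod.start_dirty_degraded=1
        # (appending 'single' as well drops you into single-user mode)

        # Once booted, force-assemble and check the array; /dev/sda3 and
        # /dev/sdb3 are illustrative member partitions:
        mdadm --stop /dev/md2
        mdadm --assemble --force /dev/md2 /dev/sda3 /dev/sdb3
        fsck -y /dev/md2          # check the filesystem for errors

    Once the array resyncs and fsck finishes, the superblock should be marked clean again on the next normal boot.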


  • Intermittent lockups, unable to diagnose in over a year

    - by Magsol
    Here's a real doosie; I may just give my firstborn child to whomever helps me solve this problem. In July 2008, I assembled what would be my desktop computer for graduate school. Here are the specs of the machine I built: Thermaltake 750W PSU Corsair Dominator 2x2GB 240-pin SDRAM Thermaltake Tower Asus P5K Deluxe Motherboard Intel Core 2 Quad Q9300 2.5GHz CPU 2 x GeForce 8600 GT WD Caviar Blue 640GB hard drive CD burner DVD burner Soon thereafter, I ordered a new motherboard (because I was an idiot; that first motherboard supported CrossFire, not SLI), an Asus P5N-D. I was originally running Windows XP SP3. Pretty much right into the start of the fall semester, my desktop would simply lock up after awhile. If my system was largely idling, it would be after 1-3 days. If was gaming, it often happened an hour or two into my gaming session, indicating a link to activity level. Here's where it started getting interesting. I started looking at the system temps. The CPU was warmer than it should have been (~60s C), so I purchased some more efficient cooling compound a way better cooler for it. Now it hardly goes over 40 C. Intel was even kind enough to swap it out for free, just to rule it out. Lockups continued. The graphics cards were also running pretty warm: about 60 C idling. Removing one of them seemed to improve stability a little bit...as in, it wouldn't lock up quite as frequently, but still always eventually locked up. But it didn't matter which card I used or removed, the lockups continued. I reverted back to the original motherboard, the P5K Deluxe. Lockups continued. I purchased an entirely new motherboard, eVGA's nForce 750i. Lockups continued. Ran memtest86+ over and over and over, with no errors. Even RMA'd the memory. Lockups continued. Replaced the PSU with a Corsair 750W PSU. Lockups continued. Tried disconnecting all IDE drives (HDDs are SATA). Lockups continued. Replaced both graphics cards with a single Radeon HD 4980. Average temps are now always around 50 C when idling, 60 C only when gaming. Lockups continued. Throughout the whole ordeal, the system has been upgraded from Windows XP SP3 to Vista 32-bit, to Vista 64-bit, and is now at Windows 7 64-bit. Lockups have occurred at every step along the way (each OS was in place for at least a few months before the next upgrade). Edit: By "upgrade" I mean clean install each time. In addition to those reformats, I have performed many, many other reformats of the system and a reinstall of whatever OS had been previously installed in an attempt to rectify this problem, to no avail./Edit When the system locks up, there's no blue screen, no reboot, no error message of any kind. It simply freezes in place until I hit the reset button. Very, very rarely, once Windows boots back up, the system informs me that Windows has recovered from an error, but it can never find the source aside from some piece of hardware. I've swapped out every component in this computer, and there are more fans in it than I care to count...though for the sake of completeness: top 80mm case fan (out) rear 80mm case fan (out) rear 120mm case fan (out) front 120mm case fan (in) side 250mm case fan (in) giant CPU fan on-board motherboard fan (the eVGA board) triple-fan memory setup (came with the memory) PSU internal fan another 120mm fan I stuck on the underside of the video card to keep hot air from collecting at the bottom of the case I'm truly out of ideas. ANY help at all would be oh-so-very GREATLY appreciated. Thank you!


  • Inconsistent file downloads of (what should be) the same file

    - by Austin A.
    I'm working on a system that archives large collections of timetstamped images. Part of the system deals with saving an image to a growing .zip file. This morning I noticed that the log system said that an image was successfully downloaded and placed in the zip file, but when I downloaded the .zip (from an apache alias running on our server), the images didn't match the log. For example, although the log said that camera 3484 captured on January 17, 2011, when I download from the apache alias, the downloaded zip file only contains images up to January 14. So, I sshed onto the server, and unzipped the file in its own directory, and that zip file has images from January 14 to today (January 17). What strikes me as odd is that this should be the exact same file as the one I downloaded from the apache alias. Other experiments: I scp-ed the file from the server to my local machine, and the zip file has the newer images. But when I use an SCP client (in this case, Fugu for OSX), I get the zip file for the older images. In short: unzipping a file on the server or after downloading through scp or after downloading through wget gives one zip file, but unzipping a file from Chrome, Firefox, or SCP client gives a different zip file, when they should be exactly the same. Unzipping on the server... [user@server ~]$ cd /export1/amos/images/2011/84/3484/00003484/ [user@server 00003484]$ ls -la total 6180 drwxr-sr-x 2 user groupname 24 Jan 17 11:20 . drwxr-sr-x 4 user groupname 36 Jan 11 19:58 .. -rw-r--r-- 1 user groupname 6309980 Jan 17 12:05 2011.01.zip [user@server 00003484]$ unzip 2011.01.zip Archive: 2011.01.zip extracting: 20110114_140547.jpg extracting: 20110114_143554.jpg replace 20110114_143554.jpg? [y]es, [n]o, [A]ll, [N]one, [r]ename: y extracting: 20110114_143554.jpg extracting: 20110114_153458.jpg (...bunch of files...) extracting: 20110117_170459.jpg extracting: 20110117_173458.jpg extracting: 20110117_180501.jpg Using the wget through apache alias. local:~ user$ wget http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip --12:38:13-- http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip => `2011.01.zip' Resolving example.com... ip.ip.ip.ip Connecting to example.com|ip.ip.ip.ip|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 6,327,747 (6.0M) [application/zip] 100% [=====================================================================================================>] 6,327,747 1.03M/s ETA 00:00 12:38:56 (143.23 KB/s) - `2011.01.zip' saved [6327747/6327747] local:~ user$ unzip 2011.01.zip Archive: 2011.01.zip extracting: 20110114_140547.jpg (... same as before...) extracting: 20110117_183459.jpg Using scp to grab the zip local:~ user$ scp user@server:/export1/amos/images/2011/84/3484/00003484/2011.01.zip . 2011.01.zip 100% 6179KB 475.3KB/s 00:13 local:~ user$ unzip 2011.01.zip Archive: 2011.01.zip extracting: 20110114_140547.jpg (...same as before...) extracting: 20110117_183459.jpg Using Fugu to download 2011.01.zip from /export1/amos/images/2011/84/3484/00003484/ gives images 20110113_090457.jpg through 201100114_010554.jpg Using Firefox to download 2011.01.zip from http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip gives images 20110113_090457.jpg through 201100114_010554.jpg Using Chrome gives same results as Firefox. Relevant section from apache httpd.conf: # ScriptAlias: This controls which directories contain server scripts. 
# ScriptAliases are essentially the same as Aliases, except that # documents in the realname directory are treated as applications and # run by the server when requested rather than as documents sent to the client. # The same rules about trailing "/" apply to ScriptAlias directives as to # Alias. # ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" Alias /zipfiles/ /export1/amos/images/
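
    One quick check worth adding, sketched here with the host and paths copied from the transcripts above: compare checksums of the file as each path delivers it. Matching URLs with differing checksums usually point at a stale cache somewhere between Apache and the browser (mod_cache, a reverse proxy, or the client's own cache), since zip archives list their directory at the end and an old cached copy will cleanly unzip to the older file set.

        # Checksum of the authoritative copy on the server:
        ssh user@server md5sum /export1/amos/images/2011/84/3484/00003484/2011.01.zip

        # Checksum of the copy served through the Apache alias:
        wget -O via-apache.zip http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip
        md5sum via-apache.zip

        # Bypass any intermediate HTTP cache for one request and compare again:
        wget --no-cache -O via-apache-fresh.zip http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip
        md5sum via-apache-fresh.zip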


  • Wordpress Permissions OS X & MAMP

    - by Matt2020
    I have installed several local versions of Wordpress for development purposes. After the install I can create posts, pages, and edit admin options. However, as soon as I try to upload images, which would be saved in wp-content/uploads, I get an error:

        Upload Error: Unable to create directory ...../blog/wp-content/uploads/2011/05. Is its parent directory writable by the server?

    It looks like the MAMP server runs as user _www. The blog directory is owned by User1, and the group is User1. _www is not in the User1 group; should it be? I do not want to chmod 777 or 765 on the directories just to get it going.

    I googled up a couple of references. http://codex.wordpress.org/Changing_File_Permissions says, in "Permission Scheme for WordPress":

        All files should be owned by your user (ftp) account on your web server, and should be writable by that account. On shared hosts, files should never be owned by the webserver process itself (sometimes this is www, or apache, or nobody user). Any file that needs write access from WordPress should be owned or group-owned by the user account used by WordPress (which may be different than the server account). For example, you may have a user account that lets you FTP files back and forth to your server, but your server itself may run using a separate user, in a separate usergroup, such as dhapache or nobody. If WordPress is running as the FTP account, that account needs to have write access, i.e., be the owner of the files, or belong to a group that has write access. In the latter case, that would mean permissions are set more permissively than default (for example, 775 rather than 755 for folders, and 664 instead of 644).

    User and group are User1 (which is admin). Running "ps aux | grep httpd" shows httpd running as _www, so I think this means Wordpress is running as user _www. So the advice seems contradictory: "files should never be owned by the webserver process", i.e. _www, but then later it says "Any file that needs write access from WordPress should be owned or group-owned by the user account used by WordPress". So isn't this _www again?

    Another search found http://dancingengineer.com/computing/2009/07/how-to-install-wordpress-on-mac-os-x-leopard, which says:

        My preferred way to do this is to change the group of the wordpress directory and its contents to _www and give write permissions to the group. Keep the owner as your "username".

            $ cd /Users/"username"/Sites
            $ sudo chown -R username:_www wordpress_directory
            $ sudo chmod -R g+w wordpress_directory

        However, when I tried this, it did not work for automatic upgrades to newer versions of WordPress, although it worked for automatically updating the .htaccess file for pretty permalinks.

    It is not entirely clear to me what should be done. This last suggestion seems to be saying: change the group from User1 to _www and give the group write access, but then Wordpress upgrades won't work. Is this the right solution? I would have thought there would be a clear way to set this up on OS X 10.6. It would be great if there was a plugin that could run a script for each of the main OSes that Wordpress runs on.
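
    A minimal sketch of the middle-ground approach, limited to the directories WordPress actually needs to write; the MAMP document-root path is a placeholder, and this keeps User1 as owner while granting Apache's _www group write access only under wp-content:

        cd /Applications/MAMP/htdocs/blog    # placeholder document root
        sudo chown -R User1:_www wp-content
        sudo chmod -R g+w wp-content
        # Uploads should now work. Core upgrades may still prompt for FTP
        # credentials; one common (hedged) workaround is to add
        #   define('FS_METHOD', 'direct');
        # to wp-config.php so WordPress writes to the filesystem directly.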


  • How to use more than 3 virtual disks in Linux using CentOS and XenServer

    - by 010110110101
    I've attached 5 virtual disks to a Virtual Machine in Citrix XenServer. The VM has the xs-tools installed. Initially it said that it couldn't add so many disks. After I installed the xs-tools, it let me add all the disks. But /dev doesn't show all the disks. It shows these: /dev/xvda /dev/xvdb /dev/xvdc /dev/cdrom Perhaps it is bound by the limits of an IDE bus? (3 disks + CD-ROM) If so, how does one change the VM to use SCSI? Edit: According to the documentation: 2.6.3. VM Block Devices In the PV Linux case, block devices are passed through as PV devices. XenServer does not attempt to emulate SCSI or IDE, but instead provides a more suitable interface in the virtual environment in the form of xvd* devices. It is also possible to get an sd* device using the same mechanism, where the PV driver inside the VM takes over the SCSI device namespace. This is not desirable so it is best to use xvd* where possible for PV guests (this is the default for Debian and RHEL). For Windows or other fully virtualized guests, XenServer emulates an IDE bus in the form of an hd* device. When using Windows, installing the Citrix Tools for Virtual Machines installs a special PV driver that works in a similar way to Linux, except in the fully virtualized environment. Still, with 5 virtual disks attached, I don't see the other xvd devices. Edit #2: (attached requested info) Host Machine: XenServer 6.1 Linux version 2.6.32.43-0.4.1.xs1.6.10.777.170770xen (geeko@buildhost) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-51)) #1 SMP Wed Apr 17 05:52:03 EDT 2013 Guest Machine: CentOS release 6.4 (Final) Linux version 2.6.32-358.6.2.el6.x86_64 ([email protected]) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Thu May 16 20:59:36 UTC 2013 Output of 'fdisk -l' on Guest Machine: Note, the disk beyond the first 3 attached are not displaying -- there should be 4 100GB disks. (There are a total of 5 disks displayed in XenCenter -- 16GB, 100GB, 100GB, 100GB, 100GB) Disk /dev/xvdb: 107.4 GB, 107374182400 bytes 255 heads, 63 sectors/track, 13054 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xfb6c95b9 Device Boot Start End Blocks Id System /dev/xvdb1 1 13054 104856223+ 83 Linux Disk /dev/xvda: 17.2 GB, 17179869184 bytes 255 heads, 63 sectors/track, 2088 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000e5f41 Device Boot Start End Blocks Id System /dev/xvda1 * 1 64 512000 83 Linux Partition 1 does not end on cylinder boundary. 
/dev/xvda2 64 2089 16264192 8e Linux LVM Disk /dev/xvdc: 107.4 GB, 107374182400 bytes 255 heads, 63 sectors/track, 13054 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xed249ced Device Boot Start End Blocks Id System /dev/xvdc1 1 13054 104856223+ 83 Linux Disk /dev/mapper/vg_blue-lv_root: 14.6 GB, 14571012096 bytes 255 heads, 63 sectors/track, 1771 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_blue-lv_swap: 2080 MB, 2080374784 bytes 255 heads, 63 sectors/track, 252 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 I see that the Linux versions say SMP. The Guest VM doesn't say "xen" in the name. However, I have already run yum install kernel-xen. Could be a clue?
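
    A few hedged checks from inside the guest, assuming the PV block frontend (xen-blkfront) is what should be creating the device nodes; none of these commands are from the question itself:

        ls -l /sys/block | grep xvd          # which xvd devices the kernel actually created
        dmesg | grep -iE 'xvd|blkfront'      # did the frontend driver see the new vbds?
        ls /sys/devices/xen/ 2>/dev/null     # xenbus devices, if the xen bus is exposed

    If the extra disks were attached while the VM was running, rebooting the guest (or detaching and re-attaching the VBDs in XenCenter while the VM is shut down) is often enough to make the remaining xvdd/xvde nodes appear; a kernel that loads the PV drivers only at boot will not hotplug them.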


  • Setup access to SAS RAID drives with NTFS partitions on CentOS Machine

    - by Quanano
    We have a Dell PowerEdge 2900 system with an Adaptec 39320A SCSI controller card and 4 SAS hard drives attached, with NTFS partitions on them. We installed CentOS on the other RAID array with a different controller, and it is working fine. We are now trying to access the drives described above, and they are not being shown in /dev as sdb, etc. sda is the drive that we installed CentOS on, and it has sda1, sda2, sda3, etc. The CD-ROM has been picked up as well. If I scan for SCSI devices, the PERC and Adaptec controllers are both found. sg0 is the CD-ROM and sg2 is the CentOS install; I think sg1 is the other drive, but I cannot see any way to mount the partitions, as only the drive is listed in /dev. Thanks.

    EXTRA INFO

    fdisk -l (these are all from the install HDD, not the additional hard drives):

        Disk /dev/sda: 72.7 GB, 72746008576 bytes
        255 heads, 63 sectors/track, 8844 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x11e3119f

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1          64      512000   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/sda2              64        8845    70528000   8e  Linux LVM

        Disk /dev/mapper/vg_lal2server-lv_root: 34.4 GB, 34431041536 bytes
        255 heads, 63 sectors/track, 4186 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/mapper/vg_lal2server-lv_root doesn't contain a valid partition table

        Disk /dev/mapper/vg_lal2server-lv_swap: 21.1 GB, 21139292160 bytes
        255 heads, 63 sectors/track, 2570 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/mapper/vg_lal2server-lv_swap doesn't contain a valid partition table

        Disk /dev/mapper/vg_lal2server-lv_home: 16.6 GB, 16647192576 bytes
        255 heads, 63 sectors/track, 2023 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/mapper/vg_lal2server-lv_home doesn't contain a valid partition table

    modprobe a320raid:

        FATAL: Module a320raid not found.

    lsscsi -v:

        [0:0:0:0]   cd/dvd  TSSTcorp CDRWDVD TS-H492C  DE02  /dev/sr0
          dir: /sys/bus/scsi/devices/0:0:0:0  [/sys/devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0]
        [4:0:10:0]  enclosu DP       BACKPLANE         1.05  -
          dir: /sys/bus/scsi/devices/4:0:10:0  [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:0:10/4:0:10:0]
        [4:2:0:0]   disk    DELL     PERC 5/i          1.03  /dev/sda
          dir: /sys/bus/scsi/devices/4:2:0:0  [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:2:0/4:2:0:0]

    lsmod:

        Module                  Size  Used by
        fuse                   66285  0
        des_generic            16604  0
        ecb                     2209  0
        md4                     3461  0
        nls_utf8                1455  0
        cifs                  278370  0
        autofs4                26888  4
        ipt_REJECT              2383  0
        ip6t_REJECT             4628  2
        nf_conntrack_ipv6       8748  2
        nf_defrag_ipv6         12182  1 nf_conntrack_ipv6
        xt_state                1492  2
        nf_conntrack           79453  2 nf_conntrack_ipv6,xt_state
        ip6table_filter         2889  1
        ip6_tables             19458  1 ip6table_filter
        ipv6                  322029  31 ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6
        bnx2                   79618  0
        ses                     6859  0
        enclosure               8395  1 ses
        dcdbas                  9219  0
        serio_raw               4818  0
        sg                     30124  0
        iTCO_wdt               13662  0
        iTCO_vendor_support     3088  1 iTCO_wdt
        i5000_edac              8867  0
        edac_core              46773  3 i5000_edac
        i5k_amb                 5105  0
        shpchp                 33482  0
        ext4                  364410  3
        mbcache                 8144  1 ext4
        jbd2                   88738  1 ext4
        sd_mod                 39488  3
        crc_t10dif              1541  1 sd_mod
        sr_mod                 16228  0
        cdrom                  39771  1 sr_mod
        megaraid_sas           77090  2
        aic79xx               129492  0
        scsi_transport_spi     26151  1 aic79xx
        pata_acpi               3701  0
        ata_generic             3837  0
        ata_piix               22846  0
        radeon               1023359  1
        ttm                    70328  1 radeon
        drm_kms_helper         33236  1 radeon
        drm                   230675  3 radeon,ttm,drm_kms_helper
        i2c_algo_bit            5762  1 radeon
        i2c_core               31276  4 radeon,drm_kms_helper,drm,i2c_algo_bit
        dm_mirror              14101  0
        dm_region_hash         12170  1 dm_mirror
        dm_log                 10122  2 dm_mirror,dm_region_hash
        dm_mod                 81500  11 dm_mirror,dm_log
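
    A hedged next step, not a confirmed fix: the aic79xx module for the Adaptec 39320A is already loaded per lsmod, so it is worth confirming whether the SAS disks ever got device nodes, forcing a rescan, and making sure CentOS can read NTFS at all (stock CentOS has no NTFS driver). The host number and partition name below are placeholders:

        cat /proc/scsi/scsi                    # every SCSI device the kernel sees
        ls /sys/class/scsi_host                # find the Adaptec's host number
        echo "- - -" > /sys/class/scsi_host/host2/scan   # rescan it (host2 is illustrative)
        yum install ntfs-3g                    # NTFS driver, packaged in EPEL
        mkdir -p /mnt/sas
        mount -t ntfs-3g /dev/sdb1 /mnt/sas    # once an sdb1 partition appears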


  • How do I reset/update my BIOS for Optiplex GX280?

    - by Sam Langlhey
    So far this has been a nightmare for me, which has been frustrating me constantly. I am using a Dell Optiplex GX280 with Windows XP Home Edition, running BIOS version A04. Recently, I rebooted the PC to find that it's not booting: it gets to the Windows boot-up screen with the progress bar, only to restart and repeat the same process again, over and over. Frustrated as I was, I inserted the Windows recovery CD to either repair or reinstall the operating system, only to find that was not possible. I hit F8 for the boot options, and each boot option I selected gave me an error saying "Selected boot device is not available."

    Right after that, I went into the BIOS settings and ran a diagnostic test, which recognized all the boot devices onboard. So now I cannot even repair or reinstall Windows XP, because the system will not boot from any of the boot devices. The surprise came when I removed the hard drive from the computer and loaded it into another computer successfully; that's right, there is nothing wrong with the hard drive. After that I was totally puzzled.

    I found a few pointers online saying that the BIOS start-up block might be corrupted and that I might need to flash/update the BIOS. I found detailed instructions on how to create a boot disk by downloading the BIOS firmware from the manufacturer's website, and did exactly as instructed below:

    1. Download the latest version (or your chosen version) of the BIOS file for your computer or motherboard from the manufacturer's support site.
    2. Rename the downloaded file to AMIBOOT.ROM.
    3. Copy the file to a floppy disk.
    4. Insert the floppy disk into the floppy drive.
    5. Turn on the system.

    After I did that and powered on the PC to boot from the floppy drive, it gave me this error message: "Non-System Disk or Disk Error. Replace and Strike any key when ready." I kept pressing [Ctrl]+[Home] to force it, but that did not give any satisfying result.

    Desperate as I am, my next attempt is to try the instructions below. Since I want to be ready in the event it does not work, do you have any solution you can provide? Please keep in mind that I cannot boot from any of the devices at this moment. My only hope now is a solution that works through the floppy drive, since that's the only drive unaffected. Thank you very much for your advice and support in advance.

        To create a Windows startup disk, insert a floppy disk into the drive of a similarly configured, working Windows XP system, launch My Computer, right-click the floppy disk icon, and select the Format command from the context menu. When you see the Format dialog box, leave all the default settings as they are and click the Start button. Once the format operation is complete, close the Format dialog box to return to My Computer, double-click the drive C icon to access the root directory, and copy the following three files to the floppy disk: Boot.ini, NTLDR, Ntdetect.com.


  • Wi-Fi Stick with ZD1211 chip refuses to work on Ubuntu >8.10. No clue.

    - by Benjamin Maus
    I have a machine running Ubuntu 9.10 (Karmic, x86_64). Everything is running smoothly so far, except for the Wi-Fi USB stick. The same device worked perfectly in 8.10. The wireless device is a GW-US54GXS using the ZyDAS ZD1211 chipset.

    dmesg output after plugging it in:

        [  196.303436] phy0: Selected rate control algorithm 'minstrel'
        [  196.304209] zd1211rw 2-1:1.0: phy0
        [  196.304227] usbcore: registered new interface driver zd1211rw
        [  196.334137] usb 2-1: firmware: requesting zd1211/zd1211b_ub
        [  196.357463] usb 2-1: firmware: requesting zd1211/zd1211b_uphr
        [  196.402643] zd1211rw 2-1:1.0: firmware version 4725
        [  196.442611] zd1211rw 2-1:1.0: zd1211b chip 2019:5303 v4810 high 00-90-cc AL2230_RF pa0 ---N-
        [  196.463814] usb 2-1: firmware: requesting zd1211/zd1211b_ub
        [  196.466823] usb 2-1: firmware: requesting zd1211/zd1211b_uphr

    Syslog output:

        Nov 5 11:20:24 somesystem kernel: [  196.303436] phy0: Selected rate control algorithm 'minstrel'
        Nov 5 11:20:24 kierkegaard NetworkManager: <info> Found radio killswitch rfkill0 (at /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/ieee80211/phy0/rfkill0) (driver <unknown>)
        Nov 5 11:20:24 somesystem kernel: [  196.304209] zd1211rw 2-1:1.0: phy0
        Nov 5 11:20:24 somesystem kernel: [  196.304227] usbcore: registered new interface driver zd1211rw
        Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wmaster0, iface: wmaster0)
        Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wmaster0, iface: wmaster0): no ifupdown configuration found.
        Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wlan0, iface: wlan0)
        Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wlan0, iface: wlan0): no ifupdown configuration found.
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): driver supports SSID scans (scan_capa 0x01).
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): new 802.11 WiFi device (driver: 'zd1211rw')
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): exported as /org/freedesktop/NetworkManager/Devices/2
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): now managed
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): device state change: 1 -> 2 (reason 2)
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): bringing up device.
        Nov 5 11:20:24 somesystem kernel: [  196.334137] usb 2-1: firmware: requesting zd1211/zd1211b_ub
        Nov 5 11:20:24 somesystem kernel: [  196.357463] usb 2-1: firmware: requesting zd1211/zd1211b_uphr
        Nov 5 11:20:24 somesystem kernel: [  196.402643] zd1211rw 2-1:1.0: firmware version 4725
        Nov 5 11:20:24 somesystem kernel: [  196.442611] zd1211rw 2-1:1.0: zd1211b chip 2019:5303 v4810 high 00-90-cc AL2230_RF pa0 ---N-
        Nov 5 11:20:24 somesystem NetworkManager: <WARN> nm_device_hw_bring_up(): (wlan0): device not up after timeout!
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): deactivating device (reason: 2).
        Nov 5 11:20:24 somesystem kernel: [  196.463814] usb 2-1: firmware: requesting zd1211/zd1211b_ub
        Nov 5 11:20:24 somesystem kernel: [  196.466823] usb 2-1: firmware: requesting zd1211/zd1211b_uphr
        Nov 5 11:20:29 somesystem wpa_supplicant[978]: Could not set interface 'wlan0' UP
        Nov 5 11:20:29 somesystem wpa_supplicant[978]: Failed to initialize driver interface
        Nov 5 11:20:29 somesystem NetworkManager: <WARN> nm_supplicant_interface_add_cb(): Unexpected supplicant error getting interface: wpa_supplicant couldn't grab this interface.

    Gnome tells me in the network menu that the device is "not ready". It appears in iwconfig but not in ifconfig. The same symptoms appear when I boot from the live CD. How can I solve this dilemma?
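
    One cheap test, suggested by the "Found radio killswitch rfkill0" line that appears just before the interface fails to come up: check whether the radio is soft- or hard-blocked. This is a sketch, assuming the rfkill utility is installed (it ships with Karmic):

        rfkill list                  # is phy0 listed as soft or hard blocked?
        sudo rfkill unblock all
        sudo ifconfig wlan0 up       # retry bringing the interface up by hand
        dmesg | tail -n 20           # watch for fresh zd1211rw errors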


  • ovs-vsctl: "eth0" is not a valid UUID

    - by Przemek Lach
    I'm trying to setup an open v-switch inside my Ubuntu 12.04 Server VM. I have created three interfaces for this VM and I want to create a port mirror inside of the VM using these there interfaces and open v-switch. There are three Host-Only Adapters: eth0, eth1, eth2. The idea is that three other VM's will be connected to these adapters. One of these VM's will stream UDP video to eth0 and I want the vswitch'd VM to mirror those packets from eth0 onto eth1 and eth2. Each of the VM's connected to eth1 and eth2 will get the same video stream. I performed the following steps to install open v-switch: $ apt-get install python-simplejson python-qt4 python-twisted-conch automake autoconf gcc uml-utilities libtool build-essential $ apt-get install build-essential autoconf automake pkg-config $ wget http://openvswitch.org/releases/openvswitch-1.7.1.tar.gz $ tar xf http://openvswitch.org/releases/openvswitch-1.7.1.tar.gz $ cd http://openvswitch.org/releases/openvswitch-1.7.1.tar.gz $ apt-get install libssl-dev iproute tcpdump linux-headers-`uname -r` $ ./boot.sh $ ./configure - -with-linux=/lib/modules/`uname -r`/build $ make $ sudo make install After installation I configured as follows: $ insmod datapath/linux/openvswitch.ko $ sudo touch /usr/local/etc/ovs-vswitchd.conf $ mkdir -p /usr/local/etc/openvswitch $ ovsdb-tool create /usr/local/etc/openvswitch/conf.db Then I started the server: $ ovsdb-server /usr/local/etc/openvswitch/conf.db \ --remote=punix:/usr/local/var/run/openvswitch/db.sock \ --remote=db:Open_vSwitch,manager_options \ --private-key=db:SSL,private_key \ --certificate=db:SSL,certificate \ --bootstrap-ca-cert=db:SSL,ca_cert --pidfile --detach --log-file $ ovs-vsctl –no-wait init (run only once) $ ovs-vswitchd --pidfile --detach The above steps I got from this tutorial and it all worked fine. I then proceeded to add a port mirror based on the open v-switch documentation under Port Mirroring. 
I successfully completed the following commands: $ ovs-vsctl add-br br0 $ ovs-vsctl add-port br0 eth0 $ ovs-vsctl add-port br0 eth1 $ ovs-vsctl add-port br0 eth2 $ ifconfig eth0 promisc up $ ifconfig eth1 promisc up $ ifconfig eth2 promisc up At this point when I run ovs-vsctl show I get the following: 75bda8c2-b870-438b-9115-e36288ea1cd8 Bridge "br0" Port "br0" Interface "br0" type: internal Port "eth0" Interface "eth0" Port "eth2" Interface "eth2" Port "eth1" Interface "eth1" And when I run ifconfig I get the following: eth0 Link encap:Ethernet HWaddr 08:00:27:9f:51:ca inet6 addr: fe80::a00:27ff:fe9f:51ca/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:17 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1494 (1.4 KB) TX bytes:468 (468.0 B) eth1 Link encap:Ethernet HWaddr 08:00:27:53:02:d4 inet6 addr: fe80::a00:27ff:fe53:2d4/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:17 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1494 (1.4 KB) TX bytes:468 (468.0 B) eth2 Link encap:Ethernet HWaddr 08:00:27:cb:a5:93 inet6 addr: fe80::a00:27ff:fecb:a593/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:17 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1494 (1.4 KB) TX bytes:468 (468.0 B) eth3 Link encap:Ethernet HWaddr 08:00:27:df:bb:d8 inet addr:192.168.1.139 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fedf:bbd8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2211 errors:0 dropped:0 overruns:0 frame:0 TX packets:1196 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:182987 (182.9 KB) TX bytes:125441 (125.4 KB) NOTE: I use eth3 as a bridge adapter for SSH'ing into the VM. So now, I think I've done everything correctly but when I try to create the bridge using the following command: $ ovs-vsctl -- set Bridge br0 mirrors=@m -- --id=@eth0 get Port eth0 -- --id=@eth1 get Port eth1 -- --id=@m create Mirror name=app1Mirror select-dst-port=eth0 select-src-port=@eth0 output-port=@eth1,eth2 I get the following error: ovs-vsctl: "eth0" is not a valid UUID I don't understand why it's not able to find the interfaces?
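
    For what it's worth, this error usually means a bare port name was passed where the database expects a record reference: in the failing command, select-dst-port=eth0 (no @) and the trailing eth2 in output-port=@eth1,eth2 are plain strings, which ovs-vsctl tries to parse as UUIDs. A hedged rework following the record-reference syntax from the Port Mirroring documentation (untested against this exact setup):

        ovs-vsctl -- --id=@eth0 get Port eth0 \
                  -- --id=@eth1 get Port eth1 \
                  -- --id=@m create Mirror name=app1Mirror \
                        select-src-port=@eth0 select-dst-port=@eth0 \
                        output-port=@eth1 \
                  -- set Bridge br0 mirrors=@m

    Note also that output-port holds a single port reference in the Mirror schema, so mirroring the stream to both eth1 and eth2 would most likely take two Mirror rows, one per output port.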


  • Set up Linux box for hosting A-Z [apache mysql php ssl]

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via VPN only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers and document my progress/pitfalls. Hopefully someday this will help someone down the line.

    The details:

    - CentOS 5.5 x86_64
    - httpd: Apache/2.2.3
    - mysql: 5.0.77 (to be upgraded)
    - php: 5.1 (to be upgraded)

    The requirements:

    - SECURITY!!
    - Secure file transfer
    - Secure client access (SSL certs and CA)
    - Secure data storage
    - Virtualhosts/multiple subdomains
    - Local email would be nice, but not critical

    The steps so far:

    Download the latest CentOS DVD iso (torrent worked great for me).

    Install CentOS. While going through the install, I checked the Server Components option, thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea.

    Basic config: set up users, networking/IP address, etc. Yum update/upgrade.

    Upgrade PHP: to upgrade PHP to the latest version, I had to look to a repo outside CentOS. IUS looks great and I'm happy I found it!

        # cd /tmp
        # wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
        # rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm
        # wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm
        # rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm
        # yum list | grep -w \.ius\.    [lists all packages available in the IUS repo]
        # rpm -qa | grep php            [lists the installed packages that need to be removed; they must be removed before installing the IUS packages, otherwise there will be conflicts]
        # yum shell
        > remove php-gd php-cli php-odbc php-mbstring php-pdo php php-xml php-common php-ldap php-mysql php-imap
        Setting up Remove Process
        > install php53 php53-mcrypt php53-mysql php53-cli php53-common php53-ldap php53-imap php53-devel
        > transaction solve
        > transaction run
        Leaving Shell
        # php -v
        PHP 5.3.2 (cli) (built: Apr  6 2010 18:13:45)

    This process removes the old version of PHP and installs the latest.

    Upgrade MySQL: pretty much the same process as with PHP.

        # /etc/init.d/mysqld stop        [OK]
        # rpm -qa | grep mysql           [lists the installed mysql packages]
        # yum shell
        > remove mysql mysql-server
        Setting up Remove Process
        > install mysql51 mysql51-server mysql51-devel
        > transaction solve
        > transaction run
        Leaving Shell
        # service mysqld start           [OK]
        # mysql -v
        Server version: 5.1.42-ius  Distributed by The IUS Community Project

    And this is where I'm at. I will keep editing this as I make progress. Any tips on how to configure virtualhosts for SSL, setting up a CA, setting up SFTP with OpenSSH, or anything else would be appreciated.
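
    On the CA question: a hedged sketch of a minimal private CA with openssl, enough to sign one server certificate for an SSL virtualhost. All filenames are placeholders, and a real deployment would want a proper openssl.cnf and serial/index files rather than -set_serial:

        # Create the CA key and a self-signed CA certificate (10 years):
        openssl genrsa -out internal-ca.key 2048
        openssl req -new -x509 -days 3650 -key internal-ca.key -out internal-ca.crt

        # Create a key and signing request for the app host, then sign it:
        openssl genrsa -out app1.key 2048
        openssl req -new -key app1.key -out app1.csr
        openssl x509 -req -days 365 -in app1.csr \
            -CA internal-ca.crt -CAkey internal-ca.key -set_serial 01 \
            -out app1.crt

    The resulting app1.crt/app1.key pair then goes into a mod_ssl virtualhost (SSLEngine on, SSLCertificateFile, SSLCertificateKeyFile), and distributing internal-ca.crt to the VPN clients lets them trust it; adding SSLVerifyClient would extend this to client certificates.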


  • I added some options to stop spam with Postfix, but now won't send email to remote domains

    - by willdanceforfun
    I had a working Postfix server, but added a few lines to my main.cf in the hope of blocking some common spam. The lines I added were:

        smtpd_helo_required = yes
        smtpd_recipient_restrictions =
            reject_invalid_hostname,
            reject_unknown_recipient_domain,
            reject_unauth_pipelining,
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination,
            reject_rbl_client multi.uribl.com,
            reject_rbl_client dsn.rfc-ignorant.org,
            reject_rbl_client dul.dnsbl.sorbs.net,
            reject_rbl_client list.dsbl.org,
            reject_rbl_client sbl-xbl.spamhaus.org,
            reject_rbl_client bl.spamcop.net,
            reject_rbl_client dnsbl.sorbs.net,
            reject_rbl_client cbl.abuseat.org,
            reject_rbl_client ix.dnsbl.manitu.net,
            reject_rbl_client combined.rbl.msrbl.net,
            reject_rbl_client rabl.nuclearelephant.com,
            permit

    Postfix now appears to receive normal email fine and block spam. But when I try to use this server myself to send to a remote domain (an address not on my server), I get bounced, with maillog saying something like this:

        Nov 12 06:19:36 srv postfix/smtpd[11756]: NOQUEUE: reject: RCPT from unknown[xx.xx.x.xxx]: 450 4.1.2 <[email protected]>: Recipient address rejected: Domain not found; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<[192.168.1.100]>

    Is that saying "Domain not found" for gmail.com? Why is that recipient address rejected? The output of postconf -n is:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        broken_sasl_auth_clients = yes
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = all
        inet_protocols = all
        mail_owner = postfix
        mailbox_size_limit = 0
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
        mydomain = primarydomain.net
        myhostname = mail.primarydomain.net
        myorigin = $myhostname
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        relay_domains = $mydestination, primarydomain.net, secondarydomain.org
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtpd_client_restrictions = permit_sasl_authenticated
        smtpd_helo_required = yes
        smtpd_recipient_restrictions = reject_invalid_hostname, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination, reject_rbl_client multi.uribl.com, reject_rbl_client dsn.rfc-ignorant.org, reject_rbl_client dul.dnsbl.sorbs.net, reject_rbl_client list.dsbl.org, reject_rbl_client sbl-xbl.spamhaus.org, reject_rbl_client bl.spamcop.net, reject_rbl_client dnsbl.sorbs.net, reject_rbl_client cbl.abuseat.org, reject_rbl_client ix.dnsbl.manitu.net, reject_rbl_client combined.rbl.msrbl.net, reject_rbl_client rabl.nuclearelephant.com, permit
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_path = private/auth
        smtpd_sasl_type = dovecot
        smtpd_sender_restrictions = reject_unknown_sender_domain
        soft_bounce = no
        unknown_local_recipient_reject_code = 550
        virtual_alias_domains = mail.secondarydomain.org
        virtual_alias_maps = hash:/etc/postfix/virtual

    Any insight greatly appreciated.

    Edit: here is the output of dig mx gmail.com from the server:

        ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.4 <<>> mx gmail.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31766
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 4, ADDITIONAL: 14

        ;; QUESTION SECTION:
        ;gmail.com.                     IN      MX

        ;; ANSWER SECTION:
        gmail.com.              1207    IN      MX      5 gmail-smtp-in.l.google.com.
        gmail.com.              1207    IN      MX      30 alt3.gmail-smtp-in.l.google.com.
        gmail.com.              1207    IN      MX      20 alt2.gmail-smtp-in.l.google.com.
        gmail.com.              1207    IN      MX      40 alt4.gmail-smtp-in.l.google.com.
        gmail.com.              1207    IN      MX      10 alt1.gmail-smtp-in.l.google.com.

        ;; AUTHORITY SECTION:
        gmail.com.              109168  IN      NS      ns1.google.com.
        gmail.com.              109168  IN      NS      ns4.google.com.
        gmail.com.              109168  IN      NS      ns3.google.com.
        gmail.com.              109168  IN      NS      ns2.google.com.

        ;; ADDITIONAL SECTION:
        alt1.gmail-smtp-in.l.google.com. 207 IN A       173.194.70.27
        alt1.gmail-smtp-in.l.google.com. 248 IN AAAA    2a00:1450:4001:c02::1b
        gmail-smtp-in.l.google.com. 200 IN      A       173.194.67.26
        gmail-smtp-in.l.google.com. 248 IN      AAAA    2a00:1450:400c:c05::1b
        alt3.gmail-smtp-in.l.google.com. 207 IN A       74.125.143.27
        alt3.gmail-smtp-in.l.google.com. 249 IN AAAA    2a00:1450:400c:c05::1b
        alt2.gmail-smtp-in.l.google.com. 207 IN A       173.194.69.27
        alt2.gmail-smtp-in.l.google.com. 248 IN AAAA    2a00:1450:4008:c01::1b
        alt4.gmail-smtp-in.l.google.com. 207 IN A       173.194.79.27
        alt4.gmail-smtp-in.l.google.com. 249 IN AAAA    2607:f8b0:400e:c01::1a
        ns2.google.com.         281970  IN      A       216.239.34.10
        ns3.google.com.         281970  IN      A       216.239.36.10
        ns4.google.com.         281970  IN      A       216.239.38.10
        ns1.google.com.         281970  IN      A       216.239.32.10
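
    One hedged line of investigation, not a confirmed diagnosis: the 450 is reject_unknown_recipient_domain firing, which means the smtpd process itself failed a DNS lookup for the recipient domain, and because that check is listed before permit_mynetworks and permit_sasl_authenticated it applies even to your own authenticated clients. If smtpd runs chrooted, its resolver config lives inside the chroot and can disagree with the one dig uses:

        # Is smtpd chrooted? (check the 'chroot' column in master.cf)
        grep '^smtp .*smtpd' /etc/postfix/master.cf
        # A chrooted smtpd resolves via the copy inside the queue directory:
        diff /etc/resolv.conf /var/spool/postfix/etc/resolv.conf
        cp /etc/resolv.conf /var/spool/postfix/etc/ && postfix reload

    Separately, moving permit_mynetworks and permit_sasl_authenticated ahead of the reject_* checks in smtpd_recipient_restrictions would let trusted clients skip the recipient-domain lookup entirely.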


  • How can I make subversion reset the stored passwords/users and remember my authentication credentials?

    - by NicDumZ
    Hello folks!

    Background: I used to have everything working just fine on my fresh install:

        $ svn co https://domain:443/ test1
        Error validating server certificate for 'https://domain:443':
         - The certificate is not issued by a trusted authority. Use the
           fingerprint to validate the certificate manually!
        Certificate information:
         - Hostname: **REMOVED**
         - Valid: **REMOVED**
         - Issuer: **REMOVED**
         - Fingerprint: **checked with issuer and REMOVED**
        (R)eject, accept (t)emporarily or accept (p)ermanently? p
        Authentication realm: <https://domain:443> Subversion repository
        Password for 'nicdumz-machine-hostname':
        Authentication realm: <https://domain:443> Subversion repository
        Username: nicdumz
        Password for 'nicdumz':
        # proceeds to checkout correctly

        $ svn co https://domain:443/ test2
        # checks out nicely, without asking for my password.

    At some point I needed to commit stuff using a different account, so I did that:

        $ svn ci --username other.user
        Authentication realm: <https://domain:443> Subversion repository
        Password for 'other.user':
        # works fine

    But since then, every time I want to commit as 'nicdumz' (the default user; all repos have been checked out with that user), it prompts me for my password:

        $ svn ci
        Authentication realm: <https://domain:443> Subversion repository
        Password for 'nicdumz':

    Hey, come on, why? :) The same happens if I want a fresh checkout, since read access is also protected. So I tried fixing the issue by myself. I read around that ~/.subversion/auth stores credentials, so I moved it out of the way:

        $ cd ~/.subversion
        $ mv auth oldauth
        $ mkdir auth

    It seemed to work at first, because svn had forgotten about certificate validation:

        $ svn co https://domain:443/ test3
        Error validating server certificate for 'https://domain:443':
         - The certificate is not issued by a trusted authority. Use the
           fingerprint to validate the certificate manually!
        Certificate information:
         - Hostname: **REMOVED**
         - Valid: **REMOVED**
         - Issuer: **REMOVED**
         - Fingerprint: **checked with issuer and REMOVED**
        (R)eject, accept (t)emporarily or accept (p)ermanently? p
        Authentication realm: <https://domain:443> Subversion repository
        Password for 'nicdumz-machine-hostname':
        Authentication realm: <https://domain:443> Subversion repository
        Username: nicdumz
        Password for 'nicdumz':
        # proceeds to checkout correctly

        $ svn up
        Authentication realm: <https://domain:443> Subversion repository
        Password for 'nicdumz':

    What? How is this happening? If you have suggestions for investigating this behaviour further, I am very interested. If I'm correct, there is no way to do a verbose svn up or anything of the like, so I'm not sure how far I can dig myself.

    Oh, and for what it's worth:

        $ svn --version
        svn, version 1.6.6 (r40053)
           compiled Oct 26 2009, 06:19:08
        Copyright (C) 2000-2009 CollabNet.
        Subversion is open source software, see http://subversion.tigris.org/
        This product includes software developed by CollabNet (http://www.Collab.Net/).

        The following repository access (RA) modules are available:

        * ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
          - handles 'http' scheme
          - handles 'https' scheme
        * ra_svn : Module for accessing a repository using the svn network protocol.
          - with Cyrus SASL authentication
          - handles 'svn' scheme
        * ra_local : Module for accessing a repository on local disk.
          - handles 'file' scheme
        * ra_serf : Module for accessing a repository via WebDAV protocol using serf.
          - handles 'http' scheme
          - handles 'https' scheme
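
    A hedged guess at the mechanism, worth checking before anything else: svn 1.6 refuses to cache plaintext passwords unless the runtime config allows it, and it remembers a per-realm "don't store" decision inside the auth area. These checks use the stock per-user config paths:

        # Look for password-store settings that may have flipped:
        grep -n 'store-passwords\|store-plaintext-passwords' \
            ~/.subversion/servers ~/.subversion/config

        # To allow caching again, set these under [global] in
        # ~/.subversion/servers (plaintext storage is a security trade-off):
        #   store-passwords = yes
        #   store-plaintext-passwords = yes

        # Then run one interactive command so the credential is written out:
        svn up
        ls ~/.subversion/auth/svn.simple/    # a cached credential file should appear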


  • OpenVPN (HideMyAss) client on Ubuntu: Route only HTTP traffic

    - by Andersmith
    I want to use HideMyAss VPN (hidemyass.com) on Ubuntu Linux to route only HTTP/HTTPS traffic (ports 80 & 443) to the HideMyAss VPN server, and leave all the other traffic (MySQL, SSH, etc.) alone. I'm running Ubuntu on AWS EC2 instances. The problem is that when I try to run the default HMA script, I suddenly can't SSH into the Ubuntu instance anymore and have to reboot it from the AWS console. I suspect the Ubuntu instance will also have trouble connecting to the RDS MySQL database, but haven't confirmed it.

    HMA uses OpenVPN like this:

        sudo openvpn client.cfg

    The client configuration file (client.cfg) looks like this:

        ##############################################
        # Sample client-side OpenVPN 2.0 config file #
        # for connecting to multi-client server.     #
        #                                            #
        # This configuration can be used by multiple #
        # clients, however each client should have   #
        # its own cert and key files.                #
        #                                            #
        # On Windows, you might want to rename this  #
        # file so it has a .ovpn extension           #
        ##############################################

        # Specify that we are a client and that we
        # will be pulling certain config file directives
        # from the server.
        client
        auth-user-pass
        #management-query-passwords
        #management-hold
        # Disable management port for debugging port issues
        #management 127.0.0.1 13010
        ping 5
        ping-exit 30

        # Use the same setting as you are using on
        # the server.
        # On most systems, the VPN will not function
        # unless you partially or fully disable
        # the firewall for the TUN/TAP interface.
        #;dev tap
        dev tun

        # Windows needs the TAP-Win32 adapter name
        # from the Network Connections panel
        # if you have more than one. On XP SP2,
        # you may need to disable the firewall
        # for the TAP adapter.
        ;dev-node MyTap

        # Are we connecting to a TCP or
        # UDP server? Use the same setting as
        # on the server.
        proto tcp
        ;proto udp

        # The hostname/IP and port of the server.
        # You can have multiple remote entries
        # to load balance between the servers.
        # All VPN Servers are added at the very end
        ;remote my-server-2 1194

        # Choose a random host from the remote
        # list for load-balancing. Otherwise
        # try hosts in the order specified.
        # We order the hosts according to number of connections.
        # So no need to randomize the list
        # remote-random

        # Keep trying indefinitely to resolve the
        # host name of the OpenVPN server. Very useful
        # on machines which are not permanently connected
        # to the internet such as laptops.
        resolv-retry infinite

        # Most clients don't need to bind to
        # a specific local port number.
        nobind

        # Downgrade privileges after initialization (non-Windows only)
        ;user nobody
        ;group nobody

        # Try to preserve some state across restarts.
        persist-key
        persist-tun

        # If you are connecting through an
        # HTTP proxy to reach the actual OpenVPN
        # server, put the proxy server/IP and
        # port number here. See the man page
        # if your proxy server requires
        # authentication.
        ;http-proxy-retry # retry on connection failures
        ;http-proxy [proxy server] [proxy port #]

        # Wireless networks often produce a lot
        # of duplicate packets. Set this flag
        # to silence duplicate packet warnings.
        ;mute-replay-warnings

        # SSL/TLS parms.
        # See the server config file for more
        # description. It's best to use
        # a separate .crt/.key file pair
        # for each client. A single ca
        # file can be used for all clients.
        ca ./keys/ca.crt
        cert ./keys/hmauser.crt
        key ./keys/hmauser.key

        # Verify server certificate by checking
        # that the certificate has the nsCertType
        # field set to "server". This is an
        # important precaution to protect against
        # a potential attack discussed here:
        # http://openvpn.net/howto.html#mitm
        #
        # To use this feature, you will need to generate
        # your server certificates with the nsCertType
        # field set to "server". The build-key-server
        # script in the easy-rsa folder will do this.
        ;ns-cert-type server

        # If a tls-auth key is used on the server
        # then every client must also have the key.
        ;tls-auth ta.key 1

        # Select a cryptographic cipher.
        # If the cipher option is used on the server
        # then you must also specify it here.
        ;cipher x

        # Enable compression on the VPN link.
        # Don't enable this unless it is also
        # enabled in the server config file.
        #comp-lzo

        # Set log file verbosity.
        verb 3

        # Silence repeating messages
        ;mute 20

        # Detect proxy automatically
        #auto-proxy

        # Need this for Vista connection issue
        route-metric 1

        # Get rid of the cached password warning
        #auth-nocache
        #show-net-up
        #dhcp-renew
        #dhcp-release
        #route-delay 0 120

        # added to prevent MITM attack
        ns-cert-type server

        #
        # Remote servers added dynamically by the master server
        # DO NOT CHANGE below this line
        #
        remote-random
        remote 173.242.116.200 443 # 0
        remote 38.121.77.74 443 # 0
        # etc...
        remote 67.23.177.5 443 # 0
        remote 46.19.136.130 443 # 0
        remote 173.254.207.2 443 # 0
        # END
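
    Routing by destination port is not something OpenVPN's route directives can express on their own; a common pattern, sketched here untested with illustrative table and mark numbers, is to stop the client from accepting the server's pushed default route and steer only marked packets through the tunnel with policy routing:

        # In client.cfg: ignore routes pushed by the HMA server.
        #   route-nopull

        # Mark outbound HTTP/HTTPS and send only those marks via tun0:
        iptables -t mangle -A OUTPUT -p tcp --dport 80  -j MARK --set-mark 1
        iptables -t mangle -A OUTPUT -p tcp --dport 443 -j MARK --set-mark 1
        ip rule add fwmark 1 table 100
        ip route add default dev tun0 table 100
        # Marked packets need the tunnel's source address:
        iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE

    SSH (22) and MySQL (3306) keep using the main routing table, so the EC2 instance stays reachable and can still talk to RDS while the VPN is up.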


  • My PC suddenly doesn't detect the primary drive (SSD)

    - by smoth190
    My computer has been working fine for months, and it worked today, but tonight I went to start it up and found that my OCZ Vertex 2 isn't being found. When I turn on my computer, the loading screen gets stuck at "Detecting IDE drives...". After a while, it keeps going and lists the drives it finds. The first one in the list should be my Vertex 2, but it just says "None". The computer then gets stuck on "Loading operating system...", which is understandable because the drive with the OS is "gone".

    My first thought was drive failure, but every time drives have crashed on me, they were still detected -- they just didn't work. This drive is an SSD, it's pretty new, and I had no problems beforehand. I find it hard to believe it failed. I'm sure it's possible, but I hope this isn't the case. There has been nothing strange going on at all with my PC; it's been running perfectly until now. I was just about to do my monthly chkdsk and defrag today.

    I popped in my Windows 7 Home Premium disc and booted from it. When I launched the repair tool, it didn't list any operating systems (because the drive is 100% missing...). When I've had disks crash before, it still listed the OS; you just couldn't do anything with it. I tried to restore from an image, but I don't have any of those, either. I opened the command console and listed the drives with wmic logicaldisk get name. Only C: and D: came up. C: was my 1TB storage drive (luckily, all my stuff is here -- only the OS is on the SSD!) and D: was the disc drive. So I still had an MIA drive...

    The SSD didn't come with any driver disks, so I can't install drivers. If there's a way to do this from a CD I can burn with my other PC, please let me know. What the heck do I do? Although only the OS is on my SSD, a new SSD is expensive. I'll probably also have to buy a new copy of Windows (an upgrade would be nice, though...) because I've found it eats my registration key when my PC crashes (and my thousands of dollars of Adobe programs -- I'll be on the phone with tech support for a week to get those keys back). And I'll lose my registry, all my settings, all sorts of other stuff that I'll spend weeks restoring. My computer is a pain in the butt to take out and open up, so if I can't fix it, I'll try fiddling with the plug or putting the drive into a new computer, but not right now. Any help is greatly appreciated! The day they make crash-less drives will be the day I live without worry.


  • How to install Delorme StreetAtlas (any version) + GPS inside VirtualBox VM?

    - by hotei
    When I try to run the install program, I get a popup message that says the installer program is not a valid executable.

    Background: I want a GPS with maps on my laptop running Ubuntu 10.04 LTS. Unfortunately I can't find a decent native Linux GPS solution with 50-state US street-level coverage. I have VirtualBox VMs available for WinXP and Win7 (among others). The VMs work fine with Microsoft Streets and Trips (2010) and MapNGo 5 (a very! old Delorme product), but while both these products support GPS, they don't support the Earthmate LT-40 USB GPS I already have. I've got pretty much every Delorme Street Atlas they've released in the last decade, and none of them will install in a VM. Any help would be much appreciated.

    Clarification: I've installed the Delorme products from these CDs before, and the disks are fine - as long as the installation is done on a "physical" machine.

    Added: I've tried installing from an iso as well as the real CD. No difference in result (setup.exe is not a valid executable). The WinXP is SP2 (held back on purpose at this point - I'll snapshot and fork a later SP to test). The Win2K is SP6a. The Win7 (32-bit) VM has whatever updates came out last week. The USB setup is working at least to the point where the GPS device is active in the device list (has an x in the box). At this point that's not relevant, because the program that needs to read it can't even be installed.

    Added 9-19: Added wine as harrymc suggested. The initial result was no change. Here's wine's error message:

        The file '/media/Disk1/setup.exe' is not marked as executable. If this was downloaded or copied from an untrusted source, it may be dangerous to run. For more details, read about the executable bit.

    At first I thought the execute bit was the problem, but looking at several other Windows CDs I see that the execute bit is not set on their exe files (which install to a VM without error). Still, it was worth a shot, so I copied the StreetAtlas 9 DVD to my hard disk, changed the on-disk exe files to have the execute bit set, and tried to install again. This time the install via wine got me through the installation process. When I start the program it bombs immediately, so we haven't made much real progress so far. I much prefer the VM solution to wine, so I'm going back to that for now.

    To recap the VM situation, using an updated XP with SP3 and all recommended hotfixes:

    - StreetAtlas 2009 USA fails with "not marked as executable".
    - StreetAtlas 2007 USA fails with "not marked as executable".
    - StreetAtlas 9 (copyright 2001) fails with "not marked as executable".
    - StreetAtlas (copyright 1991) fails with "not marked as executable".
    - Delorme Topo 4 (copyright 2002) fails with "not marked as executable".

    Just about ready to give up. So I switched from the XP VM to the Win7 VM and tried StreetAtlas 2009 again. This time it installs, and the Earthmate USB GPS works. WTH? I feel like the monkey who just wrote a line of Shakespeare. I'm smiling because it worked, but I have no clue why. I'm awarding the bounty to harrymc because wine did give some useful insight into the problem, and a +1 to goyiux as thanks for helping.
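
    A hedged aside on the wine path, since the error message quoted above is about the executable bit: optical media and loop-mounted ISOs are often mounted noexec, which produces exactly that complaint regardless of the file itself. Mount point and options here are illustrative:

        mount | grep /media                        # look for 'noexec' among the options
        sudo mount -o remount,exec /media/Disk1    # allow execution from the disc
        wine /media/Disk1/setup.exe                # retry the installer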


  • How to install 'version.h' in Ubuntu?

    - by user252098
    Just now, I tried to install the Jungo WinDriver in Ubuntu 13.10, but I am puzzled by its manual's instructions for installing version.h:

        Install version.h:

        The file version.h is created when you first compile the Linux kernel source code. Some distributions provide a compiled kernel without the file version.h. Look under /usr/src/linux/include/linux to see whether you have this file. If you do not, follow these steps:

        1. Become super user: $ su
        2. Change directory to the Linux source directory: # cd /usr/src/linux
        3. Type: # make xconfig
        4. Save the configuration by choosing Save and Exit.
        5. Type: # make dep
        6. Exit super user mode: # exit

    But the shell says:

        warning: make dep is unnecessary now.

    Then I found that there is a version.h in /usr/src/linux-headers-3.11.0-12-generic, so I typed:

        /usr/src/windriver/redist# ./configure --with-kernel-source=/usr/src/linux-headers-3.11.0-12-generic

    But the WinDriver configure run fails:

        USE_KBUILD = yes
        checking for cpu architecture... x86_64
        checking for WinDriver root directory... /usr/src/WinDriver
        checking for linux kernel source... found at /usr/src/linux
        checking for lib directory... ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT)_32.so /usr/lib/$(SHARED_OBJECT).so; ln -s /usr/lib /usr/lib64; ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT).so /usr/lib64/$(SHARED_OBJECT).so
        checking which directories to include... -I/usr/src/linux/include
        checking linux kernel version... 3.11.10.6
        checking for modules installation directory... /lib/modules/3.11.0-12-generic/kernel/drivers/misc
        checking output directory... LINUX.3.11.0-12-generic.x86_64
        checking target... LINUX.3.11.0-12-generic.x86_64/windrvr6_usb.ko
        checking for regparm kernel option... find: `/usr/src/WinDriver/redist/.tmp_driver/.tmp_versions': No such file or directory
        0
        checking for modpost location... /usr/src/linux/scripts/mod/modpost
        configure.usb: creating ./config.status
        config.status: creating makefile.usb.kbuild
        checking for cpu architecture... x86_64
        checking for WinDriver root directory... /usr/src/WinDriver
        checking for linux kernel source... found at /usr/src/linux
        checking for lib directory... ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT)_32.so /usr/lib/$(SHARED_OBJECT).so; ln -s /usr/lib /usr/lib64; ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT).so /usr/lib64/$(SHARED_OBJECT).so
        checking which directories to include... -I/usr/src/linux/include
        checking linux kernel version... 3.11.10.6
        checking for modules installation directory... /lib/modules/3.11.0-12-generic/kernel/drivers/misc
        checking output directory... LINUX.3.11.0-12-generic.x86_64
        checking target... LINUX.3.11.0-12-generic.x86_64/windrvr6.ko
        checking for regparm kernel option... find: `/usr/src/WinDriver/redist/.tmp_driver/.tmp_versions': No such file or directory
        0
        checking for right linked object... windrvr_gcc_v3.a
        checking for modpost location... /usr/src/linux/scripts/mod/modpost
        configure.wd: creating ./config.status
        config.status: creating makefile.wd.kbuild

    What is the problem?
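
    A hedged workaround, not from WinDriver's own documentation: the configure output shows it still looking at /usr/src/linux, and on modern kernels version.h is generated into the headers package (at include/generated/uapi/linux/version.h since kernel 3.7) rather than living under include/linux. Pointing /usr/src/linux at the installed headers often satisfies build systems that hard-code that path:

        sudo apt-get install linux-headers-$(uname -r)
        sudo ln -sfn /usr/src/linux-headers-$(uname -r) /usr/src/linux
        # Confirm the generated header is where newer kernels put it:
        ls /usr/src/linux/include/generated/uapi/linux/version.h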

    Read the article

  • Clearing C#'s WebBrowser control's cookies for all sites WITHOUT clearing for IE itself

    - by Helgi Hrafn Gunnarsson
    Hail StackOverflow! The short version of what I'm trying to do is in the title. Here's the long version. I have a bit of a complex problem which I'm sure I will receive a lot of guesses as a response to. In order to keep the well-intended but unfortunately useless guesses to a minimum, let me first mention that the solution to this problem is not simple, so simple suggestions will unfortunately not help at all, even though I appreciate the effort. C#'s WebBrowser component is fundamentally IE itself, so solutions with any sort of caveat will almost certainly not work. I need to do exactly what I'm trying to do, and even a seemingly minor caveat will defeat the purpose completely. At the risk of sounding arrogant, I need assistance from someone who really has in-depth knowledge about C#'s WebBrowser and/or WinInet and/or how to communicate with Windows's underlying system from C#... or how to encapsulate C++ code in C#. That said, I don't expect anyone to do this for me, and I've found some promising hints which are explained later in this question.

    But first... what I'm trying to achieve is this. I have a Windows.Forms component which contains a WebBrowser control. This control needs to: clear ALL cookies for ALL websites; visit several websites, one after another, and record cookies and handle them correctly (this part works fine already, so I don't have any problems with it); rinse and repeat... theoretically forever.

    Now, here's the real problem. I need to clear all those cookies (for any and all sites), but only for the WebBrowser control itself and NOT the cookies which IE proper uses. What's fundamentally wrong with this approach is of course the fact that C#'s WebBrowser control is IE. But I'm a stubborn young man and I insist on it being possible, or else! ;)

    Here's where I'm stuck at the moment. It is quite simply impossible to clear all cookies for the WebBrowser control programmatically through C# alone. One must use DllImport and all the crazy stuff that comes with it. This chunk works fine for that purpose:

        [DllImport("wininet.dll", SetLastError = true)]
        private static extern bool InternetSetOption(IntPtr hInternet, int dwOption, IntPtr lpBuffer, int lpdwBufferLength);

    And then, in the function that actually does the clearing of the cookies:

        InternetSetOption(IntPtr.Zero, INTERNET_OPTION_END_BROWSER_SESSION, IntPtr.Zero, 0);

    Then all the cookies get cleared, and as such, I'm happy. The program works exactly as intended, aside from the fact that it also clears the cookies for IE proper, and I can't have that happen.

    From one fellow StackOverflower (if that's a word), Sheng Jiang proposed this for a different problem in a comment, but didn't elaborate further: "If you want to isolate your application's cookies you need to override the Cache directory registry setting via IDocHostUIHandler2::GetOverrideKeyPath". I've looked around the internet for IDocHostUIHandler2 and GetOverrideKeyPath, but I've got no idea how to use them from C# to isolate cookies to my WebBrowser control. My experience with the Windows registry is limited to RegEdit (so I understand that it's a tree structure with different data types, but that's about it... I have no in-depth knowledge of the registry's relationship with IE, for example).

    Here's what I dug up on MSDN:

    IDocHostUIHandler2 docs: http://msdn.microsoft.com/en-us/library/aa753275%28VS.85%29.aspx
    GetOverrideKeyPath docs: http://msdn.microsoft.com/en-us/library/aa753274%28VS.85%29.aspx

    I think I know roughly what these things do, I just don't know how to use them. So, I guess that's it! Any help is greatly appreciated.
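    One related wininet angle that might be worth a test - a hedged sketch of a different technique, not the GetOverrideKeyPath approach: INTERNET_OPTION_SUPPRESS_BEHAVIOR with INTERNET_SUPPRESS_COOKIE_PERSIST makes the calling process use session-only cookies, so nothing is ever written to (or needs clearing from) IE's persistent store. The constants below are the wininet.h values; whether this fully satisfies the "don't touch IE proper" requirement is an assumption to verify:

        using System;
        using System.Runtime.InteropServices;

        static class SessionCookies
        {
            // values from wininet.h
            private const int INTERNET_OPTION_SUPPRESS_BEHAVIOR = 81;
            private const int INTERNET_SUPPRESS_COOKIE_PERSIST = 3;

            [DllImport("wininet.dll", SetLastError = true)]
            private static extern bool InternetSetOption(
                IntPtr hInternet, int dwOption, ref int lpBuffer, int dwBufferLength);

            // Call once at startup, before the WebBrowser control issues any request;
            // cookies then live only for the lifetime of this process.
            public static void Enable()
            {
                int suppress = INTERNET_SUPPRESS_COOKIE_PERSIST;
                InternetSetOption(IntPtr.Zero, INTERNET_OPTION_SUPPRESS_BEHAVIOR,
                                  ref suppress, sizeof(int));
            }
        }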

    Read the article

  • protobuf-net: incorrect wire-type exception deserializing Guid properties

    - by Paul Smith
    I'm having issues deserializing certain Guid properties of ORM-generated entities using protobuf-net. Here's a simplified example of the code (it reproduces most elements of the scenario, but doesn't reproduce the behavior; I can't expose our internal entities, so I'm looking for clues to account for the exception). Say I have a class, Account, with a read-only Guid AccountID and a read-write string AccountName. I serialize & immediately deserialize a clone. Deserializing throws an "Incorrect wire-type deserializing Guid" exception. Here's example usage...

        Account acct = new Account() { AccountName = "Bob's Checking" };
        Debug.WriteLine(acct.AccountID.ToString());
        using (MemoryStream ms = new MemoryStream())
        {
            ProtoBuf.Serializer.Serialize<Account>(ms, acct);
            Debug.WriteLine(Encoding.UTF8.GetString(ms.GetBuffer()));
            ms.Position = 0;
            Account clone = ProtoBuf.Serializer.Deserialize<Account>(ms);
            Debug.WriteLine(clone.AccountID.ToString());
        }

    And here's an example ORM'd class (simplified, but it demonstrates the relevant semantics I can think of). It uses a shell game to deserialize read-only properties by exposing the backing field ("can't write" essentially becomes "shouldn't write," but we can scan code for instances of assigning to these fields, so the hack works for our purposes). Again, this does not reproduce the exception behavior; I'm looking for clues as to what could:

        [DataContract()]
        [Serializable()]
        public partial class Account
        {
            public Account()
            {
                _accountID = Guid.NewGuid();
            }

            [XmlAttribute("AccountID")]
            [DataMember(Name = "AccountID", Order = 1)]
            public Guid _accountID;

            /// <summary>
            /// A read-only property; XML, JSON and DataContract serializers all seem
            /// to correctly recognize the public backing field when deserializing:
            /// </summary>
            [IgnoreDataMember]
            [XmlIgnore]
            public Guid AccountID
            {
                get { return this._accountID; }
            }

            [IgnoreDataMember]
            protected string _accountName;

            [DataMember(Name = "AccountName", Order = 2)]
            [XmlAttribute]
            public string AccountName
            {
                get { return this._accountName; }
                set { this._accountName = value; }
            }
        }

    XML, JSON and DataContract serializers all seem to serialize / deserialize these object graphs just fine, so the attribute arrangement basically works. I've tried protobuf-net with lists vs. single instances, different prefix styles, etc., but still always get the "incorrect wire-type ... Guid" exception when deserializing. So the specific question is: is there any known explanation / workaround for this? I'm at a loss trying to trace what circumstances (in the real code but not the example) could be causing it. We hope not to have to create a protobuf dependency directly in the entity layer; if that's the case, we'll probably create proxy DTO entities with all public properties having protobuf attributes. (This is a subjective issue I have with all declarative serialization models; it's a ubiquitous pattern & I understand why it arose, but IMO, if we can put a man on the moon, then "normal" should be to have objects and serialization contracts decoupled. ;-) ) Thanks!
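    For what it's worth, one hedged workaround sketch (an illustration, not the author's code): route the Guid through a string shadow property so the wire format is unambiguous, and keep the real field out of the contract. This assumes protobuf-net honors non-public members carrying [DataMember], which is worth verifying against the version in use:

        [DataContract]
        public partial class Account
        {
            [IgnoreDataMember]
            public Guid _accountID = Guid.NewGuid();

            // hypothetical shadow property: serialize the Guid as text
            [DataMember(Name = "AccountID", Order = 1)]
            private string AccountIDWire
            {
                get { return _accountID.ToString("N"); }
                set { _accountID = new Guid(value); }
            }

            [DataMember(Name = "AccountName", Order = 2)]
            public string AccountName { get; set; }
        }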

    Read the article

  • How to Symbolicate iPhone App Crash Reports

    - by bluej3
    Hello~ I retrieved the crash reports from iTunes Connect, and I referenced this site: http://webcache.googleusercontent.com/search?q=cache:MmxwdXObZLMJ:www.anoshkin.net/blog/2008/09/09/iphone-crash-logs/+iphone+crash+debig&cd=2&hl=en&ct=clnk

    I tried:

        $ symbolicatecrash report.crash MobileLines.app.dSYM report-with-symbols.crash
        Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/Frameworks/IOKit.framework/Versions/A/IOKit
        Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/PrivateFrameworks/WebCore.framework/WebCore
        Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/Frameworks/Foundation.framework/Foundation
        Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/usr/lib/libSystem.B.dylib
        Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/PrivateFrameworks/GraphicsServices.framework/GraphicsServices
        Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/Frameworks/UIKit.framework/UIKit
        Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/Frameworks/OpenGLES.framework/MBXGLEngine.bundle/MBXGLEngine
        Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/Frameworks/AudioToolbox.framework/AudioToolbox
        Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/Frameworks/CoreFoundation.framework/CoreFoundation

    BUT... I got no result, only the error messages above. Notes: the build output is located in "build/Distribution-iphones", and the "MYGAME.app" file and "MYGAME.app.dSYM" file are located in the same directory. How can I solve this problem? Please help me :)

    * Crash log (crash at thread 2):

        Incident Identifier: 95230C2E-CD83-46BF-8DAE-F38BCD46B910
        Process:         MYGAMELite [303]
        Path:            /var/mobile/Applications/4FB79BEC-2BF0-438B-82A8-C302CD52A85C/MYGAMELite.app/MYGAMELite
        Identifier:      MYGAMELite
        Version:         ??? (???)
        Code Type:       ARM (Native)
        Parent Process:  launchd [1]
        Date/Time:       2010-06-03 11:43:52.875 +0800
        OS Version:      iPhone OS 3.1.2 (7D11)
        Report Version:  104
        Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
        Exception Codes: KERN_INVALID_ADDRESS at 0x03e3a002
        Crashed Thread:  2

        Thread 2 Crashed:
        0   AudioToolbox    0x330d708c AU3DMixerEmbedded::SumInput16(unsigned long, AudioBufferList const&, AudioBufferList const&, unsigned long, float, unsigned long)
        1   AudioToolbox    0x330d89a0 AU3DMixerEmbedded::Render(unsigned long&, AudioTimeStamp const&, unsigned long)
        2   AudioToolbox    0x32fe6bb8 AUBase::DoRender(unsigned long&, AudioTimeStamp const&, unsigned long, unsigned long, AudioBufferList&)
        3   AudioToolbox    0x32fe6504 Render
        4   AudioToolbox    0x330160b8 AUInputElement::PullInput(unsigned long&, AudioTimeStamp const&, unsigned long, unsigned long)
        5   AudioToolbox    0x33023fa8 AUInputFormatConverter2::InputProc(OpaqueAudioConverter*, unsigned long*, AudioBufferList*, AudioStreamPacketDescription*, void*)
        6   AudioToolbox    0x32fe4b60 AudioConverterChain::CallInputProc(unsigned long)
        7   AudioToolbox    0x32fe4a5c AudioConverterChain::FillBufferFromInputProc(unsigned long*, CABufferList*)
        8   AudioToolbox    0x32fe4790 BufferedAudioConverter::GetInputBytes(unsigned long, unsigned long&, CABufferList const*&)
        9   AudioToolbox    0x33023e30 CBRConverter::RenderOutput(CABufferList*, unsigned long, unsigned long&, AudioStreamPacketDescription*)
        10  AudioToolbox    0x32fe4284 BufferedAudioConverter::FillBuffer(unsigned long&, AudioBufferList&, AudioStreamPacketDescription*)
        11  AudioToolbox    0x32fe44a4 AudioConverterChain::RenderOutput(CABufferList*, unsigned long, unsigned long&, AudioStreamPacketDescription*)
        12  AudioToolbox    0x32fe4284 BufferedAudioConverter::FillBuffer(unsigned long&, AudioBufferList&, AudioStreamPacketDescription*)
        13  AudioToolbox    0x32fe3f10 AudioConverterFillComplexBuffer
        14  AudioToolbox    0x33023844 AUConverterBase::RenderBus(unsigned long&, AudioTimeStamp const&, unsigned long, unsigned long)
        15  AudioToolbox    0x330ce928 AURemoteIO::RenderBus(unsigned long&, AudioTimeStamp const&, unsigned long, unsigned long)
        16  AudioToolbox    0x32fe6bb8 AUBase::DoRender(unsigned long&, AudioTimeStamp const&, unsigned long, unsigned long, AudioBufferList&)
        17  AudioToolbox    0x330cf308 AURemoteIO::PerformIO(int, unsigned int, unsigned int, AQTimeStamp const&, AQTimeStamp const&)
        18  AudioToolbox    0x330cf4cc AURIOCallbackReceiver_PerformIOSync
        19  AudioToolbox    0x330c76fc _XPerformIOSync
        20  AudioToolbox    0x330181d8 mshMIGPerform
        21  AudioToolbox    0x3309cec8 MSHMIGDispatchMessage
        22  AudioToolbox    0x330d48d4 AURemoteIO::IOThread::Entry(void*)
        23  AudioToolbox    0x32fc9f20 CAPThread::Entry(CAPThread*)
        24  libSystem.B.dylib 0x30b5b7b0 _pthread_body
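    A hedged troubleshooting sketch for the symbolication itself (the "Error in symbol file" lines usually mean the device-support symbols or the dSYM don't match the binary): first confirm the dSYM really belongs to the shipped build by comparing UUIDs, then run symbolicatecrash with both files side by side. The /Developer path is an assumption for an Xcode 3-era install:

        # the two UUIDs must match, or symbolication cannot work
        dwarfdump --uuid MYGAME.app/MYGAME
        dwarfdump --uuid MYGAME.app.dSYM

        # point the tool at the developer tools and symbolicate
        export DEVELOPER_DIR=/Developer
        symbolicatecrash report.crash MYGAME.app.dSYM > report-with-symbols.crash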

    Read the article

  • Best practices for using the Entity Framework with WPF DataBinding

    - by Ken Smith
    I'm in the process of building my first real WPF application (i.e., the first intended to be used by someone besides me), and I'm still wrapping my head around the best way to do things in WPF. It's a fairly simple data access application using the still-fairly-new Entity Framework, but I haven't been able to find a lot of guidance online for the best way to use these two technologies (WPF and EF) together. So I thought I'd toss out how I'm approaching it, and see if anyone has any better suggestions. I'm using the Entity Framework with SQL Server 2008. The EF strikes me as both much more complicated than it needs to be, and not yet mature, but Linq-to-SQL is apparently dead, so I might as well use the technology that MS seems to be focusing on. This is a simple application, so I haven't (yet) seen fit to build a separate data layer around it. When I want to get at data, I use fairly simple Linq-to-Entity queries, usually straight from my code-behind, e.g.: var families = from family in entities.Family.Include("Person") orderby family.PrimaryLastName, family.Tag select family; Linq-to-Entity queries return an IOrderedQueryable result, which doesn't automatically reflect changes in the underlying data, e.g., if I add a new record via code to the entity data model, the existence of this new record is not automatically reflected in the various controls referencing the Linq query. Consequently, I'm throwing the results of these queries into an ObservableCollection, to capture underlying data changes: familyOC = new ObservableCollection<Family>(families.ToList()); I then map the ObservableCollection to a CollectionViewSource, so that I can get filtering, sorting, etc., without having to return to the database. familyCVS.Source = familyOC; familyCVS.View.Filter = new Predicate<object>(ApplyFamilyFilter); familyCVS.View.SortDescriptions.Add(new System.ComponentModel.SortDescription("PrimaryLastName", System.ComponentModel.ListSortDirection.Ascending)); familyCVS.View.SortDescriptions.Add(new System.ComponentModel.SortDescription("Tag", System.ComponentModel.ListSortDirection.Ascending)); I then bind the various controls and what-not to that CollectionViewSource: <ListBox DockPanel.Dock="Bottom" Margin="5,5,5,5" Name="familyList" ItemsSource="{Binding Source={StaticResource familyCVS}, Path=., Mode=TwoWay}" IsSynchronizedWithCurrentItem="True" ItemTemplate="{StaticResource familyTemplate}" SelectionChanged="familyList_SelectionChanged" /> When I need to add or delete records/objects, I manually do so from both the entity data model, and the ObservableCollection: private void DeletePerson(Person person) { entities.DeleteObject(person); entities.SaveChanges(); personOC.Remove(person); } I'm generally using StackPanel and DockPanel controls to position elements. Sometimes I'll use a Grid, but it seems hard to maintain: if you want to add a new row to the top of your grid, you have to touch every control directly hosted by the grid to tell it to use a new line. Uggh. (Microsoft has never really seemed to get the DRY concept.) I almost never use the VS WPF designer to add, modify or position controls. The WPF designer that comes with VS is sort of vaguely helpful to see what your form is going to look like, but even then, well, not really, especially if you're using data templates that aren't binding to data that's available at design time. If I need to edit my XAML, I take it like a man and do it manually. Most of my real code is in C# rather than XAML. 
As I've mentioned elsewhere, entirely aside from the fact that I'm not yet used to "thinking" in it, XAML strikes me as a clunky, ugly language that also happens to come with poor designer and intellisense support, and that can't be debugged. Uggh. Consequently, whenever I can see clearly how to do something in C# code-behind that I can't easily see how to do in XAML, I do it in C#, with no apologies. There's been plenty written about how it's a good practice to almost never use code-behind in a WPF page (say, for event-handling), but so far at least, that makes no sense to me whatsoever. Why should I do something in an ugly, clunky language with god-awful syntax, an astonishingly bad editor, and virtually no type safety, when I can use a nice, clean language like C# that has a world-class editor, near-perfect intellisense, and unparalleled type safety? So that's where I'm at. Any suggestions? Am I missing any big parts of this? Anything that I should really think about doing differently?
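    One small gap in the listing above: the ApplyFamilyFilter predicate is wired up to the CollectionViewSource but never shown. A hedged sketch of what such a predicate typically looks like (the filterText field is an assumed illustration, not the author's code):

        // filter callback registered on familyCVS.View.Filter
        private bool ApplyFamilyFilter(object item)
        {
            var family = item as Family;
            if (family == null) return false;

            // filterText: hypothetical field holding the current search box text
            return string.IsNullOrEmpty(filterText)
                || family.PrimaryLastName.StartsWith(filterText, StringComparison.OrdinalIgnoreCase);
        }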

    Read the article

  • Howto - Running Redmine on Mongrel as a service on Windows

    - by Achilles
    I use Redmine on Mongrel as a project manager, and I use a batch file (start-redmine.bat) to start Redmine in Mongrel. There are two issues with my setup:

    1. A running IIS on the server occupies the HTTP port (80).
    2. start-redmine.bat must be periodically checked to see whether it has stopped after a restart caused by the Windows Update service.

    For the first issue I have no choice but to run Mongrel on a port like 3000, and for the second issue I have to create a Windows service that runs automatically in the background when Windows starts; and here comes the trouble! There are at least three ways to run Redmine as a service that I'm aware of; none of them is satisfying performance-wise. You may read about them at http://stackoverflow.com/questions/877943/how-to-configure-a-rails-app-redmine-to-run-as-a-service-on-windows - I tried them all. The easiest way to set up such a service is the mongrel_service approach; in three lines of command you're done, but the performance is significantly lower than running that batch file...

    Now, I want to show you my approach. First, suppose we have Ruby installed into c:\ruby and we have issued the command gem install mongrel to get the Mongrel gem installed into c:\ruby\bin. Also suppose we have installed Redmine into a folder like c:\redmine, and we have Ruby's path (i.e. c:\ruby\bin) in our PATH environment variable.

    Now download and install the Windows NT Resource Kit Tools from the Microsoft website. Open the command-line tool that comes with the Resource Kit (from the Start menu). Use instsrv to install a dummy service called Redmine using the following command:

        "[path-to-instsrv.exe]\instsrv" Redmine "[path-to-srvany.exe]\srvany.exe"

    In my case (which is the default case) it was something like this:

        "C:\Program Files\Windows Resource Kits\Tools\instsrv" Redmine "C:\Program Files\Windows Resource Kits\Tools\srvany.exe"

    Now create the batch file. Open Notepad, paste these instructions into it, and save it as "c:\redmine\start-redmine.bat":

        @echo off
        cd c:\redmine\
        mongrel_rails start -a 0.0.0.0 -p 3000 -e production

    Now we need to configure the dummy service we created before. WATCH OUT WHAT YOU'RE DOING FROM HERE ON, OR YOU MAY CORRUPT YOUR WINDOWS. To configure the service, open the Windows registry editor (Start - Run - regedit) and navigate to this node:

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Redmine

    Right-click the "Redmine" node and, using the context menu, create a new key called Parameters (New - Key). Right-click "Parameters" and create a String Value called Application. Do this again and create another String Value called AppParameters. Now double-click "Application" and put cmd.exe into the "Value data" section. Then double-click "AppParameters" and put /C "C:\redmine\start-redmine.bat" into the "Value data" section. We're done! Issue this command to run Redmine on Mongrel as a service:

        net start Redmine
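    If you'd rather not click through regedit by hand, the same Parameters values can be imported as a .reg file - a sketch equivalent to the steps above (same service name and batch path; adjust if yours differ):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Redmine\Parameters]
        "Application"="cmd.exe"
        "AppParameters"="/C \"C:\\redmine\\start-redmine.bat\""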

    Read the article

  • Snow Leopard upgrade -> reinstalling sqlite3-ruby gem problem

    - by Carl Tessler
    Hi all, I've got Ruby 1.8.7 (natively compiled), Rails 2.3.4, OS X 10.6.2, and sqlite3-ruby. The error I'm getting when accessing the Rails app is:

        NameError: uninitialized constant SQLite3::Driver::Native::Driver::API

    History: I upgraded to Snow Leopard by migrating my apps with a FireWire cable from my old MacBook to the new one. Everything was running perfectly for months, but yesterday I needed to install watir, which depends on rb-appscript, which didn't build due to a "wrong architecture" error in libsqlite3.dylib. I figured the build was made on the old machine, so I wanted to rebuild sqlite3-ruby:

        $ sudo gem uninstall sqlite3-ruby
        $ sudo gem install sqlite3-ruby
        Building native extensions. This could take a while...
        ERROR: Error installing sqlite3-ruby: ERROR: Failed to build gem native extension.
        /usr/local/bin/ruby extconf.rb
        checking for fdatasync() in -lrt... no
        checking for sqlite3.h... yes
        checking for sqlite3_open() in -lsqlite3... no
        * extconf.rb failed *
        Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options.

    It seems like the sqlite3 libs are not working properly. I tried installing the MacPorts sqlite3 (sudo port install sqlite3) and using that instead, but with the same result... so I rebuilt sqlite3 from scratch: download, configure, make, make install. After that the gem builds perfectly, but it doesn't work in Rails, giving the error at the top of this article. I'm not really sure where to go from here, because I've tried the following: rebuilt sqlite3 from the newest source (http://www.sqlite.org/download.html); reinstalled sqlite3-ruby (sudo gem uninstall sqlite3-ruby && sudo gem install sqlite3-ruby); used sqlite3 from MacPorts (sudo port install sqlite3 && sudo gem install sqlite3-ruby); reinstalled Rails (sudo gem install rails sqlite3-ruby) and updated environment.rb to Rails 2.3.5.

    To no avail; I still get this error:

        NameError: uninitialized constant SQLite3::Driver::Native::Driver::API
        from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:105:in `const_missing'
        from /usr/local/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5/lib/sqlite3/driver/native/driver.rb:76:in `open'
        from /usr/local/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5/lib/sqlite3/database.rb:76:in `initialize'

    By the way, I have Xcode installed from the Snow Leopard CD. What can I do to solve the problem?
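    A hedged sketch of the usual Snow Leopard fix: the 10.6 toolchain defaults to x86_64, so every layer - libsqlite3 and the gem's native extension - must agree on the architecture. The exact bundle path below is an assumption; adjust to your install:

        # force a 64-bit build of the gem's native extension
        sudo env ARCHFLAGS="-arch x86_64" gem install sqlite3-ruby

        # then check which architectures each piece actually contains
        file /usr/local/lib/libsqlite3.dylib
        file /usr/local/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5/lib/sqlite3_api.bundle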

    Read the article

  • Speeding up a search (.NET 4.0)

    - by user231465
    Wondering if I can speed up a search. I need to build a piece of functionality that will be used by many UI screens. The one I've got works, but I need to make sure I'm implementing a fast algorithm, if you like. It's like an incremental search: the user types a word to search for, e.g. searchFor = "Guinea" with nextLetter = ' ', and it looks in the list and returns two records, "Guinea" and "Guinea Bissau"; or the user types searchFor = "Gu" with nextLetter = 'i', and it returns three results. This is the function, but I would like to speed it up. Is there a pattern for this kind of search?

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Program
        {
            static void Main()
            {
                // Find all countries that begin with string + a possible letter added to it
                //const string searchFor = "Guinea";
                //const char nextLetter = ' '; // returns 2 results
                const string searchFor = "Gu";
                const char nextLetter = 'i';
                List<string> result = FindPossibleMatches(searchFor, nextLetter);
                result.ForEach(x => Console.WriteLine(x)); // returns 3 results
                Console.Read();
            }

            /// <summary>
            /// Find all possible matches
            /// </summary>
            /// <param name="searchFor">string to search for</param>
            /// <param name="nextLetter">pretend user has just typed a letter</param>
            /// <returns></returns>
            public static List<string> FindPossibleMatches(string searchFor, char nextLetter)
            {
                var hashedCountryList = new HashSet<string>(CountriesList());
                var result = new List<string>();
                IEnumerable<string> tempCountryList = hashedCountryList.Where(x => x.StartsWith(searchFor));
                foreach (string item in tempCountryList)
                {
                    string tempSearchItem;
                    if (nextLetter == ' ')
                    {
                        tempSearchItem = searchFor;
                    }
                    else
                    {
                        tempSearchItem = searchFor + nextLetter;
                    }
                    if (item.StartsWith(tempSearchItem))
                    {
                        result.Add(item);
                    }
                }
                return result;
            }

            /// <summary>
            /// Returns list of countries.
            /// </summary>
            public static string[] CountriesList()
            {
                return new[] {
                    "Afghanistan", "Albania", "Algeria", "American Samoa", "Andorra", "Angola", "Anguilla",
                    "Antarctica", "Antigua And Barbuda", "Argentina", "Armenia", "Aruba", "Australia", "Austria",
                    "Azerbaijan", "Bahamas", "Bahrain", "Bangladesh", "Barbados", "Belarus", "Belgium", "Belize",
                    "Benin", "Bermuda", "Bhutan", "Bolivia", "Bosnia Hercegovina", "Botswana", "Bouvet Island",
                    "Brazil", "Brunei Darussalam", "Bulgaria", "Burkina Faso", "Burundi", "Byelorussian SSR",
                    "Cambodia", "Cameroon", "Canada", "Cape Verde", "Cayman Islands", "Central African Republic",
                    "Chad", "Chile", "China", "Christmas Island", "Cocos (Keeling) Islands", "Colombia", "Comoros",
                    "Congo", "Cook Islands", "Costa Rica", "Cote D'Ivoire", "Croatia", "Cuba", "Cyprus",
                    "Czech Republic", "Czechoslovakia", "Denmark", "Djibouti", "Dominica", "Dominican Republic",
                    "East Timor", "Ecuador", "Egypt", "El Salvador", "England", "Equatorial Guinea", "Eritrea",
                    "Estonia", "Ethiopia", "Falkland Islands", "Faroe Islands", "Fiji", "Finland", "France",
                    "Gabon", "Gambia", "Georgia", "Germany", "Ghana", "Gibraltar", "Great Britain", "Greece",
                    "Greenland", "Grenada", "Guadeloupe", "Guam", "Guatemela", "Guernsey", "Guiana", "Guinea",
                    "Guinea Bissau", "Guyana", "Haiti", "Heard Islands", "Honduras", "Hong Kong", "Hungary",
                    "Iceland", "India", "Indonesia", "Iran", "Iraq", "Ireland", "Isle Of Man", "Israel", "Italy",
                    "Jamaica", "Japan", "Jersey", "Jordan", "Kazakhstan", "Kenya", "Kiribati", "Korea, South",
                    "Korea, North", "Kuwait", "Kyrgyzstan", "Lao People's Dem. Rep.", "Latvia", "Lebanon",
                    "Lesotho", "Liberia", "Libya", "Liechtenstein", "Lithuania", "Luxembourg", "Macau",
                    "Macedonia", "Madagascar", "Malawi", "Malaysia", "Maldives", "Mali", "Malta",
                    "Mariana Islands", "Marshall Islands", "Martinique", "Mauritania", "Mauritius", "Mayotte",
                    "Mexico", "Micronesia", "Moldova", "Monaco", "Mongolia", "Montserrat", "Morocco",
                    "Mozambique", "Myanmar", "Namibia", "Nauru", "Nepal", "Netherlands", "Netherlands Antilles",
                    "Neutral Zone", "New Caledonia", "New Zealand", "Nicaragua", "Niger", "Nigeria", "Niue",
                    "Norfolk Island", "Northern Ireland", "Norway", "Oman", "Pakistan", "Palau", "Panama",
                    "Papua New Guinea", "Paraguay", "Peru", "Philippines", "Pitcairn", "Poland", "Polynesia",
                    "Portugal", "Puerto Rico", "Qatar", "Reunion", "Romania", "Russian Federation", "Rwanda",
                    "Saint Helena", "Saint Kitts", "Saint Lucia", "Saint Pierre", "Saint Vincent", "Samoa",
                    "San Marino", "Sao Tome and Principe", "Saudi Arabia", "Scotland", "Senegal", "Seychelles",
                    "Sierra Leone", "Singapore", "Slovakia", "Slovenia", "Solomon Islands", "Somalia",
                    "South Africa", "South Georgia", "Spain", "Sri Lanka", "Sudan", "Suriname", "Svalbard",
                    "Swaziland", "Sweden", "Switzerland", "Syrian Arab Republic", "Taiwan", "Tajikista",
                    "Tanzania", "Thailand", "Togo", "Tokelau", "Tonga", "Trinidad and Tobago", "Tunisia",
                    "Turkey", "Turkmenistan", "Turks and Caicos Islands", "Tuvalu", "Uganda", "Ukraine",
                    "United Arab Emirates", "United Kingdom", "United States", "Uruguay", "Uzbekistan",
                    "Vanuatu", "Vatican City State", "Venezuela", "Vietnam", "Virgin Islands", "Wales",
                    "Western Sahara", "Yemen", "Yugoslavia", "Zaire", "Zambia", "Zimbabwe"
                };
            }
        }

    Any suggestions? Thanks
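    For comparison, a hedged simplification sketch (not a benchmark-backed answer): the loop above tests StartsWith twice and rebuilds a HashSet on every call, but a single pass over the raw array with the combined prefix does the same work. The TrimEnd call reproduces the nextLetter == ' ' convention from the original:

        public static List<string> FindPossibleMatchesFast(string searchFor, char nextLetter)
        {
            // combined prefix; a trailing space means "no extra letter typed yet"
            string prefix = (searchFor + nextLetter).TrimEnd();
            return CountriesList().Where(c => c.StartsWith(prefix)).ToList();
        }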

    Read the article

  • Exception when deploying a JSR 286 portlet into WebLogic+WebCenter 11g

    - by Rambaldi
    I get the following exception when deploying a JSR 286 portlet into Oracle WebLogic Server 11g (to deploy it later in Oracle WebCenter 11g):

        <19-ene-2010 13H32' CET> <Error> <oracle.portlet.server.containerimpl.PortletApplicationImpl> <BEA-000000> <Error al procesar el archivo "/WEB-INF/portlet.xml" en la línea 6 columna 68. org.xml.sax.SAXParseException: cvc-elt.1: Cannot find the declaration of element 'portlet-app'

    The error message is in Spanish; it means "Error processing the file '/WEB-INF/portlet.xml' at line 6, column 68". The portlet.xml of my portlet seems to be correct and I've deployed it on other portal servers, so I don't understand the error message. This is the portlet.xml of my portlet (the Eclipse XML validator said it was valid XML):

        <?xml version="1.0" encoding="UTF-8"?>
        <portlet-app version="2.0"
            xmlns="http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd"
            xmlns:dnd="http://www.denodo.com/widget/portlet/portletjsr286">
            <portlet>
                <description>Test Inter Portlet Communication (JSR286)</description>
                <portlet-name>Test IPC</portlet-name>
                <display-name>Test IPC</display-name>
                <portlet-class>com.denodo.ipc.TestIPCPortlet</portlet-class>
                <supports>
                    <mime-type>text/html</mime-type>
                    <portlet-mode>VIEW</portlet-mode>
                </supports>
                <supported-locale>en</supported-locale>
                <resource-bundle>PortletMessages</resource-bundle>
                <portlet-info>
                    <title>Test IPC</title>
                    <short-title>Test IPC</short-title>
                    <keywords>Test IPC,Denodo</keywords>
                </portlet-info>
            </portlet>
        </portlet-app>

    How I deploy my portlet: I convert my portlet into a WSRP portlet by executing java -jar wsrp-predeploy.jar source EAR target EAR (as explained in http://download.oracle.com/docs/cd/E12839_01/webcenter.1111/e12405/wcadm_portlet_prod.htm#CHDECJHI), then I try to deploy it into WebLogic with the WebLogic Console, and I get this exception.

    My environment: WebCenter Suite (11.1.1.2.0) + WebLogic Server (10.3.2) downloaded from oracle.com, default configuration. OS: Windows XP SP3. Thanks in advance for your time.
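    As a sanity check independent of WebLogic, a hedged sketch: validate the descriptor locally against the JSR 286 schema before deploying (for portlet 2.0 the namespace URI doubles as the schema URL, which is why the same URL appears twice in xsi:schemaLocation):

        # fetch the portlet 2.0 schema and validate the descriptor offline
        wget http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd
        xmllint --noout --schema portlet-app_2_0.xsd WEB-INF/portlet.xml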

    Read the article

  • Mouse wheel not scrolling in JDialog but working in JFrame

    - by Iulian Serbanoiu
    Hello, I'm facing a frustrating issue. I have an application where the scroll wheel doesn't work in a JDialog window (but works in a JFrame). Here's the code:

        import javax.swing.*;
        import java.awt.event.*;

        public class Failtest extends JFrame {
            public static void main(String[] args) {
                new Failtest();
            }

            public Failtest() {
                super();
                setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
                setTitle("FRAME");
                JScrollPane sp1 = new JScrollPane(getNewList());
                add(sp1);
                setSize(150, 150);
                setVisible(true);

                JDialog d = new JDialog(this, false);          // NOT WORKING
                //JDialog d = new JDialog((JFrame)null, false);  // NOT WORKING
                //JDialog d = new JDialog((JDialog)null, false); // WORKING - WHY?
                d.setTitle("DIALOG");
                d.setDefaultCloseOperation(JDialog.DISPOSE_ON_CLOSE);
                JScrollPane sp = new JScrollPane(getNewList());
                d.add(sp);
                d.setSize(150, 150);
                d.setVisible(true);
            }

            public JList getNewList() {
                String objs[] = new String[30];
                for (int i = 0; i < objs.length; i++) {
                    objs[i] = "Item " + i;
                }
                JList l = new JList(objs);
                return l;
            }
        }

    I found a solution which is present as a comment in the Java code - the constructor receiving a (JDialog)null parameter. Can someone enlighten me? My opinion is that this is a Java bug. Tested on Windows XP SP3 with one JDK and two JREs:

        D:\Program Files\Java\jdk1.6.0_17\bin>javac -version
        javac 1.6.0_17

        D:\Program Files\Java\jdk1.6.0_17\bin>java -version
        java version "1.6.0_17"
        Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
        Java HotSpot(TM) Client VM (build 14.3-b01, mixed mode, sharing)

        D:\Program Files\Java\jdk1.6.0_17\bin>cd ..
        D:\Program Files\Java\jdk1.6.0_17>java -version
        java version "1.6.0_18"
        Java(TM) SE Runtime Environment (build 1.6.0_18-b07)
        Java HotSpot(TM) Client VM (build 16.0-b13, mixed mode, sharing)

    Thank you in advance, Iulian Serbanoiu

    PS: The problem is not new - the code is taken from a forum (here) where this problem was also mentioned - but no solutions to it (yet).

    LATER EDIT: The problem persists with jre/jdk 1.6.0_10 and 1.6.0_16 also.

    LATER EDIT 2: Back home, I tested on Linux (Ubuntu - Lucid Lynx) - both with OpenJDK and sun-java from the distribution repo - and it works (I used the .class file compiled on Windows)!!! So I believe I'm facing a JRE bug that happens on some Windows configurations.
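    Not a root-cause fix, but a hedged workaround sketch in case the (JDialog)null owner is unacceptable: forward wheel events to the dialog's scroll pane yourself via a global AWTEventListener. Untested against the exact JRE builds listed above; treat it as an experiment, not a confirmed fix:

        import java.awt.*;
        import java.awt.event.*;
        import javax.swing.*;

        public class WheelForwarder {
            // Route wheel events that occur inside dialog d to scroll pane sp.
            public static void install(final JDialog d, final JScrollPane sp) {
                Toolkit.getDefaultToolkit().addAWTEventListener(new AWTEventListener() {
                    public void eventDispatched(AWTEvent e) {
                        if (!(e instanceof MouseWheelEvent)) return;
                        Component src = (Component) e.getSource();
                        if (SwingUtilities.getWindowAncestor(src) != d) return;
                        // re-target the event at the scroll pane
                        sp.dispatchEvent(SwingUtilities.convertMouseEvent(
                                src, (MouseWheelEvent) e, sp));
                    }
                }, AWTEvent.MOUSE_WHEEL_EVENT_MASK);
            }
        }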

    Read the article

< Previous Page | 238 239 240 241 242 243 244 245 246 247 248 249  | Next Page >