Search Results

Search found 9051 results on 363 pages for 'apply templates'.


  • Windows: what is the difference between DEP always on and DEP opt-out with no exceptions?

    - by Peter Mortensen
    What is the difference between DEP always on ("/NoExecute=AlwaysOn" in boot.ini) and DEP opt-out ("/NoExecute=OptOut" in boot.ini) with no exceptions? "No exceptions" means an empty list of programs for which DEP does not apply. DEP = Data Execution Prevention (hardware).

    One would expect the two to work the same way, but it makes a difference for some applications, e.g. for all versions of UltraEdit 14 (14.2). It crashes at startup with DEP always on, at least on Microsoft Windows XP Professional x64 Edition. (2010-03-11: this problem has been fixed with UltraEdit 15.2 and later.)

    Update 1: I think this difference is caused by the backdoors that Microsoft has put into hardware DEP for OptOut, according to Fabrice Roux (see below). In the case of IrfanView, for which Steve Gibson observed the same difference as I did for UltraEdit (see below), the difference is caused by a non-DEP-aware EXE packer (ASPack) that Microsoft coded a backdoor for.

    Is there a difference between Windows XP, Windows Vista and Windows 7? Is there a difference between 32-bit and 64-bit versions of Windows?

    Sources:

    From [http://blog.fabriceroux.com/index.php/2007/02/26/hardware_dep_has_a_backdoor?blog=1], "Hardware DEP has a backdoor" by Fabrice Roux, 2007-02-26: "IrfanView was not using any trick to evade DEP ... Microsoft just coded a backdoor used only in OPTOUT. Basically Microsoft checks the executable header for a section matching one of the 3 strings. If one of these strings is found, DEP will be turned OFF for this application by Windows. ... 'aspack', 'pcle', 'sforce'"

    From [http://www.grc.com/sn/sn-078.htm], by Steve Gibson: "I can’t find any documentation on Microsoft’s site anywhere, because we’re seeing a difference between always-on and opt-out. That is, you would imagine that always-on mode would be the same as opting out if you weren’t having any opt-out programs. It turns out it’s not the case. For example ... the IrfanView file viewer ... runs fine in opt-out mode, even if it has not been opted out. But it won’t launch, Windows blocks it from launching ... in always-on mode."

    From [http://www.grc.com/sn/sn-083.htm], by Steve Gibson: "... IrfanView ... won’t run with DEP turned on. It’s because it uses an EXE packer, an executable compression program called ASPack. And it makes sense that it wouldn’t because naturally an executable compressor has got to decompress the executable, so it allocates a bunch of data memory into which it decompresses the compressed executable, and then it runs it. Well, it’s running a data allocation, which is exactly what DEP is designed to stop. On the other hand, UPX, which is actually the leading and most popular EXE compressor, it’s DEP-compatible because those guys realized, hey, when we allocate this memory, we should mark the pages as executable."
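    For reference, the two modes differ only in the boot.ini switch. A minimal sketch of the [operating systems] line for each mode (the ARC path is illustrative; yours will differ):

        multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP x64" /noexecute=OptOut /fastdetect
        multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP x64" /noexecute=AlwaysOn /fastdetect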

    Read the article

  • straight to grub prompt on boot

    - by cheshirekow
    I am very lost. I did a fresh install of Ubuntu 10.04 on a laptop. First reboot was fine. I ran all the recommended upgrades, and now every time I start I get just a grub>_ prompt. No error message, just the prompt, and a little banner at the top saying grub's version and telling me that I have minimal bash-style editing.

    I've tried:

    1) Re-installing grub via sudo grub-install sda (There is only one disk with only two partitions, one primary, and one for swap)

    2) Changing the following in /etc/default/grub. No luck.

        GRUB_HIDDEN_TIMEOUT=10
        GRUB_TIMEOUT=30
        GRUB_CMDLINE_LINUX_DEFAULT="rootdelay=90"
        GRUB_CMDLINE_LINUX="rootdelay=90"

    I can boot with the following:

        grub> set root=(hd0,1)
        grub> probe (hd0,1) -u c00fadde-f7e8-45e7-a4da-0235605f756
        grub> linux /boot/vmlinuz-2.6.32-21-generic root=UUID=c00fadde-f7e8-45e7-a4da-0235605f756 rootdelay=90
        grub> initrd /boot/initrd.img-2.6.32-21-generic
        grub> boot

    And then everything seems to be fine from there. From the grub prompt, if I try configfile /boot/grub/grub.cfg the screen clears and I get another grub prompt. So, seriously, what could the problem be?

    edit: Full text of /boot/grub/grub.cfg

        #
        # DO NOT EDIT THIS FILE
        #
        # It is automatically generated by /usr/sbin/grub-mkconfig using templates
        # from /etc/grub.d and settings from /etc/default/grub
        #

        ### BEGIN /etc/grub.d/00_header ###
        if [ -s $prefix/grubenv ]; then
          load_env
        fi
        set default="0"
        if [ ${prev_saved_entry} ]; then
          set saved_entry=${prev_saved_entry}
          save_env saved_entry
          set prev_saved_entry=
          save_env prev_saved_entry
          set boot_once=true
        fi
        function savedefault {
          if [ -z ${boot_once} ]; then
            saved_entry=${chosen}
            save_env saved_entry
          fi
        }
        function recordfail {
          set recordfail=1
          if [ -n ${have_grubenv} ]; then if [ -z ${boot_once} ]; then save_env recordfail; fi; fi
        }
        insmod ext2
        set root='(hd0,1)'
        search --no-floppy --fs-uuid --set c00fadde-f7e8-45e7-a4da-0235c605f756
        if loadfont /usr/share/grub/unicode.pf2 ; then
          set gfxmode=640x480
          insmod gfxterm
          insmod vbe
          if terminal_output gfxterm ; then true ; else
            # For backward compatibility with versions of terminal.mod that don't
            # understand terminal_output
            terminal gfxterm
          fi
        fi
        insmod ext2
        set root='(hd0,1)'
        search --no-floppy --fs-uuid --set c00fadde-f7e8-45e7-a4da-0235c605f756
        set locale_dir=($root)/boot/grub/locale
        set lang=en
        insmod gettext
        if [ ${recordfail} = 1 ]; then
          set timeout=-1
        else
          set timeout=30
        fi
        ### END /etc/grub.d/00_header ###

        ### BEGIN /etc/grub.d/05_debian_theme ###
        set menu_color_normal=white/black
        set menu_color_highlight=black/light-gray
        ### END /etc/grub.d/05_debian_theme ###

        ### BEGIN /etc/grub.d/10_linux ###
        menuentry 'Ubuntu, with Linux 2.6.32-21-generic' --class ubuntu --class gnu-linux --class gnu --class os {
          recordfail
          insmod ext2
          set root='(hd0,1)'
          search --no-floppy --fs-uuid --set c00fadde-f7e8-45e7-a4da-0235c605f756
          linux /boot/vmlinuz-2.6.32-21-generic root=UUID=c00fadde-f7e8-45e7-a4da-0235c605f756 ro rootdelay=90 rootdelay=90
          initrd /boot/initrd.img-2.6.32-21-generic
        }
        menuentry 'Ubuntu, with Linux 2.6.32-21-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
          recordfail
          insmod ext2
          set root='(hd0,1)'
          search --no-floppy --fs-uuid --set c00fadde-f7e8-45e7-a4da-0235c605f756
          echo 'Loading Linux 2.6.32-21-generic ...'
          linux /boot/vmlinuz-2.6.32-21-generic root=UUID=c00fadde-f7e8-45e7-a4da-0235c605f756 ro single rootdelay=90
          echo 'Loading initial ramdisk ...'
          initrd /boot/initrd.img-2.6.32-21-generic
        }
        ### END /etc/grub.d/10_linux ###

        ### BEGIN /etc/grub.d/20_memtest86+ ###
        menuentry "Memory test (memtest86+)" {
          insmod ext2
          set root='(hd0,1)'
          search --no-floppy --fs-uuid --set c00fadde-f7e8-45e7-a4da-0235c605f756
          linux16 /boot/memtest86+.bin
        }
        menuentry "Memory test (memtest86+, serial console 115200)" {
          insmod ext2
          set root='(hd0,1)'
          search --no-floppy --fs-uuid --set c00fadde-f7e8-45e7-a4da-0235c605f756
          linux16 /boot/memtest86+.bin console=ttyS0,115200n8
        }
        ### END /etc/grub.d/20_memtest86+ ###

        ### BEGIN /etc/grub.d/30_os-prober ###
        if [ ${timeout} != -1 ]; then
          if sleep --verbose --interruptible 10 ; then
            set timeout=0
          fi
        fi
        ### END /etc/grub.d/30_os-prober ###

        ### BEGIN /etc/grub.d/40_custom ###
        # This file provides an easy way to add custom menu entries. Simply type the
        # menu entries you want to add after this comment. Be careful not to change
        # the 'exec tail' line above.
        ### END /etc/grub.d/40_custom ###

    output of update-grub

        Generating grub.cfg ...
        Found linux image: /boot/vmlinuz-2.6.32-21-generic
        Found initrd image: /boot/initrd.img-2.6.32-21-generic
        Found memtest86+ image: /boot/memtest86+.bin
        done

    contents of /boot

        total 14280
        -rw-r--r-- 1 root root  640617 2010-04-16 09:01 abi-2.6.32-21-generic
        -rw-r--r-- 1 root root  115847 2010-04-16 09:01 config-2.6.32-21-generic
        drwxr-xr-x 3 root root    4096 2010-09-08 02:42 grub
        -rw-r--r-- 1 root root 7968754 2010-09-02 01:49 initrd.img-2.6.32-21-generic
        -rw-r--r-- 1 root root  160280 2010-03-23 05:37 memtest86+.bin
        -rw-r--r-- 1 root root 1687378 2010-04-16 09:01 System.map-2.6.32-21-generic
        -rw-r--r-- 1 root root    1196 2010-04-16 09:03 vmcoreinfo-2.6.32-21-generic
        -rw-r--r-- 1 root root 4029792 2010-04-16 09:01 vmlinuz-2.6.32-21-generic
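    A note on point 1 above: grub-install expects a device path, not a bare name, so a sketch of the usual reinstall sequence (assuming the single disk really is /dev/sda) would be:

        sudo grub-install /dev/sda
        sudo update-grub

    Passing just "sda" is typically rejected or misinterpreted, which would leave the old core image on the MBR untouched and could explain why nothing changed after the reinstall attempt.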

    Read the article

  • How do I determine if my controller is in IDE or AHCI mode in Linux?

    - by philcolbourn
    I have an old MacBook Pro 4,1 (early 2008) - but I suspect an answer would apply to many MacBook Pros. It has an Intel IDE/SATA controller (ICH8M/ICH8M-E). I have installed a patched MBR that is supposed to put my controller into AHCI mode. It does this by setting some controller port value that I don't understand. This seems to work, as I get this from lspci:

        00:1f.1 IDE interface: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) IDE Controller (rev 03)
        00:1f.2 IDE interface: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) SATA Controller [AHCI mode] (rev 03)

    Now most, perhaps all, sites that provide a solution (enabling AHCI) suggest that after a sleep/wake cycle the controller will revert to IDE mode due to how Apple supports Windows. They recommend disabling sleep - see "Enabling AHCI for Windows on MacBooks" from the author of patchedcode.bin. NB: I do not have Boot Camp installed and I do not have Windows installed.

    Is there a way to prove that my controller is in IDE or AHCI mode?

    Background Data

    Using the patchedcode.bin MBR I get this in syslog:

        Jun 12 22:33:22 max kernel: [ 1.860955] ahci 0000:00:1f.2: version 3.0
        Jun 12 22:33:22 max kernel: [ 1.861052] ahci 0000:00:1f.2: irq 45 for MSI/MSI-X
        Jun 12 22:33:22 max kernel: [ 1.861117] ahci 0000:00:1f.2: AHCI 0001.0100 32 slots 3 ports 1.5 Gbps 0x1 impl SATA mode
        Jun 12 22:33:22 max kernel: [ 1.861120] ahci 0000:00:1f.2: flags: 64bit ncq sntf pm led clo pio slum part ccc ems
        Jun 12 22:33:22 max kernel: [ 1.861130] ahci 0000:00:1f.2: setting latency timer to 64
        Jun 12 22:33:22 max kernel: [ 1.880880] ACPI: Video Device [GFX0] (multi-head: yes rom: no post: no)
        Jun 12 22:33:22 max kernel: [ 1.880983] scsi2 : ahci
        Jun 12 22:33:22 max kernel: [ 1.884552] scsi3 : ahci
        Jun 12 22:33:22 max kernel: [ 1.886932] scsi4 : ahci
        Jun 12 22:33:22 max kernel: [ 1.886998] ata3: SATA max UDMA/133 abar m2048@0xdb504000 port 0xdb504100 irq 45
        Jun 12 22:33:22 max kernel: [ 1.887000] ata4: DUMMY
        Jun 12 22:33:22 max kernel: [ 1.887002] ata5: DUMMY
        Jun 12 22:33:22 max kernel: [ 2.204103] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
        Jun 12 22:33:22 max kernel: [ 2.204656] ata3.00: ATA-8: FUJITSU MHY2200BH, 0081000D, max UDMA/100
        Jun 12 22:33:22 max kernel: [ 2.204662] ata3.00: 390721968 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
        Jun 12 22:33:22 max kernel: [ 2.205324] ata3.00: configured for UDMA/100
        Jun 12 22:33:22 max kernel: [ 2.205554] scsi 2:0:0:0: Direct-Access ATA FUJITSU MHY2200B 0081 PQ: 0 ANSI: 5

    Using my original MBR I get this from syslog:

        Jun 13 18:07:13 max kernel: [ 0.622861] ata_piix 0000:00:1f.1: version 2.13
        Jun 13 18:07:13 max kernel: [ 0.622869] ata_piix 0000:00:1f.1: power state changed by ACPI to D0
        Jun 13 18:07:13 max kernel: [ 0.622924] ata_piix 0000:00:1f.1: setting latency timer to 64
        Jun 13 18:07:13 max kernel: [ 0.623339] scsi0 : ata_piix
        Jun 13 18:07:13 max kernel: [ 0.623730] scsi1 : ata_piix
        Jun 13 18:07:13 max kernel: [ 0.623765] ata1: PATA max UDMA/100 cmd 0x8108 ctl 0x811c bmdma 0x80e0 irq 21
        Jun 13 18:07:13 max kernel: [ 0.623767] ata2: PATA max UDMA/100 cmd 0x8100 ctl 0x8118 bmdma 0x80e8 irq 21
        Jun 13 18:07:13 max kernel: [ 0.623810] ata_piix 0000:00:1f.2: MAP [
        Jun 13 18:07:13 max kernel: [ 0.623811] P0 -- -- -- ]
        Jun 13 18:07:13 max kernel: [ 0.623866] ata_piix 0000:00:1f.2: setting latency timer to 64
        Jun 13 18:07:13 max kernel: [ 0.624241] scsi2 : ata_piix
        Jun 13 18:07:13 max kernel: [ 0.624558] scsi3 : ata_piix
        Jun 13 18:07:13 max kernel: [ 0.624862] ata3: SATA max UDMA/133 cmd 0x80f8 ctl 0x8114 bmdma 0x8020 irq 18
        Jun 13 18:07:13 max kernel: [ 0.624865] ata4: SATA max UDMA/133 cmd 0x80f0 ctl 0x8110 bmdma 0x8028 irq 18
        Jun 13 18:07:13 max kernel: [ 1.208879] ata3.00: ATA-8: FUJITSU MHY2200BH, 0081000D, max UDMA/100
        Jun 13 18:07:13 max kernel: [ 1.208882] ata3.00: 390721968 sectors, multi 16: LBA48 NCQ (depth 0/32)
        Jun 13 18:07:13 max kernel: [ 1.208961] ata1.01: ATAPI: MATSHITA DVD+/-RW UJ-867S, 1.00, max UDMA/33
        Jun 13 18:07:13 max kernel: [ 1.216186] ata3.00: configured for UDMA/100
        Jun 13 18:07:13 max kernel: [ 1.224396] ata1.01: configured for UDMA/33
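    One way to answer the question directly without digging through boot logs, assuming a kernel with sysfs and the pciutils package: ask which kernel driver is currently bound to the controller.

        lspci -k -s 00:1f.2                          # shows "Kernel driver in use: ahci" or "ata_piix"
        cat /sys/class/scsi_host/host*/proc_name     # lists the driver (ahci or ata_piix) per SCSI host

    If the SATA controller is claimed by ahci, it is running in AHCI mode; ata_piix means legacy IDE mode. This also makes it easy to re-check after a sleep/wake cycle to see whether the mode really reverts.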

    Read the article

  • Users suddenly missing write permissions to the root drive c within an active directory domain

    - by Kevin
    I'm managing an Active Directory single-domain environment on some Windows Server 2008, Windows Server 2008 R2 and Windows Server 2012 machines. Since a few weeks ago I have had a strange issue. Some users (not all!) report that they can no longer save, copy or write files to the root drive C, neither on their clients (Vista, Win 7) nor via remote desktop connection on a Windows Server 2008 machine. Even running programs that require direct write permissions to the root drive without administrator permissions have failed to do so since then. The affected users have local administrator permissions.

    The question I'm facing now is: What caused this change of system behavior? Why did this happen? I haven't found out yet.

    What was the last thing I did before it happened? The last action was the rollout of a GPO containing network drive mappings for the users, depending on their security group membership. All network drives are located on a Linux server with Samba enabled. We did not change any UAC settings, and they have always been activated. However, I can't imagine that rolling out this GPO caused the problem. Has anybody faced an issue like that?

    Just in case: I know that it is for a specific reason that a user without administrative privileges is prevented from writing to the root drive since Windows Vista and the implementation of UAC. I don't think that those users should be able to write to drive C, but I am trying to figure out why this is happening, since a few weeks ago this was still working. I also know that a user who is a member of the local Administrators group does not execute anything with administrator permissions by default unless he or she explicitly runs a program with those permissions.

    What have I done so far? I checked the permissions of the affected programs and the affected clients/servers. Didn't find anything special. I checked ALL of our GPOs for any restrictions that could prevent the affected users from writing to the root drive. Did not find any settings. I checked the UAC settings of the affected users and compared those to other users that can still write to the root drive. Everything similar. I googled through the internet and tried to find someone who had a similar problem. Did not find one. Has anybody an idea? Thank you very much.

    Edit: The GPO that was rolled out does the following (please excuse if the settings are not named exactly like that; I translated the settings into English):

        Windows Settings -- Network Drive Mappings -- Drive N: -- General:
          Action: Replace
          Properties:
            Letter: N
            Location: \\path-to-drive\drivename
            Re-establish connection: deactivated
            Label as: Name_of_the_Share
            Use first available option: deactivated

        Windows Settings -- Network Drive Mappings -- Drive N: -- Public: Options:
          On error don't process any further elements for this extension: no
          Run as the logged in user: no
          Remove element if it is not applied anymore: no
          Only apply once: no

        Securitygroup:
          Attribute -- Value
          bool -- AND
          not -- 0
          name -- domain\groupname
          sid -- sid-of-the-group
          userContext -- 1
          primaryGroup -- 0
          localGroup -- 0

        Securitygroup:
          Attribute -- Value
          bool -- OR
          not -- 0
          name -- domain\another-groupname
          sid -- sid-of-the-group
          userContext -- 1
          primaryGroup -- 0
          localGroup -- 0

    Edit: The error message an affected user gets says the following: Due to an unexpected error you can't copy the file. Error code 0x80070522: The client is missing a required permission.

    The command icacls C: shows the following:

        NT-AUTORITY\SYSTEM:(OI)(CI)(F)
        PRE-DEFINED\Administrators:(OI)(CI)(F)
        computername\username:(OI)(CI)(F)

    A colleague just told me that the primary domain controller (PDC) also changed from Windows Server 2008 to Windows Server 2012. That may also be a reason. Any suggestions?
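    For comparison: on a stock Vista/Windows 7 system, icacls C:\ also lists entries for BUILTIN\Users (read/execute) and NT AUTHORITY\Authenticated Users (append data and inherited modify), and those are exactly what is missing from the output above - which would explain error 0x80070522. A hedged sketch of restoring them; copy the exact grant strings from a healthy machine of the same OS version before applying anything:

        icacls C:\ /grant "BUILTIN\Users:(OI)(CI)(RX)"
        icacls C:\ /grant "NT AUTHORITY\Authenticated Users:(AD)"
        icacls C:\ /grant "NT AUTHORITY\Authenticated Users:(OI)(CI)(IO)(M)"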

    Read the article

  • Coda 2 and SCP uploading files with the wrong permission

    - by Tom Black
    Currently I have a basic Ubuntu server running a website. The website is for a few students learning HTML/PHP, and each student has their own account with a symbolic link to the shared website folder. Since the students are working on the website together, each user needs to be able to modify all the files (index.html, for example). So I created a webdev group containing all of the students, with a default umask of 0002 set in their .bashrc (this allows newly created files to be 774). The shared folder is owned by the group webdev with chmod g+s so that new files/folders also belong to the group webdev.

    The problem is that the students are using an IDE (Coda 2), and when they create a new file or folder using the IDE, the file has permissions of 644 on the server (not group-writable). However, when I make a new file by connecting with Cyberduck (an SFTP client), the file permissions are 664 (as they should be). So I don't understand why Coda would be any different. After some trial and error, I believe that Coda is first creating the file on local disk and then uploading that file to the server. On a Mac, by default, a newly created file is 644. When the client uploads a file that's already 644, it stays 644 on the server side (umask is kind of useless in this situation).

    I've also tried creating ACL permissions for that folder, but an uploaded file from my Mac via SCP doesn't get the default ACL permissions.

    In Coda there is an option to change file permissions on a transfer. However, this option seems to apply a chmod to all files being uploaded or saved. When one of the students is modifying a file created by someone else and tries to upload or save it, Coda tries to also do a chmod but fails because that user isn't the owner of the file.

    My current solution is using bindfs... I mount the shared web folder and bindfs sets permissions and group ownership of newly created files. However, bindfs seems to be a bit slow, and I'm sure there is a better solution. Even if the students ditched Coda 2 and used Mac vim with scp, the newly created files on the server would behave the same (644), which is the default on the Mac.

    Other options:

    1) I teach the students to use ssh/chmod with their IDE to change their own file permissions when uploading.
    2) I make all the students' Macs have a default umask of 0002, which would upload files with the right permissions.
    3) Write a cron script to fix the file permissions every 5 to 15 minutes... (This option I think is the worst if students are working together at the same time.)

    Is there any way that I could make all files uploaded via SCP have default file permissions of 664, even though the uploaded file has a lower permission? (After hours of searching, I don't think this is possible.) I guess a cron script is my best option for novice users. How do web developers work together on larger sites?

    Similar to this: http://serverfault.com/questions/283492/how-to-specify-file-permission-when-putting-a-file-using-openssh-sftp-command
    Also similar: http://serverfault.com/questions/395418/managing-linux-directory-permissions-sftp
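    One more avenue worth sketching, assuming the shared folder lives at /srv/web (a placeholder path) and the group is webdev: POSIX default ACLs stamp group write onto files created inside the tree regardless of the creator's umask.

        sudo setfacl -R -m g:webdev:rwx /srv/web      # grant the group on existing files
        sudo setfacl -R -d -m g:webdev:rwx /srv/web   # default ACL inherited by new files/dirs

    As noted above, this can still be defeated when a client uploads with an explicit 644 mode, because the ACL mask is recalculated from the requested permissions - which is presumably why the bindfs approach works where plain ACLs did not.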

    Read the article

  • nikto probe warning messages

    - by julio
    Hi -- I have a pretty standard VPS running Ubuntu 8.1, Apache 2.2, PHP 5, etc. -- a standard LAMP stack. I am using Suhosin and have tried my best to plug the obvious stuff, since I'm the only user -- there's no SSH access except via pubkey on a non-standard port, there's no root access by SSH, no FTP server running, and iptables is set to discard anything outside of basically port 80 or my SSH port (there's no mail server or anything else). However, I've still been compromised (not badly, as far as I can tell), probably by a SQL injection. I've locked down the SQL user (there's only one outside of root, and he's got limited privileges, no file access, etc.).

    So I ran nikto to see what I'm doing wrong, and it lists a bunch of things I've never seen and can't find using "find" or any other method I'm aware of. See below:

        + /autologon.html?10514: Remotely Anywhere 5.10.415 is vulnerable to XSS attacks that can lead to cookie theft or privilege escalation. This is typically found on port 2000.
        + /servlet/webacc?User.html=noexist: Netware web access may reveal full path of the web server. Apply vendor patch or upgrade.
        + OSVDB-35878: /modules.php?name=Members_List&letter='%20OR%20pass%20LIKE%20'a%25'/*: PHP Nuke module allows user names and passwords to be viewed.
        + OSVDB-3092: /sitemap.xml: This gives a nice listing of the site content.
        + OSVDB-12184: /index.php?=PHPB8B5F2A0-3C92-11d3-A3A9-4C7B08C10000: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings.
        + OSVDB-12184: /some.php?=PHPE9568F36-D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings.
        + OSVDB-12184: /some.php?=PHPE9568F34-D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings.
        + OSVDB-12184: /some.php?=PHPE9568F35-D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings.
        + OSVDB-3092: /administrator/: This might be interesting...
        + OSVDB-3092: /Agent/: This might be interesting...
        + OSVDB-3092: /includes/: This might be interesting...
        + OSVDB-3092: /logs/: This might be interesting...
        + OSVDB-3092: /tmp/: This might be interesting...
        + ERROR: /servlet/Counter returned an error: error reading HTTP response
        + OSVDB-3268: /icons/: Directory indexing is enabled: /icons
        + OSVDB-3268: /images/: Directory indexing is enabled: /images
        + OSVDB-3299: /forumscalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link
        + OSVDB-3299: /forumzcalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link
        + OSVDB-3299: /htforumcalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link
        + OSVDB-3299: /vbcalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link
        + OSVDB-3299: /vbulletincalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link
        + OSVDB-6659: /kCKAowoWuZkKCUPH7Mr675ILd9hFg1lnyc1tWUuEbkYkFCpCdEnCKkkd9L0bY34tIf9l6t2owkUp9nI5PIDmQzMokDbp71QFTZGxdnZhTUIzxVrQhVgwmPYsMK7g34DURzeiy3nyd4ezX5NtUozTGqMkxDrLheQmx4dDYlRx0vKaX41JX40GEMf21TKWxHAZSUxjgXUnIlKav58GZQ5LNAwSAn13l0w<font%20size=50>DEFACED<!--//--: MyWebServer 1.0.2 is vulnerable to HTML injection. Upgrade to a later version.

    I understand about the trace and index entries, but what about the vBulletin and autologon ones? I've searched, and I can't find any files like that on the server. I have no idea about the "MyWebServer" stuff, the PHP Nuke, or the Netware/servlet stuff -- there's nothing really on the server except a pretty standard Joomla site (updated to the latest version). Any help with these messages and/or what I'm doing wrong is very much appreciated.
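    A note on reading this output: most of these findings are blind probes, and nikto flags a hit whenever the server answers with anything other than a clean 404 - so a Joomla site that serves a custom error page can "match" vBulletin, PHP-Nuke and Netware checks for software that was never installed. The two directory-indexing hits are real and easy to close, though. A minimal sketch for the Apache config (adjust the path to the actual DocumentRoot):

        <Directory /var/www>
            Options -Indexes
        </Directory>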

    Read the article

  • GRUB-2 Bootloader fails to load for lack of floppy drive. Ubuntu 10.4 & Windows XP

    - by kammer
    2010.07.21, while trying to install Ubuntu 10.4

    Hello all, I've been trying to install Ubuntu 10.04 on my Dell workstation and am unable to get the Grub-2 bootloader to load properly. It seems to be failing for lack of a floppy drive on the system, resulting in an error message that reads:

        error: fd0 cannot get C/H/S values.

    I've gone through the Grub-2 page at https://help.ubuntu.com/community/Grub2 to no avail, and other sources having similar problems have likewise turned up no solutions. I would certainly appreciate any insight. Here's the background:

    A while back I was trying to install a different version of Linux and had the same problems, then had to set the project aside for a bit. I don't think this has anything to do with Linux or Ubuntu per se, but rather Grub. The system is an old (4-5 years) Dell workstation that has one drive (128 GB) set up for Windows XP and a second new drive (500 GB) which I installed for Linux. There is a DVD/CD drive and the system contains no floppy drive at all. In one attempt to get this working I tried modifying the BIOS to indicate there was a floppy drive - this created a failure earlier in the chain, with the BIOS failing to load properly; not unexpected, just a shot in the dark at that point. At the moment I am considering just running out to buy and install a cheap floppy drive to see if that helps. I'll never use the thing, though, so I'd rather find a solution that doesn't require me to spend money on useless hardware.

    In any case, here's the /boot/grub/grub.cfg contents:

        #
        # DO NOT EDIT THIS FILE
        #
        # It is automatically generated by /usr/sbin/grub-mkconfig using templates
        # from /etc/grub.d and settings from /etc/default/grub
        #

        ### BEGIN /etc/grub.d/00_header ###
        if [ -s $prefix/grubenv ]; then
          load_env
        fi
        set default="0"
        if [ ${prev_saved_entry} ]; then
          set saved_entry=${prev_saved_entry}
          save_env saved_entry
          set prev_saved_entry=
          save_env prev_saved_entry
          set boot_once=true
        fi
        function savedefault {
          if [ -z ${boot_once} ]; then
            saved_entry=${chosen}
            save_env saved_entry
          fi
        }
        function recordfail {
          set recordfail=1
          if [ -n ${have_grubenv} ]; then if [ -z ${boot_once} ]; then save_env recordfail; fi; fi
        }
        insmod ext2
        set root='(hd1,1)'
        search --no-floppy --fs-uuid --set fbebde47-f488-41b0-9480-337802ecb988
        if loadfont /usr/share/grub/unicode.pf2 ; then
          set gfxmode=640x480
          insmod gfxterm
          insmod vbe
          if terminal_output gfxterm ; then true ; else
            # For backward compatibility with versions of terminal.mod that don't
            # understand terminal_output
            terminal gfxterm
          fi
        fi
        insmod ext2
        set root='(hd1,1)'
        search --no-floppy --fs-uuid --set fbebde47-f488-41b0-9480-337802ecb988
        set locale_dir=($root)/boot/grub/locale
        set lang=en
        insmod gettext
        if [ ${recordfail} = 1 ]; then
          set timeout=-1
        else
          set timeout=10
        fi
        insmod play
        play 480 440 1
        ### END /etc/grub.d/00_header ###

        ### BEGIN /etc/grub.d/05_debian_theme ###
        set menu_color_normal=white/black
        set menu_color_highlight=black/light-gray
        ### END /etc/grub.d/05_debian_theme ###

        ### BEGIN /etc/grub.d/10_linux ###
        menuentry 'Ubuntu, with Linux 2.6.32-21-generic' --class ubuntu --class gnu-linux --class gnu --class os {
          recordfail
          insmod ext2
          set root='(hd1,1)'
          search --no-floppy --fs-uuid --set fbebde47-f488-41b0-9480-337802ecb988
          linux /boot/vmlinuz-2.6.32-21-generic root=UUID=fbebde47-f488-41b0-9480-337802ecb988 ro quiet splash
          initrd /boot/initrd.img-2.6.32-21-generic
        }
        menuentry 'Ubuntu, with Linux 2.6.32-21-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
          recordfail
          insmod ext2
          set root='(hd1,1)'
          search --no-floppy --fs-uuid --set fbebde47-f488-41b0-9480-337802ecb988
          echo 'Loading Linux 2.6.32-21-generic ...'
          linux /boot/vmlinuz-2.6.32-21-generic root=UUID=fbebde47-f488-41b0-9480-337802ecb988 ro single
          echo 'Loading initial ramdisk ...'
          initrd /boot/initrd.img-2.6.32-21-generic
        }
        ### END /etc/grub.d/10_linux ###

        ### BEGIN /etc/grub.d/20_memtest86+ ###
        menuentry "Memory test (memtest86+)" {
          insmod ext2
          set root='(hd1,1)'
          search --no-floppy --fs-uuid --set fbebde47-f488-41b0-9480-337802ecb988
          linux16 /boot/memtest86+.bin
        }
        menuentry "Memory test (memtest86+, serial console 115200)" {
          insmod ext2
          set root='(hd1,1)'
          search --no-floppy --fs-uuid --set fbebde47-f488-41b0-9480-337802ecb988
          linux16 /boot/memtest86+.bin console=ttyS0,115200n8
        }
        ### END /etc/grub.d/20_memtest86+ ###

        ### BEGIN /etc/grub.d/30_os-prober ###
        menuentry "Microsoft Windows XP Home Edition (on /dev/sda1)" {
          insmod ntfs
          set root='(hd0,1)'
          search --no-floppy --fs-uuid --set 6ef0d4b4f0d4842d
          drivemap -s (hd0) ${root}
          chainloader +1
        }
        ### END /etc/grub.d/30_os-prober ###

        ### BEGIN /etc/grub.d/40_custom ###
        # This file provides an easy way to add custom menu entries. Simply type the
        # menu entries you want to add after this comment. Be careful not to change
        # the 'exec tail' line above.
        ### END /etc/grub.d/40_custom ###

    Thoughts anyone? Thanks in advance.
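    One floppy-free angle worth trying before buying a drive, assuming GRUB 2 of this vintage still keeps its device map in /boot/grub/device.map: remove any (fd0) line from that file so the core image stops probing a floppy it will never find, then regenerate the config and reinstall to whichever disk the BIOS actually boots (shown here as /dev/sda, an assumption).

        sudo sed -i '/(fd0)/d' /boot/grub/device.map
        sudo update-grub
        sudo grub-install /dev/sda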

    Read the article

  • UFW as an active service on Ubuntu

    - by lamcro
    Every time I restart my computer and check the status of the UFW firewall (sudo ufw status), it is disabled, even if I then enable and restart it. I tried putting sudo ufw enable as one of the startup applications, but it asks for the sudo password every time I log on, and I'm guessing it does not protect anyone else who logs on to my computer. How can I set up ufw so it is activated when I turn on my computer and protects all accounts?

    Update: I just tried /etc/init.d/ufw start, and it activated the firewall. Then I restarted the computer, and again it was disabled.

    Content of /etc/ufw/ufw.conf:

        # /etc/ufw/ufw.conf
        #
        # set to yes to start on boot
        ENABLED=yes
        # set to one of 'off', 'low', 'medium', 'high'
        LOGLEVEL=full

    Content of /etc/default/ufw:

        # /etc/default/ufw
        #
        # Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback
        # accepted). You will need to 'disable' and then 'enable' the firewall for
        # the changes to take affect.
        IPV6=no
        # Set the default input policy to ACCEPT, ACCEPT_NO_TRACK, DROP, or REJECT.
        # ACCEPT enables connection tracking for NEW inbound packets on the INPUT
        # chain, whereas ACCEPT_NO_TRACK does not use connection tracking. Please note
        # that if you change this you will most likely want to adjust your rules.
        DEFAULT_INPUT_POLICY="DROP"
        # Set the default output policy to ACCEPT, ACCEPT_NO_TRACK, DROP, or REJECT.
        # ACCEPT enables connection tracking for NEW outbound packets on the OUTPUT
        # chain, whereas ACCEPT_NO_TRACK does not use connection tracking. Please note
        # that if you change this you will most likely want to adjust your rules.
        DEFAULT_OUTPUT_POLICY="ACCEPT"
        # Set the default forward policy to ACCEPT, DROP or REJECT. Please note that
        # if you change this you will most likely want to adjust your rules
        DEFAULT_FORWARD_POLICY="DROP"
        # Set the default application policy to ACCEPT, DROP, REJECT or SKIP. Please
        # note that setting this to ACCEPT may be a security risk. See 'man ufw' for
        # details
        DEFAULT_APPLICATION_POLICY="SKIP"
        # By default, ufw only touches its own chains. Set this to 'yes' to have ufw
        # manage the built-in chains too. Warning: setting this to 'yes' will break
        # non-ufw managed firewall rules
        MANAGE_BUILTINS=no
        #
        # IPT backend
        #
        # only enable if using iptables backend
        IPT_SYSCTL=/etc/ufw/sysctl.conf
        # extra connection tracking modules to load
        IPT_MODULES="nf_conntrack_ftp nf_nat_ftp nf_conntrack_irc nf_nat_irc"

    Update: Followed your advice and ran update-rc.d, with no luck.

        lester@mcgrath-pc:~$ sudo update-rc.d ufw defaults
        update-rc.d: warning: /etc/init.d/ufw missing LSB information
        update-rc.d: see <http://wiki.debian.org/LSBInitScripts>
        Adding system startup for /etc/init.d/ufw ...
        /etc/rc0.d/K20ufw -> ../init.d/ufw
        /etc/rc1.d/K20ufw -> ../init.d/ufw
        /etc/rc6.d/K20ufw -> ../init.d/ufw
        /etc/rc2.d/S20ufw -> ../init.d/ufw
        /etc/rc3.d/S20ufw -> ../init.d/ufw
        /etc/rc4.d/S20ufw -> ../init.d/ufw
        /etc/rc5.d/S20ufw -> ../init.d/ufw
        lester@mcgrath-pc:~$ ls -l /etc/rc?.d/*ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc0.d/K20ufw -> ../init.d/ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc1.d/K20ufw -> ../init.d/ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc2.d/S20ufw -> ../init.d/ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc3.d/S20ufw -> ../init.d/ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc4.d/S20ufw -> ../init.d/ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc5.d/S20ufw -> ../init.d/ufw
        lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc6.d/K20ufw -> ../init.d/ufw
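    The "missing LSB information" warning above suggests insserv cannot order the script properly at boot, so it may run before the infrastructure it needs. A sketch of an LSB header that could be added at the top of /etc/init.d/ufw (field values are illustrative and should be checked against a current ufw package):

        ### BEGIN INIT INFO
        # Provides:          ufw
        # Required-Start:    $local_fs $remote_fs
        # Required-Stop:     $local_fs $remote_fs
        # Default-Start:     S
        # Default-Stop:      0 6
        # Short-Description: start the ufw firewall
        ### END INIT INFO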

    Read the article

  • All PHP sites stopped working on IIS7, internal server error 500

    - by TimothyP
    I installed multiple Drupal 7 sites using the Web Platform Installer on Windows Server 2008. Until now they worked without any problems, but recently internal server error 500 started to show up (once every so many requests); now it happens for all requests to any of the PHP sites. There's not much detail to go on, and nothing changed between the time when it was working and now (well, nothing I know of anyway). The log file is flooded with messages such as:

        [09-Aug-2011 09:08:04] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:08:16] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:08:16] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:08:20] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:08:22] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:08:51] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:09:56] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:09:57] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:12:13] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:15:09] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:15:09] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:21:28] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:21:28] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0

    I have tried increasing the memory limit in php.ini as such:

        memory_limit = 512MB

    But that doesn't seem to solve the problem either. This is in the global PHP configuration in IIS.

    When I looked at the sites one by one, I noticed that PHP seemed to have been disabled: "PHP is not enabled. Register new PHP version to enable PHP via FastCGI". So I tried to register the PHP version again (C:\Program Files\PHP\v5.3\php-cgi.exe), but when I try to apply the changes I get:

        There was an error while performing this operation
        Details: Operation is not valid due to the current state of the object

    There doesn't seem to be any other information than that. I have no idea why all of a sudden PHP isn't available for the sites anymore. PS: I have rebooted IIS, the server, etc. This server is hosted on amazon S3, so I gave the server some more power.

    Update: These seem to be two different issues.

    1. I used memory_limit=128MB instead of memory_limit=128M. Notice the "M" instead of "MB".
    2. A memory_limit of 128M was not enough; I had to increase it to 512M.

    The first issue caused internal server errors for every request. Increasing to 512M seemed to have solved the problem for a little while, but after a while the server errors returned. Note that the PHP Manager inside of IIS still shows there is no PHP available for the sites (the global config does see it as available). So the problem remains unsolved.
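    The memory_limit detail is worth spelling out, since it explains the bizarre 262144-byte ceiling in the log: PHP's ini shorthand only recognizes single-letter suffixes (K, M, G), so a two-letter suffix is not parsed the way you'd hope. A minimal sketch of the wrong and right forms:

        ; wrong - "MB" is not a recognized shorthand suffix, so the value is not read as 512 megabytes
        memory_limit = 512MB
        ; right - parsed as 512 megabytes
        memory_limit = 512M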

    Read the article

  • How can I work around problems with certificate configuration in Remote Desktop Services?

    - by Michael Steele
    I am setting up a Remote Desktop Services farm, and am having trouble configuring certificates for it to use. A demonstration of the problem I'm seeing can be found in Step #4. At this point I am convinced that there are problems with the user interface, and am looking for ways around them.

    Is there any way to configure certificates in Remote Desktop Services so that the settings hold and are reflected in the GUI? If not, is there any way for me to verify that the settings are correct?

    Step #1 - Create certificate to be used

    I've configured a certificate to use with RD Web Access. The certificate is stored within the Certificates MMC on my RD Connection Broker, and I am configuring the farm from that computer. By letting RD Web Access generate its own certificate, I found that the following properties are required:

        Enhanced Key Usage: Server Authentication, Client Authentication
          (this may not be required, but the self-signed certificate includes it)
        Key Usage: Digital Signature, Key Agreement
        Subject Alternative Name: DNS Name=domain.com

    Detour about self-signed certificate generation

    As a quick detour, I was able to work around a problem with creating self-signed certificates using PowerShell. The documentation for the New-RDCertificate cmdlet gives the following example:

        PS C:\> $password = ConvertTo-SecureString -string "password" -asplaintext -force
        New-RDCertificate -Role RDWebAccess -DnsName "test-rdwa.contoso.com" -Password $password -ConnectionBroker rdcb.contoso.com -ExportPath "c:\test-rdwa.pfx"

    Typing this into the shell will result in an error message claiming that a function, Get-Server, cannot be found. Prior to using New-RDCertificate, you must import the RemoteDesktop module with Import-Module RemoteDesktop.

    Step #2 - Observe out-of-box behavior

    The first time you visit the Deployment Properties dialog box (by navigating to Server Manager - Remote Desktop Services - Collections and selecting "Edit Deployment Properties" from the "TASKS" dropdown list in the "COLLECTIONS" grouping), the window is misleading because the Level field is listed as "Not Configured". If I understand correctly, all three of the role services are using a self-signed certificate. For the RD Web Access role this can be verified by visiting the website; the certificate being used also appears in the Certificates MMC.

    Step #3 - Assign new certificate

    The Deployment Properties dialog box will allow me to select my existing certificate. The certificate must be placed within the local computer's Certificates MMC in the "Personal" certificate store. The private key will need to be exportable, and you will need to provide the password. I temporarily exported my certificate to a file named temp.pfx with a password, and then imported it into Remote Desktop Services from there. Once this is done the GUI will indicate that it is ready to accept the new configuration. Once I click the "Apply" button, the GUI indicates success. This can be verified by visiting the RD Web Access web site a second time: there is no certificate error.

    Step #4 - The GUI fails to maintain its state

    If the GUI is closed and reopened, all of these settings appear to be lost. Actually, the certificate I configured is still being used; I am able to continue accessing the RD Web Access site without any certificate errors. Oddly, if I use the "Create new certificate..." button to generate a self-signed certificate, the window will update to an "Untrusted" level. This setting will then be maintained through the opening and closing of the Deployment Properties dialog box.

    Is there anything I can do to have my settings appear to stick? I feel like something is wrong when the GUI claims I haven't fully configured certificates.
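    Putting the detour's two pieces together, a corrected version of the documented example looks like this (names such as contoso.com come straight from the original snippet and are placeholders):

        Import-Module RemoteDesktop
        $password = ConvertTo-SecureString -String "password" -AsPlainText -Force
        New-RDCertificate -Role RDWebAccess -DnsName "test-rdwa.contoso.com" `
            -Password $password -ConnectionBroker rdcb.contoso.com -ExportPath "c:\test-rdwa.pfx"

    The same module also exposes a Set-RDCertificate cmdlet for applying an existing .pfx, which may be a way to sidestep the flaky dialog entirely; Get-Command -Module RemoteDesktop is worth checking for the exact parameters on your build.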

    Read the article

  • Inconsistent file downloads of (what should be) the same file

    - by Austin A.
    I'm working on a system that archives large collections of timestamped images. Part of the system deals with saving an image to a growing .zip file. This morning I noticed that the log system said an image was successfully downloaded and placed in the zip file, but when I downloaded the .zip (from an Apache alias running on our server), the images didn't match the log. For example, although the log said that camera 3484 captured on January 17, 2011, when I download from the Apache alias, the downloaded zip file only contains images up to January 14.

    So I sshed onto the server and unzipped the file in its own directory, and that zip file has images from January 14 to today (January 17). What strikes me as odd is that this should be the exact same file as the one I downloaded from the Apache alias.

    Other experiments: I scp-ed the file from the server to my local machine, and the zip file has the newer images. But when I use an SCP client (in this case, Fugu for OS X), I get the zip file with the older images.

    In short: unzipping the file on the server, after downloading through scp, or after downloading through wget gives one zip file, but unzipping a file from Chrome, Firefox, or an SCP client gives a different zip file, when they should be exactly the same.

    Unzipping on the server...

        [user@server ~]$ cd /export1/amos/images/2011/84/3484/00003484/
        [user@server 00003484]$ ls -la
        total 6180
        drwxr-sr-x 2 user groupname      24 Jan 17 11:20 .
        drwxr-sr-x 4 user groupname      36 Jan 11 19:58 ..
        -rw-r--r-- 1 user groupname 6309980 Jan 17 12:05 2011.01.zip
        [user@server 00003484]$ unzip 2011.01.zip
        Archive: 2011.01.zip
        extracting: 20110114_140547.jpg
        extracting: 20110114_143554.jpg
        replace 20110114_143554.jpg? [y]es, [n]o, [A]ll, [N]one, [r]ename: y
        extracting: 20110114_143554.jpg
        extracting: 20110114_153458.jpg
        (...bunch of files...)
        extracting: 20110117_170459.jpg
        extracting: 20110117_173458.jpg
        extracting: 20110117_180501.jpg

    Using wget through the Apache alias:

        local:~ user$ wget http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip
        --12:38:13-- http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip
        => `2011.01.zip'
        Resolving example.com... ip.ip.ip.ip
        Connecting to example.com|ip.ip.ip.ip|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 6,327,747 (6.0M) [application/zip]
        100% [=====================================================================================================>] 6,327,747 1.03M/s ETA 00:00
        12:38:56 (143.23 KB/s) - `2011.01.zip' saved [6327747/6327747]
        local:~ user$ unzip 2011.01.zip
        Archive: 2011.01.zip
        extracting: 20110114_140547.jpg
        (... same as before...)
        extracting: 20110117_183459.jpg

    Using scp to grab the zip:

        local:~ user$ scp user@server:/export1/amos/images/2011/84/3484/00003484/2011.01.zip .
        2011.01.zip 100% 6179KB 475.3KB/s 00:13
        local:~ user$ unzip 2011.01.zip
        Archive: 2011.01.zip
        extracting: 20110114_140547.jpg
        (...same as before...)
        extracting: 20110117_183459.jpg

    Using Fugu to download 2011.01.zip from /export1/amos/images/2011/84/3484/00003484/ gives images 20110113_090457.jpg through 201100114_010554.jpg.

    Using Firefox to download 2011.01.zip from http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip gives images 20110113_090457.jpg through 201100114_010554.jpg. Using Chrome gives the same results as Firefox.

    Relevant section from the Apache httpd.conf:

        # ScriptAlias: This controls which directories contain server scripts.
        # ScriptAliases are essentially the same as Aliases, except that
        # documents in the realname directory are treated as applications and
        # run by the server when requested rather than as documents sent to the client.
        # The same rules about trailing "/" apply to ScriptAlias directives as to
        # Alias.
        #
        ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
        Alias /zipfiles/ /export1/amos/images/
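    A way to pin down which copy each client is actually receiving, assuming md5sum is available on the server and curl locally: compare the checksum on disk against the headers Apache serves through the alias.

        # on the server
        md5sum /export1/amos/images/2011/84/3484/00003484/2011.01.zip

        # locally, once per client/proxy path
        curl -sI http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip | grep -i -E 'content-length|last-modified|etag'

    If the Content-Length or ETag reported over HTTP differs from the file on disk, something between Apache and the browser (mod_cache, a transparent proxy, or the browser cache itself) is holding a stale copy of the growing zip; the differing byte counts already visible above (6309980 on disk vs. 6,327,747 via wget) point in that direction.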

    Read the article

  • ScriptAlias makes requests match too many Location blocks. What is going on?

    - by brain99
    We wish to restrict access on our development server to those users who have a valid SSL client certificate. We are running Apache 2.2.16 on Debian 6. However, for some sections (mainly git-http, set up with gitolite on https://my.server/git/) we need an exception, since many git clients don't support SSL client certificates.

    I have succeeded in requiring client cert authentication for the server, and in adding exceptions for some locations. However, it seems this does not work for git. The current setup is as follows:

        SSLCACertificateFile ssl-certs/client-ca-certs.crt
        <Location />
            SSLVerifyClient require
            SSLVerifyDepth 2
        </Location>
        # this works
        <Location /foo>
            SSLVerifyClient none
        </Location>
        # this does not
        <Location /git>
            SSLVerifyClient none
        </Location>

    I have also tried an alternative solution, with the same results:

        # require authentication everywhere except /git and /foo
        <LocationMatch "^/(?!git|foo)">
            SSLVerifyClient require
            SSLVerifyDepth 2
        </LocationMatch>

    In both these cases, a user without a client certificate can perfectly access my.server/foo/, but not my.server/git/ (access is refused because no valid client certificate is given). If I disable SSL client certificate authentication completely, my.server/git/ works OK.

    The ScriptAlias problem

    Gitolite is set up using the ScriptAlias directive. I have found that the problem occurs with any similar ScriptAlias:

        # Gitolite
        ScriptAlias /git/ /path/to/gitolite-shell/
        ScriptAlias /gitmob/ /path/to/gitolite-shell/
        # My test
        ScriptAlias /test/ /path/to/test/script/

    Note that /path/to/test/script is a file, not a directory; the same goes for /path/to/gitolite-shell/. My test script simply prints out the environment, super simple:

        #!/usr/bin/perl
        print "Content-type:text/plain\n\n";
        print "TEST\n";
        @keys = sort(keys %ENV);
        foreach (@keys) {
            print "$_ => $ENV{$_}\n";
        }

    It seems that if I go to https://my.server/test/someLocation, any SSLVerifyClient directives are applied which are in Location blocks matching /test/someLocation or just /someLocation. If I have the following config:

        <LocationMatch "^/f">
            SSLVerifyClient require
            SSLVerifyDepth 2
        </LocationMatch>

    then the following URL requires a client certificate: https://my.server/test/foo. However, the following URL does not: https://my.server/test/somethingElse/foo

    Note that this only seems to apply to SSL configuration. The following has no effect whatsoever on https://my.server/test/foo:

        <LocationMatch "^/f">
            Order allow,deny
            Deny from all
        </LocationMatch>

    However, it does block access to https://my.server/foo. This presents a major problem for cases where I have some project running at https://my.server/project (which has to require SSL client certificate authorization), and there is a git repository for that project at https://my.server/git/project which cannot require a SSL client certificate. Since the /git/project URL also gets matched against /project Location blocks, such a configuration seems impossible given my current findings.

    Question: Why is this happening, and how do I solve my problem? In the end, I want to require SSL client certificate authorization for the whole server except for /git and /someLocation, with as minimal configuration as possible (so I don't have to modify the configuration each time something new is deployed or a new git repository is added).

    Note: I rewrote my question (instead of just adding more updates at the bottom) to take into account my new findings and hopefully make this more clear.
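    One workaround sketch, on the theory that the stray matches come from Apache re-evaluating Location sections against the PATH_INFO portion of CGI URLs: attach the exception to the filesystem path of the handler instead of the URL space, since <Directory>/<Files> sections are evaluated against the real script file. This is untested against this exact setup, and the path is the placeholder from the config above:

        <Directory /path/to>
            <Files gitolite-shell>
                SSLVerifyClient none
            </Files>
        </Directory>

    If it works, it also avoids depending on Location matching entirely for the git handler, which fits the "don't touch the config per repository" goal.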

    Read the article

  • Windows 7, HTTPS WebDav: Asks for password twice and fails. Any workarounds?

    - by AutoDMC
    Howdy. I have a Dav server running with PHP SabreDav (code.google.com/p/sabredav/wiki/Windows) on Cherokee at an HTTPS-secured URL. It's set to use https, and uses Digest authentication. I can log in with multiple browsers and a few third-party clients (BitKinex and Java AnyClient can connect and browse as well, caveats below).

    However, when attempting to log in with Windows 7 (surprise, surprise), it asks for my password twice, then tells me that my folder is invalid. I have verified that the server is using Digest authentication. I've verified multiple times that third-party software can connect. I even went out and bought a GoDaddy SSL certificate so my SSL wouldn't be self-signed anymore.

    I've applied the registry hacks here: support.microsoft.com/kb/943280 (note that the article says the "fix" already exists for Windows 7; I just need magical registry hax to get it to work). I've applied the registry hacks here: support.microsoft.com/kb/941050. I've applied the registry hacks here: support.microsoft.com/kb/841215 (supposedly allows Basic auth, which shouldn't apply, but why not?).

    All to no avail; Windows continues to ask for my password twice, then states that "The folder you entered does not appear to be valid. Please choose another."

    Try the command line? Sure:

        NET USE "https://dav.example.com/" password /USER:me            (System error 59)
        NET USE "https://dav.example.com/"                              (System error 1790)
        NET USE "https://dav.example.com/subdir/" password /USER:me     (System error 59)
        NET USE "https://dav.example.com/subdir/"                       (System error 1790)

    For good luck: ping dav.example.com ... works. And again, web browsers can access the share just fine, and so can third-party tools.

    Best I can tell at this point is "HAHA, NO WEBDAV FOR YOU ON WINDOWS 7", which would be fine except everyone who will be using this application... uses Windows 7. And most are not as persistent or pugnacious as I am. I feel like I've burned through every random suggestion I've found anywhere in the first 10 pages of Google on every search term I can think of.

    Any ideas? I need it to be WebDAV, I need it to be over HTTPS, and I really do need a method to access it from Windows 7.

    EXTRA DETAIL: The "third party" programs I've tried have either been buggy, incomplete, or have silly ... "glitches." For example, BitKinex seems to fixate on any HTTP error codes sent, so if there's a glitch reading a directory, BAM, that directory is always listed empty. Long directory listings also show up as blank, even though the transfer panel shows that directory listing is still taking place. In any case, BitKinex is useless for development purposes for the reasons above. And besides, I'm building this for people other than myself, people who will want to get this dav share working "the regular way."
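    For what it's worth, the registry hacks referenced above boil down to values under the WebClient service. A sketch of the Basic-auth one from KB841215/KB941050 - the value name and data are worth double-checking against the KB before applying:

        reg add HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters /v BasicAuthLevel /t REG_DWORD /d 2
        net stop webclient
        net start webclient

    Level 2 allows Basic auth over both SSL and non-SSL shares. It shouldn't matter for a Digest-only server, which itself hints that the failure is elsewhere in the Windows 7 WebDAV redirector's handshake rather than in the auth scheme.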

    Read the article

  • Windows Media Player won't launch on Vista - how to repair or reinstall it?

    - by rpm1200
    My friend asked me to look at her Acer Aspire laptop with Vista Home Premium as it is no longer playing DVDs. I found that Windows Media Player would not launch. I found this thread, which contained a number of suggestions, none of which solved the problem. Here is what I tried:

    - Tried running WMP via desktop shortcut, QuickLaunch bar, or going to Program Files\Windows Media Player\wmplayer.exe. In all cases, wmplayer would launch then terminate immediately (verified through the Processes tab in Task Manager).
    - Tried running wmplayer.exe as Administrator. The UAC dialog would come up, I'd approve, then wmplayer would launch and terminate immediately.
    - Uninstalled all non-Microsoft media programs except RealPlayer, iTunes, QuickTime, and Acer Arcade (the laptop owner uses all those apps).
    - Tried running Program Files\Windows Media Player\setup_wm.exe as Administrator; it launched but said that a newer version of WMP was already installed.
    - Deleted the "Windows Media" folder located under %userprofile%\appdata\local\Microsoft, then tried starting WMP - wmplayer would launch and terminate immediately.
    - Registered wmp.dll by typing "regsvr32 wmp.dll" in an Administrator cmd window, then tried starting WMP - wmplayer would launch and terminate immediately.
    - Ran "SFC /SCANFILE" in an Administrator cmd window - got an error message that it found invalid system files and could not fix them, so look at the log file cbs.log. The log file shows that there are broken files associated with Windows Sidebar (which the user does not use) but none relating to WMP.
    - Logged off to safe mode and ran "SFC /SCANFILE" in an Administrator cmd window again - same results.
    - Tried to download and install XP WMP - the microsoft.com site recognizes the OS as Genuine and allows the download, but when I launch the installer it says the system is not Genuine. Clicking the link directs me back to IE where I can authenticate the system as Genuine. The installer still fails to recognize the system as Genuine. It is a Genuine Vista installation.
    - Tried to run this update (KB931621). The installer said it did not apply to the system.
    - Set Windows Media Player as default in Program Access and Defaults. Same results.
    - Tried running "for %a in (%systemroot%\system32\wm*.dll) do regsvr32 /s %a" in an Administrator cmd window - same results.
    - Went to this Knowledge Base article (947541) and ran the Microsoft Fix It. The Fix It ran successfully, but WMP would still launch and terminate immediately.
    - Multiple reboots in the process of doing all of these steps.

    After all this, I looked in the Application and Security logs. No events pertaining to WMP were logged. The computer was preinstalled with Vista Home Premium and I have the Acer backup DVDs which will reimage the drive. I do not have Vista install DVDs. Reimaging the system is not an option. I'd also rather not restore the system to an earlier point unless it's absolutely necessary. What else can I do to repair or reinstall WMP?

    Read the article

  • LXC Container Networking

    - by digitaladdictions
    I just started to experiment with LXC containers. I was able to create a container and start it up, but I cannot get DHCP to assign the container an IP address. If I assign a static address, the container can ping the host IP but nothing beyond the host. The host is CentOS 6.5 and the guest is Ubuntu 14.04 LTS. I used the template downloaded by the lxc-create -t download -n cn-01 command. Since I am trying to get an IP address on the same subnet as the host, I don't believe I should need the iptables masquerading rule, but I added it anyway; same with IP forwarding. I compiled LXC by hand from the following source: https://linuxcontainers.org/downloads/lxc-1.0.4.tar.gz

    Host Operating System Version:
      #> cat /etc/redhat-release
      CentOS release 6.5 (Final)
      #> uname -a
      Linux localhost.localdomain 2.6.32-431.20.3.el6.x86_64 #1 SMP Thu Jun 19 21:14:45 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

    Container Config:
      #> cat /usr/local/var/lib/lxc/cn-01/config
      # Template used to create this container: /usr/local/share/lxc/templates/lxc-download
      # Parameters passed to the template:
      # For additional config options, please look at lxc.container.conf(5)
      # Distribution configuration
      lxc.include = /usr/local/share/lxc/config/ubuntu.common.conf
      lxc.arch = x86_64
      # Container specific configuration
      lxc.rootfs = /usr/local/var/lib/lxc/cn-01/rootfs
      lxc.utsname = cn-01
      # Network configuration
      lxc.network.type = veth
      lxc.network.flags = up
      lxc.network.link = br0

    LXC default.conf:
      #> cat /usr/local/etc/lxc/default.conf
      lxc.network.type = veth
      lxc.network.link = br0
      lxc.network.flags = up

      #> lxc-checkconfig
      Kernel configuration not found at /proc/config.gz; searching...
      Kernel configuration found at /boot/config-2.6.32-431.20.3.el6.x86_64
      --- Namespaces ---
      Namespaces: enabled
      Utsname namespace: enabled
      Ipc namespace: enabled
      Pid namespace: enabled
      User namespace: enabled
      Network namespace: enabled
      Multiple /dev/pts instances: enabled
      --- Control groups ---
      Cgroup: enabled
      Cgroup namespace: enabled
      Cgroup device: enabled
      Cgroup sched: enabled
      Cgroup cpu account: enabled
      Cgroup memory controller: /usr/local/bin/lxc-checkconfig: line 103: [: too many arguments
      enabled
      Cgroup cpuset: enabled
      --- Misc ---
      Veth pair device: enabled
      Macvlan: enabled
      Vlan: enabled
      File capabilities: /usr/local/bin/lxc-checkconfig: line 118: [: -gt: unary operator expected
      Note : Before booting a new kernel, you can check its configuration
      usage : CONFIG=/path/to/config /usr/local/bin/lxc-checkconfig

    Network Config (HOST):
      #> cat /etc/sysconfig/network-scripts/ifcfg-br0
      DEVICE=br0
      TYPE=Bridge
      BOOTPROTO=dhcp
      ONBOOT=yes
      #> cat /etc/sysconfig/network-scripts/ifcfg-eth0
      DEVICE=eth0
      ONBOOT=yes
      TYPE=Ethernet
      IPV6INIT=no
      USERCTL=no
      BRIDGE=br0
      #> cat /etc/networks
      default 0.0.0.0
      loopback 127.0.0.0
      link-local 169.254.0.0
      #> ip a s
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
          inet6 ::1/128 scope host valid_lft forever preferred_lft forever
      2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
          link/ether 00:0c:29:12:30:f2 brd ff:ff:ff:ff:ff:ff
          inet6 fe80::20c:29ff:fe12:30f2/64 scope link valid_lft forever preferred_lft forever
      3: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
          link/ether 42:7e:43:b3:61:c5 brd ff:ff:ff:ff:ff:ff
      4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
          link/ether 00:0c:29:12:30:f2 brd ff:ff:ff:ff:ff:ff
          inet 10.60.70.121/24 brd 10.60.70.255 scope global br0
          inet6 fe80::20c:29ff:fe12:30f2/64 scope link valid_lft forever preferred_lft forever
      12: vethT6BGL2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
          link/ether fe:a1:69:af:50:17 brd ff:ff:ff:ff:ff:ff
          inet6 fe80::fca1:69ff:feaf:5017/64 scope link valid_lft forever preferred_lft forever
      #> brctl show
      bridge name     bridge id               STP enabled     interfaces
      br0             8000.000c291230f2       no              eth0
                                                              vethT6BGL2
      pan0            8000.000000000000       no
      #> cat /proc/sys/net/ipv4/ip_forward
      1

      # Generated by iptables-save v1.4.7 on Fri Jul 11 15:11:36 2014
      *nat
      :PREROUTING ACCEPT [34:6287]
      :POSTROUTING ACCEPT [0:0]
      :OUTPUT ACCEPT [0:0]
      -A POSTROUTING -o eth0 -j MASQUERADE
      COMMIT
      # Completed on Fri Jul 11 15:11:36 2014

    Network Config (Container):
      #> cat /etc/network/interfaces
      # This file describes the network interfaces available on your system
      # and how to activate them. For more information, see interfaces(5).
      # The loopback network interface
      auto lo
      iface lo inet loopback
      auto eth0
      iface eth0 inet dhcp
      #> ip a s
      11: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
          link/ether 02:69:fb:42:ee:d7 brd ff:ff:ff:ff:ff:ff
          inet6 fe80::69:fbff:fe42:eed7/64 scope link valid_lft forever preferred_lft forever
      13: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
          inet6 ::1/128 scope host valid_lft forever preferred_lft forever
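
    Two diagnostic steps that may narrow this down - a sketch, not a definitive fix. (The host's 00:0c:29 MAC prefix suggests a VMware guest; a vSwitch commonly drops frames from the container's unknown MAC unless promiscuous mode / forged transmits are allowed.)

      # On the CentOS host: watch the bridge for the container's DHCP traffic.
      tcpdump -n -i br0 port 67 or port 68

      # Inside the container, while tcpdump runs: force a DHCP exchange.
      dhclient -v eth0

      # If DISCOVER packets show up on br0 but no OFFER ever comes back,
      # the frames are being dropped upstream of the host, not by LXC.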

    Read the article

  • How do I make an external hard drive keep the same drive letter permanently?

    - by andygrunt
    I have a desktop PC (2002 vintage) running Windows XP that I turn on about 2 or 3 times per week. I have a mains-powered 250Gb Western Digital hard disk connected to it via USB. I always turn the hard disk on before the PC so it's up and running as the PC boots. When I first connected the external hard disk, the PC assigned it a letter ('i', if it matters) and I've installed software to it and created shortcuts to various files and folders on the disk using that letter.

    For years everything was fine; then one day I booted the PC and the hard disk was assigned a different letter. I'd then have to go into My Computer > Manage > Disk Management and manually change the letter back to 'i'. If I then rebooted the PC, the hard disk would usually still be 'i', but after the next reboot it would be some other random letter and I'd have to manually change it back to 'i'. This would go on for some time, then there'd be periods when it would always be 'i'; then, for no apparent reason (no new devices added, for example), the drive letter would start changing again. At the moment it's in random-drive-letter mood, so I thought I'd ask the following question: how do I assign the external hard disk to be 'i' permanently?

    Answer: Thanks Molly, that seems to have done the trick (after a little fiddling) - slightly disappointed there wasn't a way to do it within Windows without installing something else, though. For anyone else trying this, it wasn't completely straightforward, so here's what happened with me. I installed USBDLM as per the instructions on its website. I guessed that I had to assign the first USB letter to 'i', so I replaced the 'Letter1=' lines with 'Letter=I' in the ini file. To test it, I rebooted the PC, only to find it came back up with the display set to 640x480 in 16 colours. After some investigation, I re-installed the display drivers, rebooted, and set the display back to its usual setting. The external hard disk now gets set to 'i', but I found I had to re-apply sharing status to it so it could be seen from my laptop, which is on the same network.

    The end result of all this is that it now does what I wanted, although it acts as though the hard drive has just been plugged in a few seconds after the Windows desktop appears: the little box appears with a progress bar as it searches through the contents of the 'new' hard drive, and I eventually get a dialogue box saying 'This disk or device contains more than one type of content. What do you want Windows to do?', listing options such as play media files, print the pictures or open the folder to view the files. This is a tiny pain I wish didn't happen, but not exactly a huge price to pay. Other than that, it seems to work fine :)

    Looks like I spoke too soon... Every time I reboot, I have to re-share the 'i' drive (which I didn't have to do before) so it can be seen by my laptop on the same network. Any ideas how to make that permanent?
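
    A sketch of the two pieces of config involved - the exact USBDLM key names should be checked against its documentation, and the share name below is made up:

      ; usbdlm.ini (sketch) - pin the first USB drive to I:
      [DriveLetters]
      Letter=I

      rem Startup/logon script (sketch) to recreate the share on each boot,
      rem since shares on late-arriving USB drives are often not restored:
      net share ExternalI=I:\ /UNLIMITED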

    Read the article

  • OpenLayers - Redraw() layer. Add / Remove layer.

    - by Ozaki
    TLDR: I have an OpenLayers map with a layer called 'tracking'. I want to remove the layer and add it back in. I have an image 'imageFeature' on that layer which rotates on load to the heading being set; I want it to update this rotation, which is set in the 'styleMap' of the 'tracking' layer.

    The sequence: I set the var 'styleMap' to apply the external image and rotation. The 'imageFeature' is added to the layer at the coords specified. 'imageFeature' is removed. 'imageFeature' is added again in its new location. Rotation is not applied. Since the 'styleMap' applies to the layer, I think I have to remove the layer and add it again, rather than just the 'imageFeature'.

    The layer:
      var tracking = new OpenLayers.Layer.GML("Tracking", "coordinates.json", {
          format: OpenLayers.Format.GeoJSON,
          styleMap: styleMap
      });

    The styleMap:
      var styleMap = new OpenLayers.StyleMap({
          fillOpacity: 1,
          pointRadius: 10,
          rotation: heading
      });

    Now, wrapped in a timed function, the imageFeature:
      map.layers[3].addFeatures(new OpenLayers.Feature.Vector(
          new OpenLayers.Geometry.Point(longitude, latitude),
          {rotation: heading, type: parseInt(Math.random() * 3)}
      ));

    'type' refers to a lookup of one of three images:
      var lookup = {
          0: {externalGraphic: "Image1.png", rotation: heading},
          1: {externalGraphic: "Image2.png", rotation: heading},
          2: {externalGraphic: "Image3.png", rotation: heading}
      }
      styleMap.addUniqueValueRules("default", "type", lookup);

    I have tried the redraw() function, but it returns "tracking is undefined" or "map.layers[2] is undefined":
      tracking.redraw(true);
      map.layers[2].redraw(true);

    'heading' is a variable from a JSON feed:
      var heading = 13.542;

    So far I can't get anything to work; it will only rotate the image on load. The image does move in coordinates as it should, though. So what am I doing wrong with the redraw function, and how can I get this image to rotate live? Thanks in advance -Ozaki

    Add: I managed to get map.layers[2].redraw(true); to successfully redraw layer 2, but it still does not update the rotation. I am thinking it's because the styleMap is not updating. It runs through the styleMap every n seconds, but there are no updates to the rotation, even though the heading variable is updating correctly if I put a watch on it in Firebug.
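
    For what it's worth, a sketch of the attribute-substitution approach: instead of baking the current value of the global 'heading' into the styleMap (which is evaluated once, at creation), let the style read the rotation from each feature's attributes so a plain redraw picks up fresh values. The attribute name 'angle' is made up, and the same substitution would replace 'rotation: heading' inside the lookup entries as well:

      // Sketch: "${angle}" is OpenLayers 2 attribute substitution, resolved
      // per-feature at draw time rather than once at styleMap creation.
      var styleMap = new OpenLayers.StyleMap({
          fillOpacity: 1,
          pointRadius: 10,
          rotation: "${angle}"
      });

      // On each feed update: replace the feature with fresh attributes,
      // then redraw; the style re-reads 'angle' from the feature.
      var layer = map.layers[3];   // your tracking layer reference
      layer.destroyFeatures();
      layer.addFeatures([new OpenLayers.Feature.Vector(
          new OpenLayers.Geometry.Point(longitude, latitude),
          { angle: heading, type: parseInt(Math.random() * 3) }
      )]);
      layer.redraw();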

    Read the article

  • Silverlight 4 WCF RIA Services and MVVM is not as simple as it looks

    - by Thomas Jaskula
    [Disclaimer: I'm an ASP.NET MVC developer] Hi, I'm looking for some best practices for implementing the MVVM pattern with WCF RIA Services in Silverlight 4. I'm not looking to use MEF or IoC for locating my ViewModels, and I don't want to use other frameworks like Prism or the MVVM Light Toolkit. What I would like to know is how to apply the MVVM pattern with Silverlight 4 and WCF RIA Services alone.

    I found many examples on the Internet showing how wonderful it is to drag and drop a datasource onto the view and the job is done (it reminds me of my first VB6 development). I tried to implement MVVM with WCF RIA and it's not straightforward at all. If I understand correctly, the ViewModel should contain all the logic so it can be unit tested in isolation, but when it comes to combining it with WCF RIA it's another story. I have the following questions:

    - Can I use a generated metadata class as the model? It would be easier than writing everything from scratch.
    - As far as I can see, the only ways to get data are through the DomainContext or through direct binding in the view (a local resource). I don't want direct binding in the view - it's not testable at all. On the other hand, the DomainContext doesn't expose any single entity!!! All I have is the EntitySet, which I can bind to a datagrid. How do I bind a single entity to a DataForm from the ViewModel? (A sketch of one approach appears after this list.)
    - How do I update the model back to the database?
    - How do I navigate from one entity to a collection of its items? For example, if I have a Company entity, I would like to show a DataForm to update the entity's information and a datagrid to show the company's addresses. When saving the form I would like to save the information to Company and also, for example, which address was selected as active.

    Please help me understand how to do this well. Or maybe I should drop WCF RIA and do it with plain WCF from scratch? What do you think?
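
    A minimal ViewModel sketch for the single-entity question, assuming the usual code-generated shapes (CompanyDomainContext, GetCompaniesQuery, a Company entity with an Id) - those names are assumptions, not part of any real project, and the client namespace differs between RIA Services betas and RTM:

      using System.ComponentModel;
      using System.Linq;
      using System.ServiceModel.DomainServices.Client;

      public class CompanyViewModel : INotifyPropertyChanged
      {
          private readonly CompanyDomainContext _context = new CompanyDomainContext();
          private Company _current;

          // A single entity the view's DataForm can bind to ({Binding Current}).
          public Company Current
          {
              get { return _current; }
              set { _current = value; RaisePropertyChanged("Current"); }
          }

          // Load one entity out of the EntitySet via a filtered query.
          public void Load(int companyId)
          {
              _context.Load(
                  _context.GetCompaniesQuery().Where(c => c.Id == companyId),
                  op => { if (!op.HasError) Current = op.Entities.FirstOrDefault(); },
                  null);
          }

          // Push pending edits (the DataForm writes into Current) to the server.
          public void Save()
          {
              _context.SubmitChanges(op => { /* surface op.HasError to the view */ }, null);
          }

          public event PropertyChangedEventHandler PropertyChanged;
          private void RaisePropertyChanged(string name)
          {
              var handler = PropertyChanged;
              if (handler != null) handler(this, new PropertyChangedEventArgs(name));
          }
      }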

    Read the article

  • drawing thick, textured lines in OpenGL

    - by NateS
    I need to draw thick, textured line segments in OpenGL - actually, curves made out of short line segments. Here is what I have: in the upper left is an example of two connected line segments. The second image shows that once the lines are given width, they overlap, and if I apply a texture that uses translucency, the overlap looks terrible. The third image shows both lines shortened by half the amount necessary to make the thick line corners just touch; this way I can fill the space between the lines with a triangle. On the right you can see this works well (ignore the horizontal line where the crappy texture repeats). But it doesn't always work well: in the bottom left the curve is made of many short line segments. Note the incorrect texture application.

    My program is written in Java, making use of the LWJGL OpenGL binding (and minor use of Slick, a 2D helper framework). I've made a zip file that contains an executable JAR so you can easily see the problem. It also has the Java code (there is only one source file) and an Eclipse project, so you can instantly run it through Eclipse and hack at it if you like. Here she is: http://n4te.com/temp/lines.zip - to run, execute "java -jar lines.jar". You may need "-Djava.library.path=." before -jar if you are not on Windows. Press space to toggle texture/wireframe. The wireframe only shows the line segments; the triangle between them isn't drawn. I don't need to draw arbitrary lines, just Bezier curves similar to what you see in the program. Sorry the code is a bit messy; once I have a solution I will refactor.

    I have investigated using GLUtessellator. It greatly simplified construction of the line, but I found that applying the texture wasn't perfect: it worked most of the time (top image below), but long vertical curves would have severe texture distortion (bottom image below). That turned out to be much easier to code, but in the end worse than my approach.

    I believe what I'm trying to do is called "line tessellation" or "stroke tessellation". I assume this has been solved already? Is there standard code I can leverage? Otherwise, how can I fix my code so that the texture does not freak out on short, vertical curves?
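
    One standard way to stop the overlap without shortening segments is a miter join: adjacent segments share their two corner vertices, computed from the segment normals. A sketch of the corner-offset math (plain 2D geometry, independent of LWJGL):

      // Sketch: miter join for two adjacent 2D segments. n1 and n2 are the
      // unit normals of the segments meeting at the joint; halfWidth is half
      // the stroke width. Returns the offset from the joint to the outer
      // corner vertex; negate it for the inner corner.
      static float[] miterOffset(float n1x, float n1y,
                                 float n2x, float n2y, float halfWidth) {
          float mx = n1x + n2x, my = n1y + n2y;
          float len = (float) Math.sqrt(mx * mx + my * my);
          mx /= len; my /= len;                              // unit miter direction
          float scale = halfWidth / (mx * n1x + my * n1y);   // extend to meet edges
          // In real use, clamp 'scale' for near-180-degree joints, where the
          // denominator approaches zero and the miter shoots off to infinity.
          return new float[] { mx * scale, my * scale };
      }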

    Read the article

  • Facebook OAuth Logout

    - by Derek Troy-West
    I have an application that integrates with Facebook using OAuth 2. I can authorize with FB and query their REST and Graph APIs perfectly well, but when I authorize, an active browser session is created with FB. I can then log out of my application just fine, but the session with FB persists, so if anyone else uses the browser they will see the previous user's FB account (unless the previous user manually logs out of FB as well). The steps I take to authorize are:

    1. Call [LINK: graph.facebook.com/oauth/authorize?client_id...]. This step opens a Facebook login/connect window if the user's browser doesn't already have an active FB session. Once they log in to Facebook, they are redirected to my site with a code I can exchange for an OAuth token.
    2. Call [LINK: graph.facebook.com/oauth/access_token?client_id..] with the code from (1).

    Now I have an OAuth token, the user's browser is logged into my site, and into FB. I call a bunch of APIs to do stuff, e.g. [LINK: graph.facebook.com/me?access_token=..].

    Let's say my user wants to log out of my site. The FB terms and conditions demand that I perform single sign-off, so when the user logs out of my site, they are also logged out of Facebook. There are arguments that this is a bit daft, but I'm happy to comply if there is any way of actually achieving it. I have seen suggestions that:

    A. I use the JavaScript API to log out: FB.Connect.logout(). Well, I tried using that, but it didn't work, and I'm not sure exactly how it could, as I don't use the JavaScript API in any way on my site. The session isn't maintained or created by the JavaScript API, so I'm not sure how it's supposed to expire it either.

    B. Use [LINK: facebook.com/logout.php]. This was suggested by an admin in the Facebook forums some time ago. The example given related to the old way of getting FB sessions (non-OAuth), so I don't think I can apply it in my case.

    C. Use the old REST API's expireSession or revokeAuthorization. I tried both of these, and while they do expire the OAuth token, they don't invalidate the session the browser is currently using, so they have no effect: the user is not logged out of Facebook.

    I'm really at a bit of a loose end. The Facebook documentation is patchy, ambiguous and pretty poor, and the support on the forums is non-existent; at the moment I can't even log in to the Facebook forum, and aside from that, their own FB Connect integration doesn't even work on the forum itself. Doesn't inspire much confidence. Ta for any help you can offer. Derek

    ps. Had to change HTTPS to LINK, not enough karma to post links, which is probably fair enough.
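
    Regarding option B: Facebook's OAuth-era docs did describe a logout.php variant that takes the OAuth token rather than the old session key. A sketch, with the parameter names to be verified against the current docs and the redirect URL obviously substituted:

      // Sketch: end the facebook.com session, then land back on your site.
      // Parameter names are per Facebook's OAuth docs of the era - verify them.
      window.location = "https://www.facebook.com/logout.php" +
          "?next=" + encodeURIComponent("https://yoursite.example/loggedout") +
          "&access_token=" + oauthToken;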

    Read the article

  • WinVerifyTrust API problem

    - by Shayan
    I'm using the WinVerifyTrust API on Windows XP and I don't want any kind of user interaction. But when I set the WTD_UI_NONE attribute, although it no longer shows any dialog boxes, it waits for a long time on files that would otherwise have prompted the user. This is my code:

      WINTRUST_FILE_INFO FileData;
      memset(&FileData, 0, sizeof(FileData));
      FileData.cbStruct = sizeof(WINTRUST_FILE_INFO);
      wchar_t fileName[32769];
      FileData.pcwszFilePath = fileName;
      FileData.hFile = NULL;
      FileData.pgKnownSubject = NULL;

      /*
      WVTPolicyGUID specifies the policy to apply to the file.
      The WINTRUST_ACTION_GENERIC_VERIFY_V2 policy checks:

      1) The certificate used to sign the file chains up to a root
      certificate located in the trusted root certificate store. This
      implies that the identity of the publisher has been verified by
      a certification authority.

      2) In cases where user interface is displayed (which this example
      does not do), WinVerifyTrust will check whether the end entity
      certificate is stored in the trusted publisher store, implying
      that the user trusts content from this publisher.

      3) The end entity certificate has sufficient permission to sign
      code, as indicated by the presence of a code signing EKU or no EKU.
      */
      GUID WVTPolicyGUID = WINTRUST_ACTION_GENERIC_VERIFY_V2;
      WINTRUST_DATA WinTrustData;

      // Initialize the WinVerifyTrust input data structure.
      // Default all fields to 0.
      memset(&WinTrustData, 0, sizeof(WinTrustData));
      WinTrustData.cbStruct = sizeof(WinTrustData);

      // Use default code signing EKU.
      WinTrustData.pPolicyCallbackData = NULL;

      // No data to pass to SIP.
      WinTrustData.pSIPClientData = NULL;

      // Disable WVT UI.
      WinTrustData.dwUIChoice = WTD_UI_NONE;

      // No revocation checking.
      WinTrustData.fdwRevocationChecks = WTD_REVOKE_NONE;

      // Verify an embedded signature on a file.
      WinTrustData.dwUnionChoice = WTD_CHOICE_FILE;

      // Default verification.
      WinTrustData.dwStateAction = 0;

      // Not applicable for default verification of embedded signature.
      WinTrustData.hWVTStateData = NULL;

      // Not used.
      WinTrustData.pwszURLReference = NULL;

      // Default.
      WinTrustData.dwProvFlags = WTD_REVOCATION_CHECK_END_CERT;

      // This is not applicable if there is no UI because it changes
      // the UI to accommodate running applications instead of
      // installing applications.
      WinTrustData.dwUIContext = 0;

      // Set pFile.
      WinTrustData.pFile = &FileData;

      // WinVerifyTrust verifies signatures as specified by the GUID
      // and Wintrust_Data.
      lStatus = WinVerifyTrust(
          (HWND)INVALID_HANDLE_VALUE,
          &WVTPolicyGUID,
          &WinTrustData);
      printf("%x\n", lStatus);
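
    One thing worth noting in the listing above: the "No revocation checking" intent conflicts with dwProvFlags = WTD_REVOCATION_CHECK_END_CERT, which asks WinVerifyTrust to fetch revocation data for the end certificate over the network, and those fetches can stall for a long time when there is no UI to report them. A sketch of a network-quiet configuration (the flags are real wintrust.h values; whether this fully removes the delay here is an assumption to test):

      // Keep the UI suppressed, skip revocation, and forbid online retrievals
      // so verification only uses locally cached data.
      WinTrustData.dwUIChoice = WTD_UI_NONE;
      WinTrustData.fdwRevocationChecks = WTD_REVOKE_NONE;
      WinTrustData.dwProvFlags = WTD_CACHE_ONLY_URL_RETRIEVAL;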

    Read the article

  • System.UriFormatException: Invalid URI: The hostname could not be parsed.

    - by Shane
    All of a sudden I'm getting the following error on my website. It doesn't access a DB; it's just a simple website using .NET 2.0. I did recently apply the available Windows Server 2003 service packs - could that have changed things? I should add that the error randomly comes and goes, and has been doing so today and yesterday; I leave it for 5 minutes and the error is gone.

    Server Error in '/' Application.
    Invalid URI: The hostname could not be parsed.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.UriFormatException: Invalid URI: The hostname could not be parsed.
    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:
    [UriFormatException: Invalid URI: The hostname could not be parsed.]
       System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind) +5367536
       System.Uri.CreateUri(Uri baseUri, String relativeUri, Boolean dontEscape) +31
       System.Uri..ctor(Uri baseUri, String relativeUri) +34
       System.Net.HttpWebRequest.CheckResubmit(Exception& e) +5300867
    [WebException: Cannot handle redirect from HTTP/HTTPS protocols to other dissimilar ones.]
       System.Net.HttpWebRequest.GetResponse() +5314029
       System.Xml.XmlDownloadManager.GetNonFileStream(Uri uri, ICredentials credentials) +69
       System.Xml.XmlDownloadManager.GetStream(Uri uri, ICredentials credentials) +3929371
       System.Xml.XmlUrlResolver.GetEntity(Uri absoluteUri, String role, Type ofObjectToReturn) +54
       System.Xml.XmlTextReaderImpl.OpenUrlDelegate(Object xmlResolver) +74
       System.Threading.CompressedStack.runTryCode(Object userData) +70
       System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) +0
       System.Threading.CompressedStack.Run(CompressedStack compressedStack, ContextCallback callback, Object state) +108
       System.Xml.XmlTextReaderImpl.OpenUrl() +186
       System.Xml.XmlTextReaderImpl.Read() +208
       System.Xml.XmlLoader.Load(XmlDocument doc, XmlReader reader, Boolean preserveWhitespace) +112
       System.Xml.XmlDocument.Load(XmlReader reader) +108
       System.Web.UI.WebControls.XmlDataSource.PopulateXmlDocument(XmlDocument document, CacheDependency& dataCacheDependency, CacheDependency& transformCacheDependency) +303
       System.Web.UI.WebControls.XmlDataSource.GetXmlDocument() +153
       System.Web.UI.WebControls.XmlDataSourceView.ExecuteSelect(DataSourceSelectArguments arguments) +29
       System.Web.UI.WebControls.BaseDataList.GetData() +39
       System.Web.UI.WebControls.DataList.CreateControlHierarchy(Boolean useDataSource) +264
       System.Web.UI.WebControls.BaseDataList.OnDataBinding(EventArgs e) +55
       System.Web.UI.WebControls.BaseDataList.DataBind() +75
       System.Web.UI.WebControls.BaseDataList.EnsureDataBound() +55
       System.Web.UI.WebControls.BaseDataList.CreateChildControls() +65
       System.Web.UI.Control.EnsureChildControls() +97
       System.Web.UI.Control.PreRenderRecursiveInternal() +53
       System.Web.UI.Control.PreRenderRecursiveInternal() +202
       System.Web.UI.Control.PreRenderRecursiveInternal() +202
       System.Web.UI.Control.PreRenderRecursiveInternal() +202
       System.Web.UI.Control.PreRenderRecursiveInternal() +202
       System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +4588

    Read the article

  • iPhone OpenGL ES: How do I use the gravity vector to correctly transform the scene for augmented reality?

    - by gpdawson
    I'm trying to figure out how to get an OpenGL object to be displayed correctly according to the device orientation (i.e. according to the gravity vector from the accelerometer and the heading from the compass). The GLGravity sample project has an example which is almost like this (despite ignoring heading), but it has some glitches. For example, the teapot jumps 180 degrees as the device's viewing angle crosses the horizon, and it also rotates spuriously if you tilt the device from portrait into landscape. This is fine for the context of that app, as it just shows off an object and it doesn't matter that it does these things. But it means the code just doesn't work when you attempt to emulate real-life viewing of an OpenGL object according to the device's orientation. What happens is that it almost works, but the heading rotation you apply from the compass gets "corrupted" by the spurious additional rotations seen in the GLGravity example project.

    Can anyone provide sample code that shows how to adjust correctly for the device orientation (i.e. the gravity vector), or fix the GLGravity example so that it doesn't include spurious heading changes?

      //Clear matrix to be used to rotate from the current referential to one based on the gravity vector
      bzero(matrix, sizeof(matrix));
      matrix[3][3] = 1.0;

      //Setup first matrix column as gravity vector
      matrix[0][0] = accel[0] / length;
      matrix[0][1] = accel[1] / length;
      matrix[0][2] = accel[2] / length;

      //Setup second matrix column as an arbitrary vector in the plane perpendicular to the
      //gravity vector {Gx, Gy, Gz}, defined by the equation "Gx * x + Gy * y + Gz * z = 0",
      //in which we arbitrarily set x=0 and y=1
      matrix[1][0] = 0.0;
      matrix[1][1] = 1.0;
      matrix[1][2] = -accel[1] / accel[2];
      length = sqrtf(matrix[1][0] * matrix[1][0] + matrix[1][1] * matrix[1][1] + matrix[1][2] * matrix[1][2]);
      matrix[1][0] /= length;
      matrix[1][1] /= length;
      matrix[1][2] /= length;

      //Setup third matrix column as the cross product of the first two
      matrix[2][0] = matrix[0][1] * matrix[1][2] - matrix[0][2] * matrix[1][1];
      matrix[2][1] = matrix[1][0] * matrix[0][2] - matrix[1][2] * matrix[0][0];
      matrix[2][2] = matrix[0][0] * matrix[1][1] - matrix[0][1] * matrix[1][0];

      //Finally load matrix
      glMultMatrixf((GLfloat*)matrix);
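
    The horizon jump plausibly comes from the second basis vector: with x=0, y=1 fixed, matrix[1][2] = -accel[1] / accel[2] divides by a value that passes through zero as the device crosses the horizon, flipping the column's sign. A sketch of a degeneracy-free alternative, building that column as a cross product against a reference axis (variable names follow the listing above; this is an untested sketch, not the GLGravity fix):

      // Sketch (needs <math.h>): pick the reference axis least aligned with
      // gravity so the cross product never degenerates. matrix[0] already
      // holds the normalized gravity vector, as in the listing above.
      float ref[3] = { 0.0f, 1.0f, 0.0f };
      if (fabsf(matrix[0][1]) > 0.9f) {      // gravity nearly along Y: use X
          ref[0] = 1.0f; ref[1] = 0.0f;
      }
      // second column = normalize(cross(gravity, ref))
      matrix[1][0] = matrix[0][1] * ref[2] - matrix[0][2] * ref[1];
      matrix[1][1] = matrix[0][2] * ref[0] - matrix[0][0] * ref[2];
      matrix[1][2] = matrix[0][0] * ref[1] - matrix[0][1] * ref[0];
      float len2 = sqrtf(matrix[1][0] * matrix[1][0]
                       + matrix[1][1] * matrix[1][1]
                       + matrix[1][2] * matrix[1][2]);
      matrix[1][0] /= len2; matrix[1][1] /= len2; matrix[1][2] /= len2;
      // The third column stays the cross product of the first two, as before.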

    Read the article

  • Implementing Unit Testing on the iPhone

    - by james.ingham
    Hi, so I've followed this tutorial to set up unit testing on my app, and got a little stuck. At bullet point 8 the tutorial shows an image of what I should be expecting when I build. However, that isn't what I get. I get this error message: "Command /bin/sh failed with exit code 1", as well as the error message the unit test has created. When I expand on the first error I get this:

      PhaseScriptExecution "Run Script" "build/3D Pool.build/Debug-iphonesimulator/LogicTests.build/Script-1A6BA6AE10F28F40008AC2A8.sh"
      cd "/Users/james/Desktop/FYP/3D Pool"
      setenv ACTION build
      setenv ALTERNATE_GROUP staff
      ...
      setenv XCODE_VERSION_MAJOR 0300
      setenv XCODE_VERSION_MINOR 0320
      setenv YACC /Developer/usr/bin/yacc
      /bin/sh -c "\"/Users/james/Desktop/FYP/3D Pool/build/3D Pool.build/Debug-iphonesimulator/LogicTests.build/Script-1A6BA6AE10F28F40008AC2A8.sh\""
      /Developer/Tools/RunPlatformUnitTests.include:412: note: Started tests for architectures 'i386'
      /Developer/Tools/RunPlatformUnitTests.include:419: note: Running tests for architecture 'i386' (GC OFF)
      objc[12589]: GC: forcing GC OFF because OBJC_DISABLE_GC is set
      Test Suite '/Users/james/Desktop/FYP/3D Pool/build/Debug-iphonesimulator/LogicTests.octest(Tests)' started at 2010-01-04 21:05:06 +0000
      Test Suite 'LogicTests' started at 2010-01-04 21:05:06 +0000
      Test Case '-[LogicTests testFail]' started.
      /Users/james/Desktop/FYP/3D Pool/LogicTests.m:17: error: -[LogicTests testFail] : Must fail to succeed.
      Test Case '-[LogicTests testFail]' failed (0.000 seconds).
      Test Suite 'LogicTests' finished at 2010-01-04 21:05:06 +0000.
      Executed 1 test, with 1 failure (0 unexpected) in 0.000 (0.000) seconds
      Test Suite '/Users/james/Desktop/FYP/3D Pool/build/Debug-iphonesimulator/LogicTests.octest(Tests)' finished at 2010-01-04 21:05:06 +0000.
      Executed 1 test, with 1 failure (0 unexpected) in 0.000 (0.002) seconds
      /Developer/Tools/RunPlatformUnitTests.include:448: error: Failed tests for architecture 'i386' (GC OFF)
      /Developer/Tools/RunPlatformUnitTests.include:462: note: Completed tests for architectures 'i386'
      Command /bin/sh failed with exit code 1

    Now this is very odd, as it is running the tests (and succeeding, as you can see my STFail firing): if I add a different test which passes, I get no errors. So the tests are working fine - but why am I getting this extra build failure? It may also be of note that when downloading solutions/templates which should work out of the box, I get the same error. I'm guessing I've set something up wrong here, but I've just followed a tutorial 100% correctly!! If anyone could help I'd be very grateful! Thanks

    EDIT: According to this blog, this post and a few other websites, I'm not the only one getting this problem. It has been like this since the release of Xcode 3.2, assuming the Apple dev center documents and tutorials etc. are pre-3.2 as well. Some say it's a known issue, whereas others seem to think it was intentional. I, for one, would like both the extended console output and the in-code messages, and I certainly do not like the "Command /bin/sh..." error; I really think they would have documented such a change. Hopefully it will be fixed soon anyway.

    UPDATE: Here's confirmation that it's something that changed with the release of Xcode 3.2.1. The first image below is from my test build using 3.2.1; the second is from an older version (3.1.4). (The project for both was unchanged.)
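
    One workaround people used at the time - shown only as a sketch, and with the obvious caveat that it hides real failures from the build status - was to stop the test target's "Run Script" phase from propagating the failing exit code, e.g. by ending that phase's script with a swallow:

      # Last line of the test bundle's 'Run Script' build phase (sketch):
      # report test results in the build transcript but do not fail the build.
      "${SYSTEM_DEVELOPER_DIR}/Tools/RunUnitTests" || true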

    Read the article

< Previous Page | 323 324 325 326 327 328 329 330 331 332 333 334  | Next Page >