Search Results


  • List repositories from multiple projects in Trac using mod_python

    - by Steffen Eriksen
    Currently working on a customized webpage that shows the available projects I have in Trac (1.0.1). I am using mod_python to connect to the Trac interface. I found a standard page for this, but it didn't show a listing of repositories. The page exposed some variables to link to the different projects, but I can't find variables for the different repositories inside the projects. I set up the webpage from reading this: http://trac.edgewall.org/wiki/TracInterfaceCustomization (under Site Appearance).

    Short summary: editing ../conf.d/trac.conf:

        PythonOption TracEnvParentDir /parent/dir/of/projects
        PythonOption TracEnvIndexTemplate /path/to/template

    And making a template file I can edit at /path/to/template:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:py="http://genshi.edgewall.org/"
              xmlns:xi="http://www.w3.org/2001/XInclude">
          <head>
            <title>Available Projects</title>
          </head>
          <body>
            <h1>Available Projects</h1>
            <ul>
              <dl>
                <li py:for="project in projects" py:choose="">
                  <a py:when="project.href" href="$project.href" title="$project.description">$project.name</a>
                  ## <dd> WANT TO ADD CODE HERE! </dd>
                  <py:otherwise>
                    <small>$project.name: <em>Error</em> <br /> ($project.description)</small>
                  </py:otherwise>
                </li>
              </dl>
            </ul>
          </body>
        </html>

    So... The code I want to add is something like:

        <dd py:for="repos in project.repository" py:choose="">
          <a py:when="repos.href" href="$repos.href">$repos.name</a>
        </dd>

    I can't figure out where to add the variables, or whether variables I could use already exist. After searching through the files it seemed like main.py (/usr/local/Trac-1.0.1/trac/web/main.py) had something to do with the variables, but at first look it didn't seem easy to just add more. Is there a simple way to find the rest of the variables? And how hard is it to add more? Or would it be easier to do this another way? All I need is to link to the repositories dynamically.
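
    (A possible direction, not a confirmed answer: the sketch below assumes Trac 1.0's RepositoryManager API and a TracEnvParentDir layout; the helper name and the idea of injecting its result into the template data are illustrative, not part of the stock index page.)

        # Hypothetical helper: enumerate the repositories of every project
        # under a TracEnvParentDir, using Trac's own API.
        import os
        from trac.env import open_environment
        from trac.versioncontrol.api import RepositoryManager

        def repositories_by_project(parent_dir):
            repos_map = {}
            for name in os.listdir(parent_dir):
                env_path = os.path.join(parent_dir, name)
                if not os.path.isdir(env_path):
                    continue
                env = open_environment(env_path, use_cache=True)
                # get_all_repositories() returns {reponame: info-dict};
                # each info-dict carries keys such as 'dir' and 'type'.
                repos_map[name] = RepositoryManager(env).get_all_repositories()
            return repos_map

    The result would still have to be merged into the data dict that main.py passes to the index template before a $project.repository-style loop could work in Genshi.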


  • Sent items listed by sender's name (my name!) not recipient in Outlook for Mac 2011

    - by user568458
    Outlook 2011 on a Mac (OS X 10.7 Lion), on a (mostly PC) office network using a Microsoft Exchange server. In Outlook 2011, in the network "Sent Items" folder, all emails are listed showing the sender's name (my name), not the recipients' names. That means every single email is headed with my own name, like this:

        My Name 08/09/2012 Some subject line [flag]
        My Name 08/09/2012 Re: Another subject line [flag]
        My Name 07/09/2012 Re: different subject line [flag]
        My Name 07/09/2012 different subject line [flag]

    ...and so on. There's no clue at all as to who these emails were to until I open each and every one. I guess it's reassuring to be told that every email I sent was sent by me, and if I were ever to forget my own name while browsing my sent emails folder it'd be super useful - but it's not very helpful for navigating sent emails. Is this normal for Office for Mac 2011? How can I fix this, so that the list shows the recipients' names instead of endlessly reminding me of my own name?

    Things I've found while researching this:

        - This can also happen in Outlook for Windows. On Windows, it's easily fixed by resetting the field.
        - That method doesn't work on Mac because the fields can't be selected separately.
        - It seems this can also happen in Apple Mail too, and a lot of people seem stumped by it.
        - I can't find anything on this specific to Outlook 2011 or Outlook for Mac in general.

    A simple guide on how to fix this would be the best answer, but I'd also welcome any knowledgeable thoughts from Microsoft Exchange Server people on whether this sounds like an Outlook settings issue which I can fix on my machine, or some issue relating to how the Mac gets data from the server and network. The fact that Apple Mail users have encountered the same issue with no apparent fix makes me think the problem might be in the network rather than the mail client - but that's way beyond my limited knowledge of these things. I don't know whether the local ("On my computer") Sent Items folder has the same problem, as it's configured so that no emails except drafts are ever stored in these local folders. Drafts saved locally are listed by recipient as expected.


  • DVD ROM is not working

    - by Cyril N.
    (Note: I don't know which StackExchange site this question belongs on; I'll thank the moderator who moves it to a more appropriate place, if there is an S.E. available for my question.)

    I have a DVD-RW drive that is listed correctly in the BIOS, and when no disc is in, it is also present in the "My Computer" of my Fedora 16. But when I put a disc in, the icon disappears from "My Computer" and I cannot do anything with the drive (like erasing a RW disc). I'd like to boot a Fedora 17 Live CD image; I burned it on another computer, but when I try to boot from it, nothing happens and I'm dropped into the GRUB of my hard disk.

    The command cdrecord -scanbus shows this:

        wodim: Warning: controller returns wrong size for CD capabilities page.
        wodim: Cannot get CD capabilities data.
        6,1,0 601) 'HD-DT%ST' 'DVD%RAM G@22NP20' '1&04' Removable CD-ROM

    And when I try to mount the disc manually, I get this error:

        mount: block device /dev/sr0 is write-protected, mounting read-only
        mount: /dev/sr0: can't read superblock

    Here's a paste of dmesg | grep sr0:

        [ 5.161265] sr0: scsi-1 drive
        [ 5.161621] sr 6:0:1:0: Attached scsi CD-ROM sr0
        [ 834.545978] sr0: Hmm, seems the drive doesn't support multisession CD's
        [ 841.731194] sr0: CDROM (ioctl) error, command: Get configuration 46 00 00 00 00 00 00 00 20 00
        [ 842.021640] sr 6:0:1:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        [ 842.021652] sr 6:0:1:0: [sr0] Sense Key : Aborted Command [current]
        [ 842.021662] sr 6:0:1:0: [sr0] Add. Sense: No additional sense information
        [ 842.021672] sr 6:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00
        [ 842.021688] end_request: I/O error, dev sr0, sector 0
        [ 842.021697] Buffer I/O error on device sr0, logical block 0
        [ 842.023715] sr 6:0:1:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        [ 843.048203] sr 6:0:1:0: [sr0] Sense Key : Aborted Command [current]
        [ 843.048211] sr 6:0:1:0: [sr0] Add. Sense: No additional sense information
        [ 843.048219] sr 6:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 00 00 00 01 00
        [ 843.048234] end_request: I/O error, dev sr0, sector 0
        [ 843.048274] EXT4-fs (sr0): unable to read superblock
        [ 843.063155] sr0: CDROM (ioctl) error, command: Get configuration 46 00 00 00 00 00 00 00 20 00
        [ 843.075904] sr0: CDROM (ioctl) error, command: Get configuration 46 00 00 00 00 00 00 00 20 00
        [ 843.220512] sr 6:0:1:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        [ 843.220522] sr 6:0:1:0: [sr0] Sense Key : Aborted Command [current]
        [ 843.220530] sr 6:0:1:0: [sr0] Add. Sense: No additional sense information
        [ 843.220538] sr 6:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 00 00 00 01 00
        [ 843.220553] end_request: I/O error, dev sr0, sector 0
        [ 843.220609] FAT-fs (sr0): unable to read boot sector

    The lines from "Sense Key" (line 6) to "DRIVER_SENSE" (line 11) repeat a lot. I then swapped the DVD drive for a spare one I had, and the disc still didn't boot. I then changed the IDE cable, but still no success. What can I do to make it work? Thanks for your help.
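
    (One low-level check that sidesteps mounting entirely - a minimal sketch, assuming Python is available on the Fedora box and the device node is /dev/sr0 as in the dmesg output above. If this also fails with an I/O error, the fault is below the filesystem layer: drive, disc, cable or controller.)

        # Raw-read probe of the optical drive: try to pull one 2048-byte
        # CD-ROM sector straight from the block device, no filesystem involved.
        import os

        fd = os.open("/dev/sr0", os.O_RDONLY)
        try:
            data = os.read(fd, 2048)  # first sector of the disc
            print("read %d bytes OK" % len(data))
        finally:
            os.close(fd)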


  • Legacy VB6 application under Win7 SQL error

    - by Shial
    We have a rather unfortunate legacy application at work. Written originally in VB6, it predates anybody in our IT department by at least 5 years. We have a contracted developer for ongoing maintenance, and where he can he rewrites sections into .NET code (not sure about his techniques here; this is a side job to his regular work as an IBM engineer). The application works fine (such as it is) under Windows XP. We have only a couple of Windows 7 machines, mainly for testing, and this application seems to run into a wall on them: things like the background not loading, and SQL errors. This happens even when running as administrator.

    Running an SQL trace from the ODBC control panel shows several interesting things. The application makes a connection to the database successfully at startup, where it runs a query to determine whether it is running the correct version. This query works fine:

        558-1af0 ENTER SQLExecDirectW
            HSTMT 0x020D7548
            WCHAR * 0x04C8F0F0 [ 115] "SELECT count(*) c FROM tblSoftwareVersion WHERE fldSoftwareVersion = '123456' AND fldSoftwareName = 'Application.VB'"
            SDWORD 115

        BMS 558-1af0 EXIT SQLExecDirectW with return code 1 (SQL_SUCCESS_WITH_INFO)
            HSTMT 0x020D7548
            WCHAR * 0x04C8F0F0 [ 115] "SELECT count(*) c FROM tblSoftwareVersion WHERE fldSoftwareVersion = '123456' AND fldSoftwareName = 'Application.VB'"
            SDWORD 115

    It then seems to drop its connection and can't find the ODBC connection, despite the fact that it is connecting to the same DB. From the trace it looks like it configures the connection, then starts firing off SQLFreeStmt to unbind and close out; then, when the application tries to do its thing, there is no connection:

        558-1af0 ENTER SQLFreeStmt
            HSTMT 0x020D7548
            UWORD 2 <SQL_UNBIND>

        BMS 558-1af0 EXIT SQLFreeStmt with return code 0 (SQL_SUCCESS)
            HSTMT 0x020D7548
            UWORD 2 <SQL_UNBIND>

    Then this happens when I try to do something that pulls data:

        558-1af0 ENTER SQLDriverConnectW
            HDBC 0x020DDA00
            HWND 0x00000000
            WCHAR * 0x73EF8634 [ -3] "******\ 0"
            SWORD -3
            WCHAR * 0x73EF8634
            SWORD -3
            SWORD * 0x00000000
            UWORD 0 <SQL_DRIVER_NOPROMPT>

        BMS 558-1af0 EXIT SQLDriverConnectW with return code -1 (SQL_ERROR)
            HDBC 0x020DDA00
            HWND 0x00000000
            WCHAR * 0x73EF8634 [ -3] "******\ 0"
            SWORD -3
            WCHAR * 0x73EF8634
            SWORD -3
            SWORD * 0x00000000
            UWORD 0 <SQL_DRIVER_NOPROMPT>

        DIAG [IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0)

    Nearly everything my searching turns up on this error is a programming issue where the connection string has a problem. The only thing that is different in this particular scenario, though, is Windows 7; I know the connection string is fine, since it works on the XP machines. The VB components are supposed to still be functional under Win7. My computer runs 32-bit Win7 and my VP's runs 64-bit Win7, and both have the same problem, so that can be ruled out. I have already tried reinstalling the SQL Native Client, the VB runtime, and the application in question. Hopefully I can find a solution and not have to resort to using the XP VM.
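
    (Not a fix from the original thread, just a diagnostic idea: since IM002 means the driver manager cannot see the DSN at connect time, a quick probe of what each ODBC registry half actually contains can help. A minimal sketch, assuming the pyodbc module is installed; "MyAppDSN" is a placeholder for the application's real DSN name.)

        # List the DSNs visible to this process and try the suspect one.
        # Run under both a 32-bit and a 64-bit interpreter to compare the
        # 32-bit (SysWOW64) and 64-bit ODBC views of the machine.
        import pyodbc

        print(sorted(pyodbc.dataSources().items()))  # every DSN this bitness can see
        try:
            cn = pyodbc.connect("DSN=MyAppDSN", timeout=5)
            print("connected to:", cn.getinfo(pyodbc.SQL_SERVER_NAME))
        except pyodbc.Error as exc:
            print("connect failed:", exc)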


  • Changing default datadirectory to one on a External NAS or add external share

    - by Hagbart Celine
    So I have been searching the web for days, looking for a solution to my problem. Now my only hope is you guys.

    I have installed ownCloud successfully on my Windows Server 2008 R2. It all runs smoothly and I can connect without problems, so the first checks are OK. Now I want to change the default data directory from my server to a shared folder on my NAS (Synology DS1813+, DSM 5.0-4493 Update 3).

    Tried the following: changing the directory in config.php. I changed the path in the config file from "C:\inetpub\wwwroot\myfolder\data" to "\\NASIP\cloud". By doing this the ownCloud server only shows:

        Daten-Verzeichnis (\\192.168.2.4\Cloud\data) ist ungültig
        Bitte stelle sicher, dass das Daten-Verzeichnis eine Datei namens ".ocdata" im Wurzelverzeichnis enthält.

    (That is: the data directory is invalid; please make sure the data directory contains a file named ".ocdata" in its root.) I also tried copying the files that were created in the local data storage to the share on the NAS. No luck. Then I tried mapping a network drive and using that in config.php, but still no luck: I get the same message about the missing .ocdata file.

    Next I tried the "External Storage" app that comes with ownCloud. I thought that at least I could add the share as external storage, but this also does not work: I tried a UNC path and the mapped drive name (Z:), but nothing helped.

    So now I'm turning to you. Does anyone have experience with this kind of setup? Or can you even tell me how to make it work? (Default or external storage, I don't care anymore.) Setup: NAS (Synology DS1813+, DSM 5.0-4493 Update 3), ownCloud 7, Windows Server 2008 R2, IIS7.

    I got an answer on another forum; the second option is how it should be done:

        1. Put OC in maintenance mode
        2. Mount (mapping, in the Windows world) your NAS directly to your OS
        3. Copy the local data directory to the NAS mount
        4. Ensure the permissions are set up to give the web user access to the NAS mount
        5. Update OC config.php with the new data path
        6. Disable OC maintenance mode

    And this seems like the right way. "Ensure the permissions are set up to give the web user access to the NAS mount" - I guess this is where I am not sure. What user is it exactly on my server that is making the requests to the NAS? If the user is, for example, "IUSR", can I just create an account on my Synology NAS and give it full access to my share? (But what is IUSR's password?) I have full root SSH access to my NAS, so if you can tell me what chmod or chown I need to use on my cloud folder...


  • PDF Corruption When Sending with Microsoft Products

    - by Winner
    I have the same PDF corruption problem in two different offices that I am the tech support for.

    Office 1: Started in the middle of December. A PDF is received from outside the office and is viewable with no problems (I have no control over how it is created). If it is forwarded to anyone else, the PDF is corrupted. I have forwarded it to multiple people in the office, and tried viewing with Reader 8, Reader 9, Sumatra and Foxit. I have tried forwarding to Gmail, and its viewer says it is corrupted. If I save the PDF and create a new email, it will be corrupted when sent using Outlook 2003, Outlook 2007, Microsoft Live Mail or Outlook Express. If I create the email using Thunderbird 3, Gmail or IClient (the web client for IPSwitch IMail), it will not be corrupted. I have confirmed the same results when using our IMail SMTP server and also when using Gmail as the SMTP server. To be clear: if the message is created in Thunderbird, Gmail or IClient and received in any of the MS products, it is viewable. This office receives PDFs daily from multiple sources, and only a small subset have this problem. So far the problem PDFs come from two different companies they deal with, but not all of the PDFs from them are bad.

    Office 2: PDFs are created by a management system; I'm not sure what engine is used to create them. Same exact issues.

    At both offices, I noticed that the file size is wrong. For one small PDF, the proper file size is 12 KB when it's viewable; when it shows up corrupted, it is only 8 KB. We handle the email for both offices. Both are POP servers, not Exchange. IMail was updated after these issues started. I have tried different SMTP servers, and it still seems to happen only when using Microsoft products to send. Anyone else having problems with PDFs getting corrupted? Any ideas how to find a resolution?


  • Pinning based on origin of a reprepro repository

    - by Shtééf
    I'm on Ubuntu 10.04, trying to set up a repository using reprepro. I'd also like to pin everything in that repository to be preferred over anything else, even if the packages are older versions (it will only contain a select set of packages). However, I cannot seem to get the pinning to work, and I believe it has something to do with the repository side of things rather than the apt configuration on the client.

    I've taken the following steps to set up my repository:

        1. Installed a web server (my personal choice here is Cherokee),
        2. Created the directory /var/www/apt/,
        3. Created the file conf/distributions, like so:

            Origin: Shteef
            Label: Shteef
            Suite: lucid
            Version: 10.04
            Codename: lucid
            Architectures: i386 amd64 source
            Components: main
            Description: My personal repository

        4. Ran reprepro export from the /var/www/apt/ directory.

    Now on any other machine I can add this (empty) repository over HTTP to my /etc/apt/sources.list and run apt-get update without any errors:

        Ign http://archive.lan lucid Release.gpg
        Ign http://archive.lan/apt/ lucid/main Translation-en_US
        Get:1 http://archive.lan lucid Release [2,244B]
        Ign http://archive.lan lucid/main Packages
        Ign http://archive.lan lucid/main Sources
        Ign http://archive.lan lucid/main Packages
        Ign http://archive.lan lucid/main Sources
        Hit http://archive.lan lucid/main Packages
        Hit http://archive.lan lucid/main Sources

    In my case, I now want to use an old version of Asterisk, namely Asterisk 1.4. I rebuilt the asterisk-1:1.4.21.2~dfsg-3ubuntu2.1 package from Ubuntu 9.04 (with some small changes to fix dependencies) and uploaded it to my repository. At this point I can see the new package in aptitude, but it naturally prefers the newer Asterisk 1.6 currently in the Ubuntu 10.04 repositories. To try and fix that, I created /etc/apt/preferences.d/personal like so:

        Package: *
        Pin: release o=Shteef
        Pin-Priority: 1000

    But when I try to install the asterisk package, it still prefers the 1.6 version over my own 1.4 version. This is what apt-cache policy asterisk shows:

        asterisk:
          Installed: (none)
          Candidate: 1:1.6.2.5-0ubuntu1
          Version table:
             1:1.6.2.5-0ubuntu1 0
                500 http://nl.archive.ubuntu.com/ubuntu/ lucid/universe Packages
             1:1.4.21.2~dfsg-3ubuntu2.1shteef1 0
                500 http://archive.lan/apt/ lucid/main Packages

    Clearly, it is not picking up my pin. In fact, when I run just apt-cache policy, I get the following:

        Package files:
         100 /var/lib/dpkg/status
             release a=now
         500 http://archive.lan/apt/ lucid/main Packages
             origin archive.lan
         500 http://security.ubuntu.com/ubuntu/ lucid-security/multiverse Packages
             release v=10.04,o=Ubuntu,a=lucid-security,n=lucid,l=Ubuntu,c=multiverse
             origin security.ubuntu.com
        [...]

    Unlike Ubuntu's repository, apt doesn't seem to pick up a release line at all for my own repository. I suspect this is why I can't pin on release o=Shteef in my preferences file. But I can't find any noticeable difference between my repository's Release files and Ubuntu's that would cause this. Is there a step I've missed or a mistake I've made in setting up my repository?
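
    (A debugging aid rather than a fix - a minimal sketch, assuming the python-apt bindings are installed on the client. It prints the pin priority apt actually assigns each candidate, so a working "Pin: release o=Shteef" stanza should show 1000 next to the archive.lan version.)

        # Show every available version of "asterisk" together with the
        # priority the pinning engine gives it and the origins it came from.
        import apt

        cache = apt.Cache()
        pkg = cache["asterisk"]
        for version in pkg.versions:
            origins = [o.origin for o in version.origins]
            print(version.version, version.policy_priority, origins)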


  • Secure openVPN using IPTABLES

    - by bob franklin smith harriet
    Hey, I set up an OpenVPN server and it works OK. The next step is to secure it. I opted to use iptables to only allow certain connections through, but so far it is not working. I want to enable access to the network behind my OpenVPN server and allow other services (web access). When iptables is disabled, or set to allow all, this works fine; with my rules below it does not. Also note that I have already configured OpenVPN itself to do what I want, and it works fine; it only fails when iptables is started. Any help telling me why this isn't working will be appreciated.

    These are the lines that I added in accordance with OpenVPN's recommendations. Unfortunately, testing shows that they are required. They seem incredibly insecure, though; is there any way to get around using them?

        # Allow TUN interface connections to OpenVPN server
        -A INPUT -i tun+ -j ACCEPT
        # Allow TUN interface connections to be forwarded through other interfaces
        -A FORWARD -i tun+ -j ACCEPT
        # Allow TAP interface connections to OpenVPN server
        -A INPUT -i tap+ -j ACCEPT
        # Allow TAP interface connections to be forwarded through other interfaces
        -A FORWARD -i tap+ -j ACCEPT

    These are the new chains and commands I added to restrict access as much as possible. Unfortunately, with these enabled, all that happens is that the OpenVPN connection establishes fine and then there is no access to the rest of the network behind the OpenVPN server. Note that I am configuring the main iptables file; I am paranoid, so all ports and IP addresses are altered, and the -N chain declarations appear before this, so ignore that they don't appear. I added some explanations of what I intended these rules to do, so you don't waste time figuring out where I went wrong:

        # accept the VPN over port 1192
        -A INPUT -p udp -m udp --dport 1192 -j ACCEPT
        -A INPUT -j INPUT-FIREWALL
        -A OUTPUT -j ACCEPT
        # packets that are to be forwarded from the 10.10.1.0 network (all
        # OpenVPN clients) to the internal network (192.168.5.0) jump to the
        # [sic] FOWARD-FIREWALL chain
        -A FORWARD -s 10.10.1.0/24 -d 192.168.5.0/24 -j FOWARD-FIREWALL
        # same as above, except for a different internal network
        -A FORWARD -s 10.10.1.0/24 -d 10.100.5.0/24 -j FOWARD-FIREWALL
        # reject any not from either of those two ranges
        -A FORWARD -j REJECT
        -A INPUT-FIREWALL -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT-FIREWALL -p tcp -m tcp --dport 22 -j ACCEPT
        -A INPUT-FIREWALL -j REJECT
        -A FOWARD-FIREWALL -m state --state RELATED,ESTABLISHED -j ACCEPT
        # 80, 443 and 53 are accepted
        -A FOWARD-FIREWALL -m tcp -p tcp --dport 80 -j ACCEPT
        -A FOWARD-FIREWALL -m tcp -p tcp --dport 443 -j ACCEPT
        # 192.168.5.150 = OpenVPN server
        -A FOWARD-FIREWALL -m tcp -p tcp -d 192.168.5.150 --dport 53 -j ACCEPT
        -A FOWARD-FIREWALL -m udp -p udp -d 192.168.5.150 --dport 53 -j ACCEPT
        -A FOWARD-FIREWALL -j REJECT
        COMMIT

    Now I wait :D


  • usb_modeswitch not switching

    - by deniz
    After I upgraded from kernel 2.6.18 to 3.5.3, usb_modeswitch stopped working for me. Although lsusb shows my USB modem, usb_modeswitch does not switch it: instead of switching my modem, it says "No devices in default mode found. Nothing to do. Bye." Can you offer a solution? My system information and the outputs are below.

        Kernel: Linux 3.5.3
        usb_modeswitch: 1.2.3-1
        usb_modeswitch-data: 20120120-1
        usbutils: 006-1
        libusb: 1.0.8-0.1

    root@localhost$ lsusb

        Bus 002 Device 029: ID 12d1:1446 Huawei Technologies Co., Ltd.

    root@localhost$ dmesg

        [70112.477080] usb 2-1.4: new high-speed USB device number 30 using ehci_hcd
        [70112.567757] scsi49 : usb-storage 2-1.4:1.0
        [70112.567842] scsi50 : usb-storage 2-1.4:1.1
        [70113.571433] scsi 49:0:0:0: CD-ROM HUAWEI Mass Storage 2.31 PQ: 0 ANSI: 2
        [70113.572304] scsi 50:0:0:0: Direct-Access HUAWEI TF CARD Storage PQ: 0 ANSI: 2
        [70113.574169] sr0: scsi-1 drive
        [70113.574223] sr 49:0:0:0: Attached scsi CD-ROM sr0
        [70113.574250] sr 49:0:0:0: Attached scsi generic sg1 type 5
        [70113.574350] sd 50:0:0:0: Attached scsi generic sg2 type 0
        [70113.577173] sd 50:0:0:0: [sdb] Attached SCSI removable disk

    root@localhost$ usb-devices

        T: Bus=02 Lev=02 Prnt=02 Port=03 Cnt=01 Dev#= 30 Spd=480 MxCh= 0
        D: Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
        P: Vendor=12d1 ProdID=1446 Rev=00.00
        S: Manufacturer=Huawei Technologies
        S: Product=HUAWEI Mobile
        C: #Ifs= 2 Cfg#= 1 Atr=c0 MxPwr=500mA
        I: If#= 0 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage
        I: If#= 1 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage

    root@localhost$ cat /etc/usb_modeswitch.d/12d1\:1446

        # Huawei, newer modems
        TargetVendor= 0x12d1
        TargetProductList="1001,1406,140b,140c,1412,141b,1433,1436,14ac,1506"
        MessageContent="55534243123456780000000000000011062000000100000000000000000000"

    root@localhost$ usb_modeswitch -c /etc/usb_modeswitch.d/12d1:1446 -v 12d1 -p 1446 -W

        * usb_modeswitch: handle USB devices with multiple modes
        * Version 1.2.3 (C) Josua Dietze 2012
        * Based on libusb0 (0.1.12 and above)
        ! PLEASE REPORT NEW CONFIGURATIONS !

        DefaultVendor=  0x12d1
        DefaultProduct= 0x1446
        TargetVendor=   0x12d1
        TargetProduct=  not set
        TargetClass=    not set
        TargetProductList="1001,1406,140b,140c,1412,141b,1433,1436,14ac,1506"
        DetachStorageOnly=0
        HuaweiMode=0
        SierraMode=0
        SonyMode=0
        QisdaMode=0
        GCTMode=0
        KobilMode=0
        SequansMode=0
        MobileActionMode=0
        CiscoMode=0
        MessageEndpoint=  not set
        MessageContent="55534243123456780000000000000011062000000100000000000000000000"
        NeedResponse=0
        ResponseEndpoint= not set
        InquireDevice enabled (default)
        Success check disabled
        System integration mode disabled
        usb_set_debug: Setting debugging level to 15 (on)
        usb_os_find_busses: Skipping non bus directory devices
        usb_os_find_busses: Skipping non bus directory drivers
        usb_os_find_busses: Skipping non bus directory uevent
        usb_os_find_busses: Skipping non bus directory drivers_probe
        usb_os_find_busses: Skipping non bus directory drivers_autoprobe
        Looking for target devices ...
        No devices in target mode or class found
        Looking for default devices ...
        No devices in default mode found. Nothing to do. Bye.

    Thanks in advance.


  • How can I troubleshoot Virtualbox port forwarding from Windows guest to OSX host not working?

    - by joe larson
    There are a plethora of questions about VirtualBox port forwarding problems, but none with my specific details. I have a Windows install living in VirtualBox, hosted within OS X. I've got several web servers running on localhost on different ports within the Windows install. I cannot for the life of me get port forwarding to work so I can access those web servers from OS X. My settings use a NAT adapter, and in my VirtualBox configuration file the relevant portion looks like this:

        <NAT>
          <DNS pass-domain="true" use-proxy="false" use-host-resolver="false"/>
          <Alias logging="false" proxy-only="false" use-same-ports="false"/>
          <Forwarding name="RLPWeb" proto="1" hostport="7084" guestip="127.0.0.1" guestport="7084"/>
          <Forwarding name="UtilWeb" proto="1" hostport="4040" guestip="127.0.0.1" guestport="4040"/>
          <Forwarding name="WCARLP" proto="1" hostport="8084" guestip="127.0.0.1" guestport="8084"/>
          <Forwarding name="WCAUtil" proto="1" hostport="4848" guestip="127.0.0.1" guestport="4848"/>
        </NAT>

    I've turned off the Windows firewall to ensure it is not interfering, and I am not running a firewall on OS X. Anyway, when I attempt to go to, for example, http://127.0.0.1:4040/ in any of my OS X browsers, it will eventually time out. The log file for this VM shows that it is correctly reading the settings, implying it's doing the right thing here:

        00:00:08.286 NAT: set redirect TCP host port 4848 => guest port 4848 @ 127.0.0.1
        00:00:08.286 NAT: set redirect TCP host port 8084 => guest port 8084 @ 127.0.0.1
        00:00:08.286 NAT: set redirect TCP host port 4040 => guest port 4040 @ 127.0.0.1
        00:00:08.286 NAT: set redirect TCP host port 7084 => guest port 7084 @ 127.0.0.1
        00:00:08.290 Changing the VM state from 'LOADING' to 'SUSPENDED'.
        00:00:08.290 Changing the VM state from 'SUSPENDED' to 'RESUMING'.
        00:00:08.290 Changing the VM state from 'RESUMING' to 'RUNNING'.
        00:00:08.337 Display::handleDisplayResize(): uScreenId = 0, pvVRAM=000000012017d000 w=1834 h=929 bpp=32 cbLine=0x1CA8, flags=0x1
        00:00:09.139 AIOMgr: Host limits number of active IO requests to 16. Expect a performance impact.
        00:00:13.454 NAT: DHCP offered IP address 10.0.2.15

    I've tried setting the host IP to 127.0.0.1, and I've tried setting the guest IP blank and also to 10.0.2.15. None of these seem to help. What else can I look at to troubleshoot this issue?

    Details of the setup: OS X 10.6.8, Windows 7 Professional 64-bit, VirtualBox 4.1.2.
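
    (For what it's worth, a quick way to tell a refused forward from a silently dropped one, independent of any browser - a minimal sketch assuming Python on the OS X host; the port list mirrors the Forwarding rules above.)

        # Probe each forwarded host port. A fast "Connection refused" means
        # nothing is answering the forward; a hang until the timeout matches
        # the browser symptom of packets disappearing inside the NAT.
        import socket

        for port in (7084, 4040, 8084, 4848):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(3)
            try:
                s.connect(("127.0.0.1", port))
                print(port, "open")
            except OSError as exc:
                print(port, "failed:", exc)
            finally:
                s.close()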


  • RSH between servers not working

    - by churnd
    I have two servers: one CentOS 5.8 and one Solaris 10. Both are joined to my workplace AD domain via PBIS-Open. A user will log into the Linux server and run an application which issues commands over RSH to the Solaris server. Some commands are also run on the Linux server, so both are needed. Due to the application these servers are being used for (proprietary GE software), the software on the Linux server needs to be able to issue rsh commands to the Solaris server on behalf of the user (the user just runs a script and the rest is automatic).

    However, rsh is not working for the domain users. It does work for a local user, so I believe I have the necessary trust settings between the two servers correct. Meanwhile, I can rlogin as a domain user from the Linux server to the Solaris server. SSH works too (how I wish I could use it). Some relevant info:

    Via rlogin:

        [user@linux~]$ rlogin solaris
        connect to address 192.168.1.2 port 543: Connection refused
        Trying krb4 rlogin...
        connect to address 192.168.1.2 port 543: Connection refused
        trying normal rlogin (/usr/bin/rlogin)
        Sun Microsystems Inc. SunOS 5.10 Generic January 2005
        solaris%

    Via rsh:

        [user@linux ~]$ rsh solaris ls
        connect to address 192.168.1.2 port 544: Connection refused
        Trying krb4 rsh...
        connect to address 192.168.1.2 port 544: Connection refused
        trying normal rsh (/usr/bin/rsh)
        permission denied.
        [user@linux ~]$

    The relevant snippet from /etc/pam.conf on the Solaris box:

        #
        # rlogin service (explicit because of pam_rhost_auth)
        #
        rlogin  auth sufficient  pam_rhosts_auth.so.1
        rlogin  auth requisite   pam_lsass.so set_default_repository
        rlogin  auth requisite   pam_lsass.so smartcard_prompt try_first_pass
        rlogin  auth requisite   pam_authtok_get.so.1 try_first_pass
        rlogin  auth sufficient  pam_lsass.so try_first_pass
        rlogin  auth required    pam_dhkeys.so.1
        rlogin  auth required    pam_unix_cred.so.1
        rlogin  auth required    pam_unix_auth.so.1
        #
        # Kerberized rlogin service
        #
        krlogin auth required    pam_unix_cred.so.1
        krlogin auth required    pam_krb5.so.1
        #
        # rsh service (explicit because of pam_rhost_auth,
        # and pam_unix_auth for meaningful pam_setcred)
        #
        rsh     auth sufficient  pam_rhosts_auth.so.1
        rsh     auth required    pam_unix_cred.so.1
        #
        # Kerberized rsh service
        #
        krsh    auth required    pam_unix_cred.so.1
        krsh    auth required    pam_krb5.so.1
        #

    I have not really seen anything useful in either system log that seems directly related to the failed login attempt. I've tail -f'd /var/adm/messages on Solaris and /var/log/messages on Linux during the failed attempts, and nothing shows up. Maybe I need to be doing something else?


  • Remotely, from Chrome or IE, a page loads in ~60 seconds; from Firefox, or from IE on the local machine, instantly

    - by Janis Veinbergs
    The problem: if I access SharePoint from Windows 7 with IE8 or Chrome 5, I must wait for about a minute to get a response. If I use another Windows 7 machine with IE8, it's just the same: wait a MINUTE. If I use Firefox 3.6 on the Windows 7 machine, the page opens up instantly - but switch Firefox to the IE rendering engine and you will have to wait just as with IE. I tried IE8 on XP SP3: the page opens up instantly. I tried IE8 on Windows Server 2003 SP2 (the machine SharePoint is hosted on): the page opens up instantly.

    IIS6 logs: I made requests almost simultaneously from all 3 browsers, and this is what shows up in the IIS logs (first 2 entries for each browser).

    Chrome - IIS saw the first Chrome request when I hit Enter in the browser, but I had to wait long for things to move on:

        2010-06-01 05:46:04 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/533.4+(KHTML,+like+Gecko)+Chrome/5.0.375.55+Safari/533.4 401 2 2148074254
        2010-06-01 05:47:07 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/533.4+(KHTML,+like+Gecko)+Chrome/5.0.375.55+Safari/533.4 401 1 0
        ... etc ...

    Firefox - all instantly:

        2010-06-01 05:46:06 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+lv;+rv:1.9.2.3)+Gecko/20100401+Firefox/3.6.3 401 2 2148074254
        2010-06-01 05:46:06 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+lv;+rv:1.9.2.3)+Gecko/20100401+Firefox/3.6.3 401 1 0
        ... etc ...

    IE - I hit Enter at 05:46:06, but these are the first entries in the IIS logs:

        2010-06-01 05:47:08 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+6.1;+Trident/4.0;+SLCC2;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30729;+Media+Center+PC+6.0;+Tablet+PC+2.0;+.NET+CLR+1.1.4322;+.NET4.0C;+.NET4.0E) 401 1 0
        2010-06-01 05:47:08 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+6.1;+Trident/4.0;+SLCC2;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30729;+Media+Center+PC+6.0;+Tablet+PC+2.0;+.NET+CLR+1.1.4322;+.NET4.0C;+.NET4.0E) 401 1 0
        ... etc ...

    There is nothing to see in the event logs.

    The question: a similar question has been asked, but there is no response there; in my case I'm trying to access the page without SSL, and this happens even on GET requests. Where do I look? Where would the problem be? Browser? OS? I don't even know what to think about it.

    Just a note about Chrome's process isolation: I found it sad that while I was waiting that minute with Chrome, I could not use any other tab (I could switch, but I could not, for example, scroll or use any controls).
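
    (To take the browsers out of the equation, one could time the raw request - a minimal sketch, assuming Python 3 on a client machine; the URL is a placeholder for the SharePoint page, and an unauthenticated fetch will simply surface the 401 challenge, which is still enough to see where the minute goes.)

        # Time the same GET repeatedly, independent of any browser engine.
        import time
        import urllib.request

        url = "http://sharepoint.example.lan/sapulces"  # placeholder
        for _ in range(5):
            start = time.time()
            try:
                with urllib.request.urlopen(url, timeout=120) as resp:
                    resp.read()
                print("HTTP %d in %.1f s" % (resp.status, time.time() - start))
            except Exception as exc:
                print("failed after %.1f s: %s" % (time.time() - start, exc))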


  • Problems when trying to connect to a router wirelessly

    - by Ruud Lenders
    The situation: at my girlfriend's parents' place there are six Windows 7 devices that are wired or wirelessly connected to a router: 3 desktops and 3 laptops. There are also several smartphones using the router. The router is secured with WPA2 (AES).

    The problem: we never had any problems with the router for over a year. But recently - about 3 weeks ago - my girlfriend's laptop (HP) and my laptop (ASUS) started to develop problems while trying to connect to the router. The router has stopped showing up in the network list. Sometimes it comes back and shows up, but then it keeps saying something along the lines of "Could not connect", and not long after that it disappears again. The range of the router is not the problem here, because we experience the same thing when we sit next to the router. Sometimes, if we are lucky and have waited a long time (10-15 minutes) without using the laptop for anything, the laptop will eventually successfully connect to the router.

    The attempts: of course, the Windows 7 troubleshooter. We tried troubleshooting the connection problems and the wireless network adapter, but no luck. We also reset the router enough times to know that's not helping either. Here's the full list of things we tried that did not help:

        - Running the Windows 7 troubleshooter
        - Resetting the router (more than once)
        - Setting the router settings to factory defaults
        - Disconnecting all other devices except one laptop
        - Applying a system restore
        - Trying static/dynamic IP/DNS - dynamic is better, right?
        - Enabling/disabling IPv6 - should I keep IPv6 disabled?
        - Running the command: netsh wlan stop hostednetwork
        - Running the command: netsh wlan set hostednetwork mode=disallow
        - Updating/reinstalling the wireless adapter drivers

    The tests: to help find the core of the problem, we tested the following:

        - Plugging an ethernet cable into the router and into our laptops - worked fine
        - Connecting someone else's laptop to the router (wireless) - worked fine
        - Connecting our laptops to someone else's router - worked fine

    The router - this information might be relevant. Router model: Sitecom 300N Wireless Router, hardware version 01. The DHCP server's IPs range from 192.168.0.100 to 192.168.0.200. Router settings:

        - Wireless channel: 12
        - Channel bandwidth: 20/40 MHz
        - Extension channel: 8
        - Preamble type: Long
        - 802.11g protection: Disabled
        - UPnP: Enabled

    The laptops - if you are wondering about our laptops: mine is an ASUS Pro64JQ; my girlfriend's is an HP Pavilion G6. OS: both Windows 7 Professional x64 with Service Pack 1. My wireless adapter: Atheros AR9285, AdHoc 11n enabled.

    The question: has anyone else experienced the same problems as I have? Or does someone know how to solve this? Are there more tricks I can try, or settings I should change?

    Note: our laptops are not slow or old. My laptop is 1.5 years old, and the other laptop is just 5 months old. I know how to keep laptops clean, and I'm pretty sure both laptops are not bloated with useless software.


  • Our VPS is being used as a Warez mule

    - by Mikuso
    The company I work for runs a series of ecommerce stores on a VPS: a WAMP stack with 50 GB of storage. We use an archaic piece of ecommerce software which operates almost entirely client-side. When an order is taken, it is written to disk, and a scheduled task downloads the orders once every 10 minutes.

    A few days ago, we ran out of disk space, which caused orders to fail to be written. I quickly hopped on to delete some old logs from the mail server and freed up a couple of GB pretty quickly, but I wondered how we could fill up 50 GB with nothing much more than logs. Turns out, we didn't. Hidden deep within the C:\System Volume Information directory, we have a stack of pirated videos, which seem to have appeared (looking at the timestamps) over the past three weeks. Porn, American sports, Australian cooking shows. A very odd collection - it doesn't look like an individual's personal tastes, more like the VPS is being used as a mule.

    We have a 5-attempts-and-you're-blocked policy on our FTP server (plus, there is no FTP account with access to that directory), and the Windows user account has had its password changed recently. The main avenues are sealed, and logs can verify that. I thought I'd watch and see if it happened again, and yes, another cooking show appeared this morning.

    I am the only one at my company who knows of this problem, and one of only two people with access to the VPS (the other being my boss, and no - it's not him). So how is this happening? Is there a vulnerability in some of the software on the VPS? Are the VPS owners peddling warez across our rented space? (Can they do this?) I don't want to delete the warez in case that is seen as a hostile action against this outside force and they choose to retaliate. What should I do? How do I troubleshoot this? Has this happened to anyone else before?


  • Sendmail smtp-auth issues

    - by SlackO
    I'm running into a problem with Sendmail while trying to implement SMTP AUTH. I'm running 8.14.5 and have saslauthd running, under FreeBSD 7.0-R. I don't believe I have STARTTLS enabled (but I also compiled a version with it and have been testing that too - same problem); I'm just looking for basic auth, but am wondering if my configuration is not compatible with modern mail clients. I don't think I have any certs set up.

    It seems an older version of Microsoft Outlook Express works fine with SMTP AUTH, but Outlook 2010 won't work, and neither will Eudora (basic settings: no encryption, same uid/pw as the POP3 account name). When trying to send mail, the server reports: "550 571 Relaying Denied. Proper authentication required." Is there some config that I am missing? Why does it work with Outlook Express but not other e-mail clients?

    My site.config.m4 has:

        APPENDDEF(`confENVDEF', `-DSASL=2')
        APPENDDEF(`conf_sendmail_LIBS', `-lsasl2')
        dnl APPENDDEF(`confLIBDIRS', `-L/usr/local/lib/sasl2')
        APPENDDEF(`confLIBDIRS', `-L/usr/local/lib')
        APPENDDEF(`confINCDIRS', `-I/usr/local/include')

    My sendmail.mc has:

        define(`ConfAUTH_OPTIONS', `A')
        TRUST_AUTH_MECH(`LOGIN PLAIN')dnl
        define(`ConfAUTH_MECHANISMS', `LOGIN PLAIN')dnl

    My /usr/local/lib/sasl2/Sendmail.conf has:

        pwcheck_method: saslauthd

    When I restart sendmail, this shows up in the logs:

        Jun 16 12:36:24 x sm-mta[79090]: restarting /usr/sbin/sendmail due to signal
        Jun 16 12:36:24 x sm-mta[81145]: starting daemon (8.14.5): SMTP+queueing@00:30:00
        Jun 16 12:36:24 x sm-mta[81147]: STARTTLS=client, relay=mxgw1.mail.nationalnet.com., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-SHA, bits=256/256
        Jun 16 12:36:24 x sm-mta[81148]: STARTTLS=client, relay=mxgw1.mail.nationalnet.com., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-SHA, bits=256/256

    Testing on the command line:

        telnet localhost 587
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        220 xxxt ESMTP Sendmail 8.14.5/8.14.5; Fri, 15 Jun 2012 18:28:03 -0500 (CDT)
        ehlo localhost
        250-xxxx Hello localhost [127.0.0.1], pleased to meet you
        250-ENHANCEDSTATUSCODES
        250-PIPELINING
        250-8BITMIME
        250-SIZE
        250-DSN
        250-AUTH GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN
        250-DELIVERBY
        250 HELP

    I am not using any certs or SSL right now - just trying to get basic auth to work. Anyone have any ideas?
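
    (A scripted version of the telnet test above that goes one step further and actually attempts AUTH - a minimal sketch using Python's standard smtplib; host, addresses and credentials are placeholders.)

        # Exercise SMTP AUTH the way a mail client would, with the full
        # protocol dialogue printed so the failing step is visible.
        import smtplib

        s = smtplib.SMTP("mail.example.com", 587)
        s.set_debuglevel(1)   # echo the SMTP conversation
        s.ehlo()
        try:
            s.login("user", "password")  # picks from the advertised AUTH mechanisms
            s.sendmail("user@example.com", ["someone@external.example"],
                       "Subject: auth test\r\n\r\ntest body\r\n")
            print("relay accepted after AUTH")
        finally:
            s.quit()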


  • Tomcat and IIS 7 both on different IPs and different ports

    - by n00b
    I have Tomcat and IIS 7 installed together on a Windows 2008 server. The machine has two IPs (134.133.1.1 and 134.133.2.2). I want Tomcat to handle 134.133.1.1 on port 80, and IIS to handle both 134.133.2.2 on port 80 AND 134.133.1.1 on port 443, but I can't seem to get the last two working together (I can get one or the other by itself on IIS, along with the first IP address on Tomcat).

    I have configured Tomcat to successfully listen to IP 134.133.1.1 on port 80 with this configuration:

        <Connector port="80" protocol="HTTP/1.1"
                   address="134.133.1.1"
                   connectionTimeout="20000"
                   redirectPort="8443" />

    I also have a site configured in IIS bound to IP 134.133.1.1 on port 443 (SSL). When I start IIS after Tomcat, I can reach both 134.133.1.1:80 (Tomcat) and 134.133.1.1:443 (IIS) successfully, as desired.

    The problem now comes when I want to introduce a new site via IIS at the new IP address. In IIS I have set up a new site at IP 134.133.2.2, port 80. I cannot start the site; the event log shows this error:

        Unable to bind to the underlying transport for [::]:80. The IP Listen-Only list may contain a reference to an interface which may not exist on this machine. The data field contains the error number.

    I think this is because IIS 7 tries to listen to port 80 on all IPs, and it can't, because Tomcat is taking port 80 for 134.133.1.1. From reading, the resolution is to specify the IP addresses you want IIS to bind to on port 80. The problem is, when I add 134.133.2.2 to the IP listen list, I then get a 404 when I try navigating to 134.133.1.1:443. I assume this is because IIS is no longer listening to ANY port on 134.133.1.1. How do I resolve this such that IIS will serve both sites?

    EDIT: Per request, my IIS binding for site A is 134.133.2.2 on port 80 (http) and 134.133.2.2 on port 443. For site B in IIS, the binding is 134.133.1.1 on port 443 (https). Note the IPs in this example are just for example purposes, but consistent with my setup.


  • Server 2008 R2 - wbadmin systemstatebackup - System Writer not found in the backup

    - by TWood
    I am trying to manually run a systemstatebackup command on my Server 2008 R2 box, and I get error code 2155347997 when I view the backup event log details. The command line tells me that I have log files written to c:\windows\logs\windowsserverbackup\, but I have no files of the .log type there. My command window tells me "System Writer is not found in the backup." However, when I run vssadmin list writers, I find System Writer in the list, and it shows a normal status with no last errors stored. I am running this from an elevated command prompt as a logged-on administrator account. My backup target path gives NETWORK SERVICE full control, and it has plenty of free space.

    Looking in the event log, I have two VSS errors 8194 that happen immediately before the Backup error 517, which has the error code 2155347997 listed. All three of these errors are a result of trying to run the systemstatebackup command. It's my belief that some VSS-related permission is failing and exiting the backup process before it ever gets started; because of this, the initial code that creates the log files must not be running, and this is why I have no files. When running the systemstatebackup command from the command prompt and watching the windowsserverbackup directory, I do see that a Wbadmin.0.etl file gets created, but it is deleted when the backup errors out and stops.

    I have looked online, and there are numerous opinions as to the cause of this error. These are the things I have tried to fix the issue before posting here:

        1. The machine runs an HP 1410i Smart Array controller, but at one time also used an LSI SCSI card. Using networkadminkb.com's KB a467, I found one LSI_SCSI entry under HKLM\System\CurrentControlSet\Services whose Start value was set to 0x0, and I changed it to 0x3. No changes.
        2. In HKLM\System\CurrentControlSet\Services\VSS\Diag I gave NETWORK SERVICE full control, where it previously only had "Special Permission". No changes.
        3. I followed KB2009272 to manually try to fix System Writer.

    These are all the things I have tried. What else should I look at to resolve this issue? It may be important to note that I run Mozy Pro on this server; it was known in the past to use VSS for copying operations and occasionally threw an error, but since an update last year those error event log entries have stopped.


  • Why might one host be unable to access the Internet, when it can ping the router and when all other hosts can?

    - by user1444233
    I have a Draytek Vigor 2830n. It serves a 192.168.3.0 LAN. It performs load-balancing across dual WAN ports, although I've turned off the second WAN to simplify testing. There are many hosts on the LAN. All IPs are allocated through DHCP, most freely allocated from the pool, but one or two bound to NIC MAC addresses.

    All hosts can access the Internet, save one. That host (192.168.3.100, or 'dot100' for short) gets allocated an IP address (and the right gateway address, DNS server addresses, subnet, etc.). dot100 can ping itself. It can ping the gateway and access the latter's web interface via port 80. It's responsive and loss-free (a sustained ping over a couple of minutes reports no data loss). Yet, for some reason that evades me, dot100 can't ping an external IP address or domain name. I suspect it never could, but it was getting some Internet access from a second adaptor (different subnet) which has now been turned off, which exposed the problem.

    On dot100, I've tried:

        - two operating systems (Windows 8 and Knoppix), to rule out anti-virus programs etc.
        - two physical adaptors
        - two cables, on each adaptor
        - two IPs (e.g. .100 and .103 assigned by MAC, and .26 from the pool)
        - both dynamic and assigned (MAC-bound) DHCP-allocated IPs

    but none of these experiments yielded any variation in the result. dot100 is a crucial host: it's a file server for the network, so I need it to be reliably allocated a consistent IP. Can anyone offer a potential solution or a way forward with the analysis, please?

    My guess: my analysis so far leads me to believe it's a router issue. I've checked the web interface very carefully. There are no filters set up in Firewall - General Setup or Filter Setup. I suspect a corrupted internal routing table, but the web UI shows this as the routing table:

        Key: C - connected, S - static, R - RIP, * - default, ~ - private
        *   0.0.0.0/       0.0.0.0          via 62.XX.XX.X     WAN1
        *   62.XX.XX.X/    255.255.255.255  via 62.XX.XX.X     WAN1
        S   82.YY.YYY.YYY/ 255.255.255.255  via 82.YY.YYY.YYY  WAN1
        C   192.168.1.0/   255.255.255.0    directly connected WAN2
        C~  192.168.3.0/   255.255.255.0    directly connected LAN2


  • UDP expected behaviour not matching test result

    - by ernst
    I have a local network topology structured as follows: three hosts and a switch in the middle. I am using a switch that supports 10/100/1000 Mbit/s full/half duplex connections. I have configured the hosts with static IPs 172.16.0.1-2-3/25. This is the output of ifconfig:

        eth0  Link encap: Ethernet HWaddr *****
              inet addr:172.16.0.3 Bcast:172.16.0.127 Mask:255.255.255.128
              UP BROADCAST MULTICAST MTU:1500 Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
              Interrupt:16

    The output on H1 and H2 matches this exactly. The hosts are mutually reachable, since I have tested the network with ping. I have forced the ethernet interface to work at 10M with:

        ethtool -s eth0 speed 10 duplex full autoneg on

    This is the output of ethtool eth0:

        Supported ports: [ TP ]
        Supported link modes:  10baseT/Half 10baseT/Full
                               100baseT/Half 100baseT/Full
                               1000baseT/Half 1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes: 10baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Speed: 10Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: g
        Wake-on: d
        Current message level: 0x000000ff (255)
                               drv probe link timer ifdown ifup rx_err tx_err
        Link detected: yes

    I am doing an experimental test using nttcp to calculate the goodput in the case where H1 and H2 send data to H3 at the same time. Since the three links have the same forced capability, and the amount of arriving data is 10M from H1 + 10M from H2 = 20M towards H3, a bottleneck effect would be expected and, due to the non-reliable nature of UDP, packet loss. But this doesn't happen: the output of the nttcp application shows the same number of bytes sent and received. This is the output of nttcp on H3:

        nttcp -T -r -u 172.16.0.2 & nttcp -T -r -u 172.16.0.1
        [1] 4071
             Bytes   Real s  CPU s  Real-MBit/s  CPU-MBit/s  Calls  Real-C/s  CPU-C/s
        l  8388608   13.74   0.05   4.8848       1398.0140   2049   149.14    42684.8
             Bytes   Real s  CPU s  Real-MBit/s  CPU-MBit/s  Calls  Real-C/s  CPU-C/s
        l  8388608   14.02   0.05   4.7872       1398.0140   2049   146.17    42684.8
        1  8388608   13.56   0.06   4.9500       1118.4065   2051   151.28    34181.1
        1  8388608   13.89   0.06   4.8310       1198.3084   2051   147.65    36623.0

    How is this possible? Am I missing something? Any help will be gratefully appreciated. Best regards.
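
    (As a cross-check on nttcp's byte totals, one can count delivered datagrams directly - a minimal sketch with Python's socket module; port 5001 and the datagram count are arbitrary choices, not from the original test.)

        # Receiver, run on H3: counts datagrams that actually arrive. If H1
        # and H2 each fire a known number at it simultaneously, any loss at
        # the 20M-into-10M bottleneck shows up as a shortfall in the count.
        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 5001))
        sock.settimeout(5)            # stop after 5 s of silence
        count = 0
        try:
            while True:
                sock.recvfrom(2048)
                count += 1
        except socket.timeout:
            pass
        print("datagrams received:", count)

        # Matching sender for H1/H2:
        #   s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        #   for _ in range(10000):
        #       s.sendto(b"x" * 1024, ("172.16.0.3", 5001))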


  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well

    - by Thomas
    OK, here's the scenario. Up until a few weeks ago, none of us noticed anything wrong with the corporate website; people were using it without complaint. Then a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me too - but only in Google Chrome and IE, not in Firefox.

    Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either), does not time out under any browser. To clarify: https://www.mysite.com/changes.php times out on POST, but the same page over http works fine. That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the request and response headers from the page, and they all follow the correct formats.

    Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based; they never time out on static images or HTML files. I've looked at the site from a variety of different servers, from my home and work workstations, and from my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus is going to hit every one of the machines to which I have access in exactly the same manner.

    My setup is: Server: Win2k3, IIS6, PHP 5.2.9-1. Clients: IE7, IE8, Chrome (regular and dev channel): frequent timeouts on PHP pages. Firefox 2, Firefox 3: no timeouts; Firebug shows no errors or even lengthy periods serving the pages.

    I've spent 2 days searching for any tech knowledge I can find, and my search parameters are all too general: everyone has problems loading SSL pages in IE and Chrome, for a wide variety of reasons. The infrequent nature of the timeouts, and the fact that no errors are being reported anywhere, is starting to drive me insane. Does anyone have any insight on a problem like this?


  • APC File Cache not working but user cache is fine

    - by danishgoel
    I have just got a VPS (with cPanel/WHM) to test what gains I could get in my application from using the APC file cache AND user cache. First I got PHP 5.3 compiled in as a DSO (Apache module). Then I installed APC via PECL through SSH. (First I tried the WHM module installer; it had the same problem, so I tried via SSH.) All seemed fine, and phpinfo showed APC loaded and enabled. Then I checked with apc.php. All seemed OK.

    But as I started testing my PHP application, the File Cache Information stats in APC state:

        Cached Files 0 ( 0.0 Bytes)
        Hits 1
        Misses 0
        Request Rate (hits, misses) 0.00 cache requests/second
        Hit Rate 0.00 cache requests/second
        Miss Rate 0.00 cache requests/second
        Insert Rate 0.00 cache requests/second
        Cache full count 0

    This means no PHP files are being cached, even though I had browsed through over 10 PHP files having multiple includes - so there should have been some cached files. But the user cache is functioning fine:

        User Cache Information
        Cached Variables 0 ( 0.0 Bytes)
        Hits 1000
        Misses 1000
        Request Rate (hits, misses) 0.84 cache requests/second
        Hit Rate 0.42 cache requests/second
        Miss Rate 0.42 cache requests/second
        Insert Rate 0.84 cache requests/second
        Cache full count 0

    (That output is actually from an APC caching test script which tries to retrieve and store 1000 entries and gives me the times - a sort of simple benchmark.)

    Can anyone help me here? Even though apc.cache_by_default = 1, no PHP files are being cached. These are my APC runtime settings:

        apc.cache_by_default 1
        apc.canonicalize 1
        apc.coredump_unmap 0
        apc.enable_cli 0
        apc.enabled 1
        apc.file_md5 0
        apc.file_update_protection 2
        apc.filters
        apc.gc_ttl 3600
        apc.include_once_override 0
        apc.lazy_classes 0
        apc.lazy_functions 0
        apc.max_file_size 1M
        apc.mmap_file_mask
        apc.num_files_hint 1000
        apc.preload_path
        apc.report_autofilter 0
        apc.rfc1867 0
        apc.rfc1867_freq 0
        apc.rfc1867_name APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix upload_
        apc.rfc1867_ttl 3600
        apc.serializer default
        apc.shm_segments 1
        apc.shm_size 32M
        apc.slam_defense 1
        apc.stat 1
        apc.stat_ctime 0
        apc.ttl 0
        apc.use_request_time 1
        apc.user_entries_hint 4096
        apc.user_ttl 0
        apc.write_lock 1

    Also, most PHP files are under 20 KB; thus apc.max_file_size = 1M is not the cause. I have also tried using apc_compile_file to force some files into the opcode cache, with no luck. I have also reinstalled APC with debugging enabled, but nothing shows in the error_log. I have also tried setting mmap_file_mask to /dev/zero and /tmp/apc.xxxxxx, and I have set /tmp permissions to 777, to no avail. Any clue, anyone?

    Update: I have tried the following, and none of them cause the APC file cache to populate:

        1. Setting apc.enable_cli = 1 AND running a script from the CLI
        2. Setting apc.max_file_size = 5M (just in case)
        3. Switching the PHP handler from DSO to FastCGI in WHM (then switching back to DSO, as it did not solve the problem)
        4. Even restarting the container


  • Windows 2003 GPO Software Restrictions

    - by joeqwerty
    We're running a Terminal Server farm in a Windows 2003 domain, and I found a problem with the Software Restriction GPO settings being applied to our TS servers. Here are the details of our configuration and the problem.

    All of our servers (domain controllers and terminal servers) run Windows Server 2003 SP2, and both the domain and forest are at Windows 2003 level. Our TS servers are in an OU where we have specific GPOs linked and inheritance blocked, so only the TS-specific GPOs are applied to these TS servers. Our users are all remote and do not have workstations joined to our domain, so we don't use loopback policy processing. We take a "whitelist" approach to allowing users to run applications, so only applications that we approve and add as path or hash rules are able to run. We have the Security Level in Software Restrictions set to Disallowed, and Enforcement is set to "All software files except libraries".

    What I've found is that if I give a user a shortcut to an application, they're able to launch the application even if it's not in the Additional Rules list of "whitelisted" applications. If I give a user a copy of the main executable for the application and they attempt to launch it, they get the expected "this program has been restricted..." message. It appears that the Software Restrictions are indeed working, except when the user launches an application using a shortcut as opposed to launching the application from the main executable itself - which seems to contradict the purpose of using Software Restrictions.

    My questions are: has anyone else seen this behavior? Can anyone else reproduce it? Am I missing something in my understanding of Software Restrictions? Is it likely that I have something misconfigured in Software Restrictions?

    EDIT - To clarify the problem a little bit: no higher-level GPOs are being enforced. Running gpresult shows that, in fact, only the TS-level GPOs are being applied, and I can indeed see my Software Restrictions being applied. No path wildcards are in use. I'm testing with an application at "C:\Program Files\Application\executable.exe", and the application executable is not in any path or hash rule. If the user launches the main application executable directly from the application's folder, the Software Restrictions are enforced. If I give the user a shortcut that points to the application executable at "C:\Program Files\Application\executable.exe", then they are able to launch the program.

    EDIT - Also, LNK files are listed in the Designated File Types, so they should be treated as executable, which should mean they are bound by the same Software Restrictions settings and rules.

    Read the article

  • Apache proxy pass in nginx

    - by summerbulb
    I have the following configuration in Apache:

        RewriteEngine On
        #APP
        ProxyPass        /abc/ http://remote.com/abc/
        ProxyPassReverse /abc/ http://remote.com/abc/
        #APP2
        ProxyPass        /efg/ http://remote.com/efg/
        ProxyPassReverse /efg/ http://remote.com/efg/

    I am trying to replicate this configuration in nginx. After reading some links, this is what I have:

        server {
            listen      8081;
            server_name localhost;
            proxy_redirect http://localhost:8081/ http://remote.com/;

            location ^~ /abc/ {
                proxy_set_header X-Forwarded-Host   $host;
                proxy_set_header X-Forwarded-Server $host;
                proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
                proxy_pass http://remote.com/abc/;
            }

            location ^~ /efg/ {
                proxy_set_header X-Forwarded-Host   $host;
                proxy_set_header X-Forwarded-Server $host;
                proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
                proxy_pass http://remote.com/efg/;
            }
        }

    I already have the following server configured:

        server {
            listen      8080;
            server_name localhost;

            location / {
                root  html;
                index index.html index.htm;
            }

            location ^~ /myAPP {
                alias path/to/app;
                index main.html;
            }

            location ^~ /myAPP/images {
                alias     another/path/to/images;
                autoindex on;
            }
        }

    The idea here is to overcome a same-origin-policy problem: the main pages are served from localhost:8080, but we need the AJAX calls to go to http://remote.com/abc. Both domains are under my control. With the above configuration, the AJAX calls either don't reach the remote server or get cut off by the cross-origin restriction. This setup worked in Apache and isn't working in nginx, so I am assuming it's a configuration problem. There is also an implicit question here: should I have two server declarations, or should I somehow merge them into one?

    EDIT: Added some more information.

    EDIT2: I've moved all the proxy_pass configuration into the main server declaration and changed all the AJAX calls to go through port 8080. I am now getting a new error: 502 Connection reset by peer. Wireshark shows packets going out to http://remote.com with a bad IP header checksum.
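    For reference, below is a sketch of the merged layout that EDIT2 describes: a single server on port 8080 that serves the static app and proxies /abc/ and /efg/ upstream, so the browser only ever talks to one origin. This is an assumption about the intended setup, not a tested fix for the 502:

        server {
            listen      8080;
            server_name localhost;

            location / {
                root  html;
                index index.html index.htm;
            }

            # Same-origin AJAX: the browser requests /abc/... on port 8080
            # and nginx forwards it upstream.
            location ^~ /abc/ {
                proxy_set_header Host            remote.com;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass       http://remote.com/abc/;
            }

            location ^~ /efg/ {
                proxy_set_header Host            remote.com;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass       http://remote.com/efg/;
            }
        }

    Setting Host explicitly just documents the intent, since nginx defaults the upstream Host header to the proxy_pass host anyway. As for the capture: bad IP checksums on outgoing packets in Wireshark are frequently an artifact of TX checksum offloading on the capturing NIC rather than a real network fault.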

    Read the article

  • Event ID 9331 MSExchangeSA & Event ID 9335 MSExchangeSA

    - by George
    I get these two Exchange 2010 global address book related event IDs:

    Event ID 9331, MSExchangeSA: OABGen encountered error 80004005 (internal ID 50101f1) accessing the public folder database while generating the offline address list for address list '/'. - \Default Offline Address List

    Event ID 9335, MSExchangeSA: OABGen encountered error 80004005 while cleaning the offline address list public folders under /o=xxxxx xxxx/cn=addrlists/cn=oabs/cn=Default Offline Address List. Please make sure the public folder database is mounted and replicas exist of the offline address list folders. No offline address lists have been generated. Please check the event log for more information. - \Default Offline Address List

    This is Exchange 2010 SP2 on Windows Server 2008 Enterprise Edition. Essentially the issue is that the global address book is not being updated on Outlook clients. We are using Outlook 2007 and 2010. So far I have tried running the following command:

        Update-FileDistributionService -Identity ExchangeServer -Type "OAB"

    And I tried this solution as well:

    1) Make sure the Microsoft Exchange System Attendant service is running. It is set to start automatically by default, but it doesn't; this is a known issue. Start the service manually. Once it is running, you will not get an error when trying to update the GAL.

    2) "Apply" any changes made to any address lists before the GAL will update Outlook properly. In Organization Configuration > Mailbox in the EMC, open the properties of the Default Global Address Book on the Offline Address Book tab. In the properties window, select the Address Lists tab; this shows which address lists make up the GAL.

    3) Close the properties window and select the Address Lists tab in Organization Configuration > Mailbox. Right-click each address list used by the default GAL and click "Apply" (make sure the "Immediately" radio button is selected).

    4) Last, go back to the Offline Address Book tab, right-click the GAL, and select "Update". After a few send/receives, the Outlook clients' Global Address List should show the latest changes.

    Neither of those solutions helped, so I am not really sure what to do here. I am also aware of changing the registry on each local computer, but that would be close to impossible, as we have 8 offices in 3 different countries. Any suggestions?

    EDIT 7.XII.2012 @ 10.35: I forgot to mention that we also rebuilt the address book, and that didn't help.
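    Since both errors point at the public folder database, it may be worth confirming from the Exchange Management Shell that the OAB's public folder distribution actually has somewhere to live. A hedged sketch: these are standard Exchange 2010 cmdlets, but the identity names are assumptions based on the defaults above:

        # Where is the OAB generated, and is public folder distribution enabled?
        Get-OfflineAddressBook "Default Offline Address List" |
            Format-List Server, PublicFolderDistributionEnabled, VirtualDirectories

        # Is a public folder database present and mounted?
        Get-PublicFolderDatabase -Status | Format-List Name, Server, Mounted

        # Force a rebuild once the database checks out
        Update-OfflineAddressBook "Default Offline Address List"

    If PublicFolderDistributionEnabled is $true but no mounted public folder database holds replicas of the OAB system folders, that would be consistent with the 9331/9335 pair, and the fix belongs on the public folder side rather than on the Outlook clients.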

    Read the article

  • kvm and qemu host: Is there a limit for max CPUs (Ubuntu 10.04)?

    - by Valentin
    Today we encountered really strange behaviour on two identical KVM/qemu hosts. Each host system has 4 x 10 cores, which means that 40 physical cores are displayed as 80 within the operating system (Ubuntu Linux 10.04, 64-bit). We started a Windows 2003 32-bit VM (1 CPU, 1 GB RAM; we changed those values multiple times) on one of the nodes and noticed that it took 15 minutes until the boot process began. During those 15 minutes a black screen is shown and nothing happens. libvirt and the host system show that the guest's qemu-kvm process is almost idle, and stracing the process only shows some FUTEX entries, nothing special. After those 15 minutes the Windows VM suddenly starts booting and the Windows logo appears; a few seconds later the VM is ready to be used. The VM itself is very performant, so this is not a performance issue.

    We tried to pin the CPUs with the virsh and taskset tools, but that only made things worse. When we boot the Windows VM with a Linux live CD, there is also a black screen for several minutes, though not as long as 15. Booting another VM (Ubuntu 10.04) on this host shows the same black-screen problem, but there it lasts only 2-3 minutes instead of 15.

    Summarizing: each guest on each of these identical nodes idles for a few minutes after being started, and only then does the boot process suddenly begin. We have observed that the idling happens right after the guest's BIOS is initialized. One of our employees had the idea to limit the number of CPUs with maxcpus=40 (matching the 40 physical cores) as a kernel parameter in GRUB, and suddenly the black-screen idling disappeared.

    Searching the KVM and qemu mailing lists, the internet, forums, Server Fault, and various other sites for known bugs turned up no useful results. Even asking in the dev IRC channels brought no new ideas; the people there recommended CPU pinning, but as stated before, that didn't help.

    My question is: is there some kind of limit on the number of CPUs for a qemu or KVM host system? Browsing the source code of the two tools showed that KVM would print a warning if the host had more than 255 CPUs, but we are not even scratching that limit.

    Some details about the host system:

        kernel       3.0.0-20-server
        kvm          1:84+dfsg-0ubuntu16+0.14.0+noroms+0ubuntu4
        kvm-pxe      5.4.4-7ubuntu2
        qemu-kvm     0.14.0+noroms-0ubuntu4
        qemu-common  0.14.0+noroms-0ubuntu4
        libvirt      0.8.8-1ubuntu6
        CPUs         4 x Intel(R) Xeon(R) CPU E7-4870 @ 2.40GHz, 10 cores each
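    For anyone who wants to reproduce the experiment, here is a sketch of the steps involved. The guest name and core ranges are placeholders, and the GRUB edit is the maxcpus workaround described above:

        # How many logical CPUs does the host expose? (expect 80: 40 cores + HT)
        grep -c '^processor' /proc/cpuinfo

        # Pin the guest's vCPU 0 to the first physical socket (cores 0-9)
        virsh vcpupin guest01 0 0-9          # guest01 is a placeholder name

        # Alternatively, pin the whole qemu-kvm process after the fact
        taskset -pc 0-9 $(pgrep -f 'qemu.*guest01')

        # The workaround: cap the kernel at the physical core count by
        # appending maxcpus=40 to the kernel command line, e.g. in
        # /etc/default/grub, then run update-grub and reboot:
        #   GRUB_CMDLINE_LINUX_DEFAULT="... maxcpus=40"

    Note that maxcpus=40 hides hyper-threading from the host entirely, so treat it as a diagnostic data point rather than a recommended permanent setting.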

    Read the article
