Search Results

Search found 12472 results on 499 pages for 'remote debugging'.

  • Windows 7 losing one of my displays after restart

    - by j_kubik
    I have an Intel DZ68BC motherboard with Intel HD graphics driving two monitors (one on DVI, one on HDMI » VGA). A friend asked me to test whether his NVIDIA graphics card worked in my computer (it was causing trouble in his), so I installed it in my computer along with the NVIDIA driver, and it worked quite well. I then removed the card, uninstalled everything NVIDIA-related I could find, and switched the monitors back to my Intel card. Since then, after every system start/restart, the system sees only the monitor on the HDMI » VGA connector and completely ignores the DVI monitor. I noticed that reinstalling the Intel video drivers makes the system recognize the second monitor until the next reboot; after a reboot, only the HDMI » VGA monitor is recognized. I also tried booting into safe mode and using Driver Sweeper to remove the remains of the NVIDIA drivers. While it seems that some drivers were removed, the situation didn't change. Now I am out of ideas, and I really wouldn't like to reinstall the system (again...). I also tried restoring the system to the state before this whole story, but that didn't change anything either.

    EDIT: I am still trying to troubleshoot this problem. The only starting point I had was driver reinstallation. I traced the part that restores the right settings down to this call:

        C:\Users\Jarek\Desktop\GFX_Win7_64_8.15.10.2696\x64\Drv64.exe -driverinf "C:\Users\Jarek\Desktop\GFX_Win7_64_8.15.10.2696\Graphics\igdlh64.inf" -flags 20 -keypath "Software\Intel\Difx64"

    This call fixes my displays, so as a workaround I will add it to my autorun for now. I am still looking for a better solution anyway...

    EDIT2: Using DriverView I made a list of the drivers in use both before and after fixing my display using the above command, then compared the logs. No drivers were removed by the fixing command. Drivers added by the fixing command: MS Remote Access serial network driver (asyncmac.sys) and security processor (spsys.sys). Drivers that changed base address (indicating a driver reload?): Canonical Display Driver (cdd.dll), Intel Graphics Kernel Mode Driver (igdkmd64.sys), and Monitor Driver (monitor.sys). The added drivers seem rather unrelated to the problem, and the reloaded drivers are just a consequence of installing a new driver file, so there is not much to go on here... I really cannot make heads or tails of it...
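
    For reference, a minimal sketch of how the autorun workaround could be wired up - a small, hypothetical Python wrapper around the driver-restore call above, to be launched at logon. The paths are the ones from the post and would need adjusting:

        import subprocess

        # Paths taken from the post; adjust to wherever the extracted
        # Intel driver package actually lives.
        DRV = r"C:\Users\Jarek\Desktop\GFX_Win7_64_8.15.10.2696\x64\Drv64.exe"
        INF = r"C:\Users\Jarek\Desktop\GFX_Win7_64_8.15.10.2696\Graphics\igdlh64.inf"

        # Re-run the settings-restore call that brings the DVI monitor back.
        subprocess.check_call([
            DRV,
            "-driverinf", INF,
            "-flags", "20",
            "-keypath", r"Software\Intel\Difx64",
        ])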

  • Apache error log interpretation

    - by HTF
    It looks like someone gained access to my server. How can I find out which Apache vHost this log relates to? How are these commands from the log being invoked, and how/why are they printed to the log file - is this some remote shell or a PHP script?

    /var/log/httpd/error_log:

        mkdir: cannot create directory `/tmp/.kdso': File exists
        --2014-06-13 13:29:17-- http://updates.dyndn-web.com/abc.txt
        Resolving updates.dyndn-web.com... 94.23.49.91
        Connecting to updates.dyndn-web.com|94.23.49.91|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 5055 (4.9K) [text/plain]
        Saving to: `abc.txt'
        0K .... 100% 303K=0.02s
        2014-06-13 13:29:17 (303 KB/s) - `abc.txt' saved [5055/5055]
        % Total % Received % Xferd Average Speed Time Time Time Current
                                   Dload  Upload  Total  Spent  Left  Speed
        ^M 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0^M101 5055 101 5055 0 0 79686 0 --:--:-- --:--:-- --:--:-- 154k
        minerd64: no process killed
        minerd32: no process killed
        named: no process killed
        kernelupdates: no process killed
        kernelcfg: no process killed
        kernelorg: no process killed
        ls: cannot access /tmp/.ICE-unix: No such file or directory
        mkdir: cannot create directory `/tmp': File exists
        --2014-06-13 13:29:18-- http://updates.dyndn-web.com/64.tar.gz
        Resolving updates.dyndn-web.com... 94.23.49.91
        Connecting to updates.dyndn-web.com|94.23.49.91|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 205812 (201K) [application/x-tar]
        Saving to: `64.tar.gz'
        0K .......... .......... .......... .......... .......... 24%  990K 0s
        50K .......... .......... .......... .......... .......... 49% 2.74M 0s
        100K .......... .......... .......... .......... .......... 74% 2.96M 0s
        150K .......... .......... .......... .......... .......... 99% 3.49M 0s
        200K 100% 17.4M=0.1s
        2014-06-13 13:29:18 (1.99 MB/s) - `64.tar.gz' saved [205812/205812]
        sh: ./kernelupgrade: Permission denied
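
    One way to start tying the activity to a vHost is to search each vHost's access log for requests around the timestamps in the error log above. A rough sketch - the log directory, file pattern, and timestamp format are assumptions, not from the post:

        import glob

        # Timestamps lifted from the error log above; combined-format access
        # logs use a different form (e.g. 13/Jun/2014:13:29), so match on that.
        needle = "13/Jun/2014:13:29"

        for path in glob.glob("/var/log/httpd/*access*log*"):
            with open(path, errors="replace") as f:
                for line in f:
                    if needle in line:
                        print(path, line.rstrip())

    A request hitting one vHost's access log at the same moment the downloads ran would point at the compromised site (often a vulnerable PHP script reachable through that vHost).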

  • Share the same subnet between Internal network and VPN Clients

    - by Pascal
    I would like to set up a configuration where VPN clients connecting to my Forefront TMG can access all the resources of my Internal network without having to use the option "Use default gateway on remote network" in the VPN's TCP/IPv4 advanced settings. This is important to me because the clients can then use their own internet connection while accessing my network through the VPN (the security implications of this are acceptable in my scenario). My Internal network runs on 10.50.75.x, and I set up Forefront TMG to relay the Internal network's DHCP to the VPN clients, so they get IPs from the same range as the Internal network. This setup initially works: the VPN clients use their own internet and can access anything on the internal network. However, after a while, HTTP proxy traffic from the Internal network starts getting routed to the IP of the RRAS Dial-In interface instead of the IP of the Internal network's gateway. When this happens, the HTTP proxy traffic starts getting denied, for obvious reasons.

    My first question: does this happen because Forefront TMG wasn't designed to handle the scenario I described above, and it "loses itself"? My second question: is there any way to solve this problem, either through configuration or firewall policies? My third question: if there's no way it can work with the scenario above, is there another scenario that will solve my problem and do what I'd like it to do properly?

    Below are my network rules:

        1 => Local Host Access => Route => Local Host => All Networks
        2 => VPN Clients to Internal Network => Route => VPN Clients => Internal
        3 => Internet Access => NAT => Internal, Perimeter, VPN Clients => External
        4 => Internal to Perimeter => Route => Internal, VPN Clients => Perimeter

    Thanks!

  • SMBfs mounting OK, listing OK, Read KO, smbclient OK

    - by Kwaio
    I've tried to make the title as meaningful as I could, but it still looks ugly.

    The premises: we are using RHEL3-U8 as the OS on most servers here (don't ask me why, or suggest upgrading - it's not on today's schedule), which means the kernel is 2.4.21. I have no access to the remote server, but I know it is a NetApp NAS rack.

        $> smbclient --version
        Version 3.0.9-1.3E.9

    Here is the /etc/fstab line:

        //NASHOSTNAME/share /mnt/mydir smbfs ro,uid=123,gid=123,workgroup=XXXX,credentials=/somefile 0 0

    Here is the corresponding mount output line:

        //NASHOSTNAME/share on /mnt/mydir type smbfs (0)

    The symptoms: I can list the share without problems, and even cd around in there. The issue appears if I try to read any file:

        $> cat /mnt/mydir/fileX.txt
        cat: /mnt/mydir/fileX.txt: Input/output error

    In the system logs (/var/log/kernel, for example) the following errors appear:

        Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
        Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
        Jul 30 15:40:02 hostname kernel: smb_open: fileX.txt open failed, result=-5
        Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
        Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
        Jul 30 15:40:02 hostname kernel: smb_open: fileX.txt open failed, result=-5
        Jul 30 15:40:02 hostname kernel: smb_readpage_sync: fileX.txt open failed, error=-5

    The ERRHRD code 0x001F error means "general hardware failure," although Samba sometimes seems to use it for a different purpose; see http://www.ubiqx.org/cifs/SMB.html

    [Strange behaviour alert] Additional information: there is another SMB mountpoint on the system, pointing to a (Linux) host running Samba, and that one works.

    What I have tried: I added debug=4 to the mount options and remounted the share, but the logs still look the same. I tried mounting the share with smbclient, and I am able to fetch files with the get command. Both targets are in the same subnet, so a network problem should be ruled out, even though the LAN goes through a VPN with optimizers; the MTU has already been decreased to 1450. I can also mount the share through NFS, but then the files are all root.root 700 and I need to read them as another user...

  • Hyper-V Ubuntu Networking Problems Copying Large Amounts of Data

    - by Anonymous
    I am trying to copy a large amount (about 50 GB) of data over my network from a Hyper-V-hosted virtual machine running Ubuntu 11.04 (Natty Narwhal) to another (non-virtual) Ubuntu host that I plan to use for testing upgrades to one of our web applications. The problem I am having is with the virtual machine, which I shall refer to in what follows as "source.host". This machine is running 64-bit Ubuntu Server with the 2.6.38-8-server kernel and the Microsoft Linux Integration Components for Hyper-V kernel modules (hv_utils, hv_timesource, hv_netvsc, hv_blkvsc, hv_storvsc, and hv_vmbus) loaded. It uses a Hyper-V "synthetic network adapter" for its networking interface.

    To do the copy, I log on to the machine with the data and run the following commands (call the remote machine "destination.host"):

        $ cd /path/to/data
        $ tar -cvf - datafolder/ | ssh [email protected] "cat > ~/data.tar"

    This runs for a while and then suddenly stops after transferring somewhere from 2-6 GB. The terminal on the source.host machine displays a "Write failed: broken pipe" error. The odd part is this: after this occurs, the source.host machine is no longer able to talk to the rest of the network. I cannot ping any other hosts on the network from the source.host machine, and I cannot ping the source.host machine from any other host on the network. I am equally unable to access any of the web services hosted on source.host. Running ifconfig on source.host shows the network adapter to be up and running as usual, with the correct IP address and everything.

    I tried restarting the networking service with

        $ /etc/init.d/networking restart

    but the problem does not go away. Restarting the machine makes it capable of talking to the network again - it can ping and be pinged by other hosts, and the web services are also accessible and usable as normal - but attempting the copy operation again results in the same failure, requiring another restart.

    As an experiment, I tried replacing the tar/ssh pipeline above with a straight scp:

        $ scp -r datafolder/ [email protected]:~

    but to no avail. Thinking that the issue might have to do with the kernel packet-send buffers filling up, I tried increasing the buffer size to 12 MB (up from the 128 KB default) with

        # echo 12582911 > /proc/sys/net/core/wmem_max

    but this also had no effect. I'm guessing at this point that it might be a problem with the Microsoft synthetic network driver, but I don't really know. Does anyone have any suggestions? Thank you very much in advance!

  • USB connection is unstable with Nexus S 2.3.4 on AMD 64 running 64-bit Windows 7, but works with 32-bit Windows Vista

    - by Mike
    The USB connection is unstable with a Nexus S (Android 2.3.4) on an AMD 64 machine running 64-bit Windows 7, but it works with 32-bit Windows Vista.

    Problem description: on the 64-bit Windows 7 machine my Nexus S appears to connect, but then it disconnects moments later. Neither accessing USB storage nor loading an Android application package file (APK) using the Android Debug Bridge (ADB) works. On 32-bit Windows Vista, using the same USB cable, USB storage works. I haven't tried the ADB on 32-bit Windows Vista.

    Reproduction steps for USB storage (I have provided the reproduction steps for USB storage and not ADB because if one isn't working, the other isn't working either, and the USB storage steps are shorter to document):

    1. Connect the USB cable to the Nexus S and my Windows 7 machine.
       Effect: the "USB Mass Storage, USB Connected" dialog appears with the button "Turn on USB storage."
    2. Click "Turn on USB storage."
       Effect: the "working circle" appears. A dialog briefly appears saying "USB storage in use," then it either returns me to step 1 (now that I am running 2.3.4) or is replaced with the Nexus S's application homepage (while I was running 2.3.3). I'm not sure if the version matters, but I mention it for completeness.

    On the 32-bit Windows Vista machine the connection is stable. I am able to navigate through the Nexus S file system and create, read, update, and delete files, etc. I haven't tried connecting with the ADB.

    Troubleshooting summary. Tried and failed:

    - Uninstalling and reinstalling the Android USB drivers, including removing the files.
    - Uninstalling my custom software.
    - Pulling the Nexus S's battery.
    - Restarting the Nexus S.
    - Restarting 64-bit Windows 7.
    - Changing USB ports on the 64-bit Windows 7 box.
    - Comparing the dates and file sizes of the DLLs in my google-usb_driver\amd64 directory and the windows\System32 directory. They match. (The sizes in the google-usb_driver\i386 directory do not match, which is expected.)
    - Turning off debugging mode on the Nexus S; it does not resolve the problem.
    - Searching Google.

    Tried and succeeded:

    - Connecting to another machine (Windows Vista) using the same USB cable and Nexus S phone.

    Troubleshooting observations: I notice that uninstalling the device drivers and deleting the files, then reinstalling the drivers, rebooting 64-bit Windows 7, unplugging the Nexus S, and plugging it back in occasionally helps for a short amount of time (minutes to hours, not days). When it is working, I can both access the Nexus S's drive and load/test applications using the ADB. I have observed some wonky behavior in the Device Manager that I haven't tracked down: sometimes the black Nexus S image appears in the list of devices; sometimes the image displays as a computer with a green ISA card; sometimes it neither appears at the top level of devices nor under "other devices," but it does appear under "disk drives" as "Android UMS Composite USB Device."

    System configuration: the Nexus S is running Android OS 2.3.4; "Settings\about phone\System updates" indicates that it is up to date as of May 21st, 2011. Both 32-bit Windows Vista and 64-bit Windows 7 are up to date. The Windows Vista system is running on a 32-bit Intel processor; Windows 7 is running on a 64-bit AMD processor. I have done Android development on both systems, but I usually develop on the 64-bit Windows 7 machine.

  • Can't ping IP over bridge

    - by tmn29a
    I'm unable to ping another host over a bridge I created, and I can't see the error -.- It's a remote machine running Debian stable with some backports, on which I want to set up DHCP on the new subnet 172.30.xxx.xxx to be used for KVM guests.

    ifconfig:

        bond0     Link encap:Ethernet  HWaddr e4:11:5b:d4:94:30
                  inet addr:10.54.2.84  Bcast:10.54.2.127  Mask:255.255.255.192
                  inet6 addr: fe80::e611:5bff:fed4:9430/64 Scope:Link
                  UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
                  RX packets:34277 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:18379 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:2638709 (2.5 MiB)  TX bytes:2887894 (2.7 MiB)

        br0       Link encap:Ethernet  HWaddr f2:fc:4d:7f:15:f0
                  inet addr:172.30.254.66  Bcast:172.30.254.127  Mask:255.255.255.192
                  inet6 addr: fe80::f0fc:4dff:fe7f:15f0/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:252 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:10800 (10.5 KiB)

    Pings:

        ping -I br0 172.30.xxx.65
        PING 172.30.xxx.65 (172.30.xxx.65) from 172.30.xxx.66 br0: 56(84) bytes of data.
        --- 172.30.xxx.65 ping statistics ---
        3 packets transmitted, 0 received, 100% packet loss, time 2017ms

        ping -I bond0 172.30.254.65
        PING 172.30.xxx.65 (172.30.xxx.65) from 10.54.2.84 bond0: 56(84) bytes of data.
        64 bytes from 172.30.x.65: icmp_req=1 ttl=64 time=0.599 ms
        64 bytes from 172.30.x.65: icmp_req=2 ttl=64 time=0.575 ms
        64 bytes from 172.30.x.65: icmp_req=3 ttl=64 time=0.565 ms
        --- 172.30.x.65 ping statistics ---
        3 packets transmitted, 3 received, 0% packet loss, time 1999ms
        rtt min/avg/max/mdev = 0.565/0.579/0.599/0.031 ms

    Route:

        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        172.30.x.64     *               255.255.255.192 U     0      0        0 br0
        10.54.x.64      *               255.255.255.192 U     0      0        0 bond0
        default         10.54.x.65      0.0.0.0         UG    0      0        0 bond0
        default         172.30.x.65     0.0.0.0         UG    0      0        0 br0

    The interfaces file:

        cat /etc/network/interfaces
        auto lo br0
        iface lo inet loopback

        # Bonding interface
        auto bond0
        iface bond0 inet static
            address 10.54.x.84
            netmask 255.255.255.192
            network 10.54.x.64
            gateway 10.54.x.65
            slaves eth0 eth1
            bond_mode active-backup
            bond_miimon 100
            bond_downdelay 200
            bond_updelay 200

        iface br0 inet static
            bridge_ports bond0
            address 172.30.x.66
            broadcast 172.30.x.127
            netmask 255.255.x.192
            gateway 172.30.x.65
            bridge_maxwait 0

    If you need more info, please ask. Thanks for your help!

  • What is the difference between running a Windows service vs. running through shell?

    - by Zack
    I am trying to troubleshoot an issue on a Windows 2008 server where attempting to connect to a "Timberline Data Source" ODBC driver crashes if the call is made in a service context, but succeeds if the call is initiated manually in a Remote Desktop session. I have set the service to run as my user. I'm wondering: all else being equal (user, machine, etc.), are there any fundamental security/environment differences between running a process as a service vs. manually?

    --- Implementation details ---

    In case it is helpful for anyone: I had a system that started as an attempt to connect to a Timberline database using ODBC and a Python CGI script called via IIS 7. The script itself works fine; however, as soon as I attempt to perform the ODBC connect function, the script crashes without throwing an exception. The script was able to connect fine when executed via the command line. The same thing happened when using a C#/.NET service, or when attempting to run via Apache, Windows Scheduler, or even a 3rd-party scheduling tool. With the last option (the 3rd-party scheduling tool, pycron) I set the service up to log in as my user and had the same issue (I confirmed via Task Manager that the process was, in fact, running as me). It just doesn't make sense to me why a service, which should be running as my user, appears to still be operating in a different security context or environment.

    Also, if it's important: the Timberline database is referenced by computer name on the network ("\\timberline-server\Timberline Office\Accounts\AT" or something to that effect). I also realized that, as Joel pointed out, the server DOES have a mapped drive ("Y:", which is mapped to "\\timberline-server\Timberline Office"). The DSN is set up at the "System DSN" level, which, according to the ODBC Administration Tool, means that the DSN is available to users and services.

    Since I'm not allowed to answer this question yet, I'll post the solution that I arrived at: as Joel Coel mentioned, there actually was a mapped-drive scenario. I didn't realize this because the DSN specified a path using UNC. However, it seems the actual Timberline driver referred to a mapped drive. Since services don't start with the mapped drive, I was forced to add the drive-mapping code to my service. Since it was written in Python, I used code from a Stack Overflow answer that was able to map the drive on the fly.
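
    For reference, a minimal sketch of that drive-mapping step, using the "Y:" mapping and share named in the post; the actual Stack Overflow code the poster used may differ:

        import subprocess

        DRIVE = "Y:"
        SHARE = r"\\timberline-server\Timberline Office"

        # Services start without the user's mapped drives, so map the share
        # explicitly before touching the ODBC driver that depends on it.
        subprocess.check_call(["net", "use", DRIVE, SHARE])

    This illustrates the underlying difference the poster ran into: drive mappings are per-logon-session state, and a service's logon session never runs the shell logic that restores them, even when the service account is the same user.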

  • IE9 Error: There was a problem sending the command to the program

    - by HK1
    I'm working on a new/fresh Windows 7 32-bit machine that now has IE9 installed. The user uses the Dell Stardock application as his primary "desktop" (all his links are there). When we place an internet link there and click on it, we get the following error message: "There was a problem sending the command to the program." To me this indicates that IE9 is having trouble going to the website we want, which should get passed as a parameter to the browser when it opens.

    I don't think this is a Stardock/ObjectDock problem, because we also have some other problems with internet links. For example, we cannot move an internet link from the desktop to the Quick Launch area on the task bar. When we do try, it forces us to put the link into the IE menu (under the IE icon) instead of allowing us to have a shortcut there as its own entry. I should mention, however, that links on the desktop and in the taskbar do work as we expect them to (without showing the above error message).

    It appears that this problem started after installing Windows updates. Since we installed a whole bunch of updates at once, I have no idea which one caused the problem. I did have Google Chrome installed, but I uninstalled it since the user wants to use IE. The problem started before I uninstalled Chrome. I also reset the browser settings on IE9; it didn't help. Next I uninstalled IE9, which took me back to IE8. This actually did resolve the problem, but the problem came back as soon as I installed IE9 again.

    We have Verizon Internet Security installed. It's actually a McAfee product rebranded to look like Verizon. I'm not real crazy about this software, but the customer has a subscription, so we're not planning to change it. I have no reason to believe that it is causing the problem, and yet I know that security software is often to blame for strange issues.

    I've looked at the registry settings for the following keys, and everything appears to be OK for every single one of them:

        HKEY_CLASSES_ROOT\.htm
        HKEY_CLASSES_ROOT\.html
        HKEY_CLASSES_ROOT\http\shell\open\command
        HKEY_CLASSES_ROOT\http\shell\open\ddeexec\Application
        HKEY_CLASSES_ROOT\https\shell\open\command
        HKEY_CLASSES_ROOT\https\shell\open\ddeexec\Application
        HKEY_CLASSES_ROOT\htmlfile\shell\open\command
        HKEY_CLASSES_ROOT\Microsoft.Website\Shell\Open\Command

    Edit 1: I've found two potential solutions, but I won't be able to try them until tomorrow, since this is a remote client's machine. One is to disable the "Windows Font Cache" service. The other is to clear the IE cache and browsing history. I see there are lots of other suggestions online, but if you take the time to read them through, you'll see that those other suggestions didn't fix the problem.

  • Network unreachable on Ubuntu guest after trying to set up a host only network on Virtualbox

    - by gkb0986
    I have a Mac OS X host and a bunch of guests, including Ubuntu and Arch Linux. I was trying to set up a host-only network at eth1 to let me ssh into the system, but now eth0 isn't working properly either. Ubuntu can no longer connect to remote hosts or browse the internet; it tells me that the network is unreachable. What's gone wrong here? I've included some diagnostics below.

        $ ifconfig
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1 Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING MTU:16436 Metric:1
                  RX packets:10968 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:10968 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:897264 (897.2 KB) TX bytes:897264 (897.2 KB)

    Other diagnostic commands and their output:

        $ sudo lspci -n
        00:00.0 0600: 8086:1237 (rev 02)
        00:01.0 0601: 8086:7000
        00:01.1 0101: 8086:7111 (rev 01)
        00:02.0 0300: 80ee:beef
        00:03.0 0200: 8086:100e (rev 02)
        00:04.0 0880: 80ee:cafe
        00:05.0 0401: 8086:2415 (rev 01)
        00:06.0 0c03: 106b:003f
        00:07.0 0680: 8086:7113 (rev 08)
        00:0d.0 0106: 8086:2829 (rev 02)

        $ sudo lshw -c network
        *-network DISABLED
             description: Ethernet interface
             product: 82540EM Gigabit Ethernet Controller
             vendor: Intel Corporation
             physical id: 3
             bus info: pci@0000:00:03.0
             logical name: eth0
             version: 02
             serial: 08:00:27:7d:22:df
             size: 1Gbit/s
             capacity: 1Gbit/s
             width: 32 bits
             clock: 66MHz
             capabilities: pm pcix bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=e1000 driverversion=7.3.21-k8-NAPI duplex=full firmware=N/A latency=64 link=no mingnt=255 multicast=yes port=twisted pair speed=1Gbit/s
             resources: irq:19 memory:f0000000-f001ffff ioport:d010(size=8)

        $ lsmod
        Module                  Size  Used by
        nls_utf8               12557  1
        isofs                  40257  1
        vboxsf                 43743  2
        vesafb                 13844  1
        snd_intel8x0           38570  2
        snd_ac97_codec        134869  1 snd_intel8x0
        ac97_bus               12730  1 snd_ac97_codec
        snd_pcm                97275  2 snd_intel8x0,snd_ac97_codec
        snd_seq_midi           13324  0
        snd_rawmidi            30748  1 snd_seq_midi
        snd_seq_midi_event     14899  1 snd_seq_midi
        rfcomm                 47604  0
        snd_seq                61929  2 snd_seq_midi,snd_seq_midi_event
        bnep                   18281  2
        bluetooth             180113 10 rfcomm,bnep
        ppdev                  17113  0
        psmouse                97519  0
        snd_timer              29990  2 snd_pcm,snd_seq
        joydev                 17693  0
        snd_seq_device         14540  3 snd_seq_midi,snd_rawmidi,snd_seq
        vboxvideo              12622  1
        serio_raw              13211  0
        snd                    79041 11 snd_intel8x0,snd_ac97_codec,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        soundcore              15091  1 snd
        vboxguest             235498  7 vboxsf
        parport_pc             32866  0
        drm                   241971  2 vboxvideo
        i2c_piix4              13301  0
        snd_page_alloc         18529  2 snd_intel8x0,snd_pcm
        mac_hid                13253  0
        lp                     17799  0
        parport                46562  3 ppdev,parport_pc,lp
        usbhid                 47238  0
        hid                    99636  1 usbhid
        e1000                 108589  0

  • Router behind Router - second router (and its clients) cannot be "seen" even after both routers are DMZ'd

    - by Trioke
    A couple of terminology notes I should get out of the way for consistency's sake throughout the post:

    - External router/modem: SMC 8014WG - external IP 173.32.144.134 - internal IP 192.168.0.1
    - Internal router: LinkSys WRT120N - "external" IP 192.168.0.175 - internal IP 192.168.1.1 - connected via Ethernet cable (a really long one, from the basement to the second floor)
    - PC: IP 192.168.1.200 - connected wirelessly via WPA2 Personal
    - Laptop: used to try to diagnose the problem; a 4th machine in the setup, which won't be part of the final setup once everything works

    The actual problem: I've tried setting the LinkSys router as a DMZ'd client on the SMC router, and then DMZ'd the actual PC on the LinkSys. So the DMZ looks like this:

    - On the SMC, the client with IP 192.168.0.175 is DMZ'd.
    - On the LinkSys, the client with IP 192.168.1.200 is DMZ'd.

    No dice. I then tried port forwarding the necessary port on the SMC to the LinkSys (let's just say port 80), then port forwarded port 80 on the LinkSys to the PC. Same as the DMZ scenario above, but with port forwarding instead. No dice, still :(.

    Now here's where I went stupid - and tell me if one should never do this - I enabled both DMZ and port forwarding at the same time. I fired up Opera - my browser of choice ;) - typed in 173.32.144.134:6333 and... third time is the charm, they say? Well, clearly not. Otherwise I wouldn't be here ;).

    To diagnose the problem, I enabled "Allow remote access to the Admin panel" on the LinkSys router and specified port 6333 as the port to use. I port forwarded port 6333 on the SMC to 192.168.0.175 and accessed my external IP of 173.32.144.134:6333 in hopes of seeing the Admin panel... no dice (I think I've run out of dice by now ;)).

    So, to see where the problem was, I connected a laptop to the SMC via LAN cable and typed in 192.168.0.175:6333, and voila, Admin panel access! So the problem looks like it lies with the SMC - but that's as far as I've got. I've done the port forwarding, the DMZ'ing, and I've even disabled the built-in firewall for good measure, but nothing worked.

    So here I am, unable to connect to the PC behind the internal router externally, and without anything to go on other than to come here and ask for the wisdom of the superuser folks :). If any more detail is required, just ask. (Apologies in advance if questions should never be this long-winded!)

  • Reliable file copy (move) process - mostly Unix/Linux

    - by mfinni
    Short story: we need a rock-solid, reliable file-mover process. We have source directories, often being written to, that we need to move files from. The files come in pairs: a big binary and a small XML index. We get a CTL file that defines these file bundles. There is a process that operates on the files once they are in the destination directory, and it gets rid of them when it's done. Would rsync do the best job, or do we need something more complex?

    Long story as follows: we have multiple sources to pull from. One set of directories is on a Windows machine (which does have Cygwin and an SSH daemon), and a whole pile of directories are on a set of SFTP servers (most of these are also Windows). Our destinations are a list of directories on AIX servers. We used to use a very reliable Perl script on the Windows/Cygwin machine when it was our only source. However, we're working on getting rid of that machine, and there are other sources now - the SFTP servers - that we cannot presently run our own scripts on. For security reasons, we can't run the copy jobs on our AIX servers; they have no access to the source servers.

    We currently have a homegrown Java program on a Linux machine that uses SFTP to pull from the various new SFTP source directories, copies to a local tmp directory, verifies that everything is present, then copies that to the AIX machines and deletes the files from the source. However, we keep finding bugs and poorly handled error cases. None of us are Java experts, so fixing/improving this may be difficult.

    Our main concern: with a remote source (SFTP), will rsync leave alone any file still being written? Some of these files are large. From reading the docs, it seems like rsync will be very good about not removing the source until the destination is reliably written. Does anyone have experience confirming or disproving this?

    Additional info: we care about the ingestion process that operates on the files once they are in the destination directory. We don't want it operating on files while we are in the process of copying them; it waits until the small XML index file is present, so our copy jobs are supposed to copy the XML file last (see the sketch after this post). Sometimes the network has problems, sometimes the SFTP source servers crap out on us, and sometimes we typo the config files so that a destination directory doesn't exist. We never want to lose a file due to this sort of error, and we need good logs.

    If you were presented with this, would you just script up some rsync? Or would you build or buy a tool - and if so, what would it be (or what technologies would it use)? I (and others on my team) are decent with Perl.
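
    For reference, a minimal sketch of the "XML index last" ordering described above, so the ingester never sees a bundle before the binary is complete. The .bin/.xml names, the temp-name trick, and the size check are illustrative assumptions, not the poster's actual job:

        import shutil
        from pathlib import Path

        def move_bundle(src: Path, dst: Path, base: str) -> None:
            """Copy <base>.bin before <base>.xml; the ingester keys off the XML."""
            binary, index = src / f"{base}.bin", src / f"{base}.xml"

            tmp = dst / f"{base}.bin.part"            # copy under a temp name...
            shutil.copy2(binary, tmp)
            if tmp.stat().st_size != binary.stat().st_size:
                raise IOError(f"size mismatch copying {binary}")
            tmp.rename(dst / f"{base}.bin")           # ...then rename into place

            shutil.copy2(index, dst / f"{base}.xml")  # index last: signals "ready"
            binary.unlink()                           # remove source only after
            index.unlink()                            # both copies succeeded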

  • Need help troubleshooting highly variable ping times

    - by Elliot.Bradshaw
    I'm at work using Citrix (think Remote Desktop) to connect to client sites. With my job I have to write a fair bit of code while connected remotely via Citrix, so the latency of my internet connection is important. If I'm getting ping times above 250 ms, it becomes almost impossible to scroll, click, or type with accuracy.

    Recently my Comcast business internet has been exhibiting highly variable ping times. If I ping google.com, I'll get pings ranging from 9 ms all the way up to 1300 ms. The problem seems to be at its worst during the hours of 1 PM to 4:30 PM; outside of those hours the variance in pings settles down, mostly between 9 ms and 50 ms. The signal-to-noise ratio and upstream power are both fine on my modem - the values are here: http://pastebin.com/D4hWGPXf

    I ran a trace route from my computer to google.com (the results are here: http://pastebin.com/GcdjYvMh) and did another test ping to the IP of the first hop outside our local network (73.98.44.1) - the variance in ping times was exactly the same as when pinging Google. Connecting directly to the cable modem by CAT5 makes no difference. Here is a screenshot demonstrating the variance of the ping times: http://postimage.org/image/haocdeauv/full/ - as you can see, it can get pretty bad.

    Three Comcast techs have been out (two of them while the problem wasn't happening), and they, as well as the regional tier 2 Comcast support, were unable to diagnose the problem. I now have a ticket open with tier 3 support, but have yet to hear back from them.

    Does anyone know what could cause these sorts of problems, or have any idea from the traceroute above where they could be originating? The regional tier 2 guy tried to tell me that what I'm seeing is normal - are highly variable ping times like that ever acceptable? Anything I should ask Comcast to do or look at to get this problem fixed? Any tips/advice much appreciated!

    Edit: This is Comcast cable internet at a small start-up; we've ruled out congestion in our private LAN as a cause (i.e., no one's watching YouTube when the pings become variable).

    Update: Tier 3 Comcast support advised swapping out the modem. A tech came here today and did that - same problem persists.
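
    To hand tier 3 hard data, one could log a timestamped latency sample against that first hop every second. A rough sketch - the hop IP is from the traceroute above, and the output parsing assumes Unix-style ping; it is not anything Comcast asked for:

        import csv, re, subprocess, time
        from datetime import datetime

        HOP = "73.98.44.1"  # first hop outside the LAN, per the traceroute

        with open("latency_log.csv", "a", newline="") as f:
            writer = csv.writer(f)
            while True:
                out = subprocess.run(["ping", "-c", "1", HOP],
                                     capture_output=True, text=True).stdout
                m = re.search(r"time=([\d.]+) ms", out)
                writer.writerow([datetime.now().isoformat(),
                                 m.group(1) if m else "timeout"])
                f.flush()
                time.sleep(1)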

  • Unable to connect to site on a different port

    - by JohnMerlino
    I have a domain registered at GoDaddy named http://mysite.com/. I logged into GoDaddy and went to All Products > Domains > Domain Management. I clicked on the appropriate domain, which took me to the Domain Details page. I clicked Launch under DNS Manager, which took me to the Zone File Editor. I noticed that notify.mysite.com was pointing to the IP address of a dead server, so I switched it to an operating server. Then I pinged the domain to see where it was pointing, and it was correctly pointing to the working server.

    So I copied the default configuration under sites-available (sudo cp default notify.mysite.com) and then edited it to point to a different document root and serve files on a different port:

        Listen 1740
        Listen 64.135.xx.xxx:1740
        # I also tried this as well:
        NameVirtualHost 64.135.xx.xxx:1740

        <VirtualHost 64.135.xx.xxx:1740>
            ServerAdmin [email protected]
            ServerName notify.mysite.com
            DocumentRoot /var/www/test/public
            <Directory /var/www/test/public>
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    Then I enabled the virtual host, went to the document root, and added an index.html file with some text in it. I restarted Apache; the restart gave no errors. Then I typed the correct domain into the URL bar (http://notify.mysite.com:1740/) and I get:

        Oops! Google Chrome could not connect to notify.mysite.com:1740

    Somehow this took out all my other sites. Even the ones that were responding on port 80 no longer respond, even though I did not touch their virtual hosts. I now get this message:

        Oops! Google Chrome could not connect to mysite.com

    However, ping responds:

        ping mysite.com
        PING mysite.com (64.135.12.134): 56 data bytes
        64 bytes from 64.135.12.134: icmp_seq=0 ttl=49 time=20.839 ms
        64 bytes from 64.135.12.134: icmp_seq=1 ttl=49 time=20.489 ms

    The result of telnet:

        $ telnet guarddoggps.com 80
        Trying 64.135.12.134...
        telnet: connect to address 64.135.12.134: Connection refused
        telnet: Unable to connect to remote host
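
    The telnet test above is the right instinct: it separates Apache configuration problems from plain connectivity problems. The same check from Python, run from a remote machine, would be a sketch like this (IP and port taken from the post):

        import socket

        # Quick remote check: is anything accepting connections on port 1740?
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(5)
        try:
            s.connect(("64.135.12.134", 1740))
            print("port 1740 is open")
        except OSError as exc:
            print("connection failed:", exc)
        finally:
            s.close()

    "Connection refused" on both 80 and 1740 means nothing is listening (or a firewall is rejecting), which points at Apache failing to bind rather than at DNS.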

  • How can I minimize the amount my router slows down my Internet connection speed?

    - by Lord Torgamus
    Background: I'm working with what I assume is a pretty common Internet setup - a cable modem, a wireless router, and a few Internet-connected devices. Lately I've started being more demanding of my Internet connection, and I noticed that using my router slows down my download speeds considerably. I just kind of dealt with it, until Zune Marketplace on the Xbox 360 told me that a movie was going to take well over ten hours to download, and I just didn't want to wait that long.

    The test: good little scientist that I am, I tried to reduce the problem down to one variable. As a control, I turned off all the devices in the house that use wireless Internet and unplugged all the wired devices except for the Xbox. I also power-cycled both the modem and the router. I then tried to download the movie again and was told that it would still take over ten hours. Next, I unplugged the router and connected the Xbox directly to the modem. The movie downloaded in just over one hour. As far as I can tell, this means that my ISP, other cable users near me, the remote servers, anything wireless-related, and my machines' disk speeds can't be at fault. A similar experiment that replaced the Xbox with a wired laptop produced similar results. To me, this says "the router is responsible for things taking around ten times longer to download."

    My question: I'd still prefer to use the router, for a few reasons:

    - it's a pain to connect and disconnect everything every time there's a big file to download
    - a direct connection to the modem isn't good for security
    - only one machine can be connected directly to the modem at a time

    What can I do to have fast connection speeds while still using the router? I don't mind turning other machines off, as long as I don't have to mess with power and ethernet cables.

    EDIT: After asking this followup question and then this one, I installed DD-WRT on my router, and I seem to be getting higher and more consistent speeds. Perhaps more importantly, my memory use is fairly constant. I know this isn't an answer - which is why I'm not posting it as an answer - but it is how I resolved the situation, and hopefully it'll be helpful for someone.

  • Reading the file name from user input in MIPS assembly

    - by Hassan Al-Jeshi
    I'm writing a MIPS assembly program that asks the user for a file name and produces some statistics about the content of the file. When I hard-code the file name into a variable from the beginning, it works just fine, but when I ask the user to input the file name, it does not work. After some debugging, I discovered that the program adds a 0x00 char and a 0x0a char (check asciitable.com) at the end of the user input in memory, and that's why it does not open the file based on the user input. Does anyone have any idea how to get rid of those extra chars, or how to open the file after getting its name from the user?

    Here is my complete code. It works fine except for reading the file name from the user, and anybody is free to use it for any purpose he/she wants:

        .data
        fin:    .ascii ""       # filename for input
        msg0:   .asciiz "aaaa"
        msg1:   .asciiz "Please enter the input file name:"
        msg2:   .asciiz "Number of Uppercase Char: "
        msg3:   .asciiz "Number of Lowercase Char: "
        msg4:   .asciiz "Number of Decimal Char: "
        msg5:   .asciiz "Number of Words: "
        nline:  .asciiz "\n"
        buffer: .asciiz ""

        .text
        #-----------------------
                li $v0, 4
                la $a0, msg1
                syscall
                li $v0, 8
                la $a0, fin
                li $a1, 21
                syscall
                jal fileRead            # read from file
                move $s1, $v0           # $t0 = total number of bytes
                li $t0, 0               # Loop counter
                li $t1, 0               # Uppercase counter
                li $t2, 0               # Lowercase counter
                li $t3, 0               # Decimal counter
                li $t4, 0               # Words counter
        loop:
                bge $t0, $s1, end       # if end of file reached OR if there is an error in the file
                lb $t5, buffer($t0)     # load next byte from file
                jal checkUpper          # check for upper case
                jal checkLower          # check for lower case
                jal checkDecimal        # check for decimal
                jal checkWord           # check for words
                addi $t0, $t0, 1        # increment loop counter
                j loop
        end:
                jal output
                jal fileClose
                li $v0, 10
                syscall

        fileRead:
                # Open file for reading
                li $v0, 13              # system call for open file
                la $a0, fin             # input file name
                li $a1, 0               # flag for reading
                li $a2, 0               # mode is ignored
                syscall                 # open a file
                move $s0, $v0           # save the file descriptor
                # reading from file just opened
                li $v0, 14              # system call for reading from file
                move $a0, $s0           # file descriptor
                la $a1, buffer          # address of buffer from which to read
                li $a2, 100000          # hardcoded buffer length
                syscall                 # read from file
                jr $ra

        output:
                li $v0, 4
                la $a0, msg2
                syscall
                li $v0, 1
                move $a0, $t1
                syscall
                li $v0, 4
                la $a0, nline
                syscall
                li $v0, 4
                la $a0, msg3
                syscall
                li $v0, 1
                move $a0, $t2
                syscall
                li $v0, 4
                la $a0, nline
                syscall
                li $v0, 4
                la $a0, msg4
                syscall
                li $v0, 1
                move $a0, $t3
                syscall
                li $v0, 4
                la $a0, nline
                syscall
                li $v0, 4
                la $a0, msg5
                syscall
                addi $t4, $t4, 1
                li $v0, 1
                move $a0, $t4
                syscall
                jr $ra

        checkUpper:
                blt $t5, 0x41, L1       # branch if less than 'A'
                bgt $t5, 0x5a, L1       # branch if greater than 'Z'
                addi $t1, $t1, 1        # increment Uppercase counter
        L1:     jr $ra

        checkLower:
                blt $t5, 0x61, L2       # branch if less than 'a'
                bgt $t5, 0x7a, L2       # branch if greater than 'z'
                addi $t2, $t2, 1        # increment Lowercase counter
        L2:     jr $ra

        checkDecimal:
                blt $t5, 0x30, L3       # branch if less than '0'
                bgt $t5, 0x39, L3       # branch if greater than '9'
                addi $t3, $t3, 1        # increment Decimal counter
        L3:     jr $ra

        checkWord:
                bne $t5, 0x20, L4       # branch if 'space'
                addi $t4, $t4, 1        # increment words counter
        L4:     jr $ra

        fileClose:
                # Close the file
                li $v0, 16              # system call for close file
                move $a0, $s0           # file descriptor to close
                syscall                 # close file
                jr $ra

    Note: I'm using the MARS simulator, if that makes any difference.

  • Android RelativeLayout fill_parent unexpected behavior in a ListView with varying row heights

    - by Jameel Al-Aziz
    I'm currently working on a small update to a project, and I'm having an issue with RelativeLayout and fill_parent in a ListView. I'm trying to insert a divider between two sections in each row, much like the divider in the call log of the default dialer. I checked the Android source code to see how they did it, but I encountered a problem when replicating their solution. To start, here is my row item layout:

        <?xml version="1.0" encoding="utf-8"?>
        <RelativeLayout android:id="@+id/RelativeLayout01"
            android:layout_width="fill_parent"
            xmlns:android="http://schemas.android.com/apk/res/android"
            android:padding="10dip"
            android:layout_height="fill_parent"
            android:maxHeight="64dip"
            android:minHeight="?android:attr/listPreferredItemHeight">
            <ImageView android:id="@+id/infoimage"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:clickable="true"
                android:src="@drawable/info_icon_big"
                android:layout_alignParentRight="true"
                android:layout_centerVertical="true"/>
            <View android:id="@+id/divider"
                android:background="@drawable/divider_vertical_dark"
                android:layout_marginLeft="11dip"
                android:layout_toLeftOf="@+id/infoimage"
                android:layout_width="1px"
                android:layout_height="fill_parent"
                android:layout_marginTop="5dip"
                android:layout_marginBottom="5dip"
                android:layout_marginRight="4dip"/>
            <TextView android:id="@+id/TextView01"
                android:textAppearance="?android:attr/textAppearanceLarge"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_centerVertical="true"
                android:layout_toRightOf="@+id/ImageView01"
                android:layout_toLeftOf="@+id/divider"
                android:gravity="left|center_vertical"
                android:layout_marginLeft="4dip"
                android:layout_marginRight="4dip"/>
            <ImageView android:id="@+id/ImageView01"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_alignParentLeft="true"
                android:background="@drawable/bborder"
                android:layout_centerVertical="true"/>
        </RelativeLayout>

    The issue I'm facing is that each row has a thumbnail of varying height (ImageView01). If I set the RelativeLayout's layout_height property to fill_parent, the divider does not scale vertically to fill the row (it just remains a 1px dot). If I set layout_height to "?android:attr/listPreferredItemHeight", the divider fills the row, but the thumbnails shrink. I've done some debugging in the getView() method of the adapter, and it seems that the divider's height is not being set properly once the row has its proper height. Here is a portion of the getView() method:

        public View getView(int position, View view, ViewGroup parent) {
            if (view == null) {
                view = inflater.inflate(R.layout.tag_list_item, parent, false);
            }

    The rest of the method simply sets the appropriate text and images for the row. Also, I create the inflater object in the adapter's constructor with:

        inflater = LayoutInflater.from(context);

    Am I missing something essential? Or does fill_parent just not work with dynamic heights?

  • WCF Facility: Metadata publishing for this service is currently disabled

    - by cvista
    Hey, I'm trying to connect to my WCF service, which is configured using Castle's WCF Facility. When I go to the service in a browser, I get:

        Metadata publishing for this service is currently disabled.

    along with a load of instructions that I can't follow, because the configuration isn't in the web.config. When I try to connect using Visual Studio's Add Service Reference, I get:

        The HTML document does not contain Web service discovery information.
        Metadata contains a reference that cannot be resolved: 'http://s.ibzstar.com/userservices.svc'.
        Content Type application/soap+xml; charset=utf-8 was not supported by service http://s.ibzstar.com/userservices.svc. The client and service bindings may be mismatched.
        The remote server returned an error: (415) Cannot process the message because the content type 'application/soap+xml; charset=utf-8' was not the expected type 'text/xml; charset=utf-8'..
        If the service is defined in the current solution, try building the solution and adding the service reference again.

    Does anyone know what I need to do to get this working? The end client is an iPhone app written using MonoTouch, if that matters - so no Castle Windsor on the client side.

    Cheers,
    w://

    Here's the Windsor.config from the service:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <components>
            <component id="eventServices"
                       service="IbzStar.Domain.IEventServices, IbzStar.Domain"
                       type="IbzStar.Domain.EventServices, IbzStar.Domain"
                       lifestyle="transient">
            </component>
            <component id="userServices"
                       service="IbzStar.Domain.IUserServices, IbzStar.Domain"
                       type="IbzStar.Domain.UserServices, IbzStar.Domain"
                       lifestyle="transient">
            </component>
          </components>
        </configuration>

    The web.config section:

        <system.serviceModel>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>
          <services>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="IbzStar.WebServices.Service1Behavior">
                <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
                <serviceMetadata httpGetEnabled="true"/>
                <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
                <serviceDebug includeExceptionDetailInFaults="false"/>
              </behavior>
            </serviceBehaviors>
          </behaviors>

    My App_Start contains this:

        Container = new WindsorContainer(new XmlInterpreter(new ConfigResource()))
            .AddFacility<WcfFacility>()
            .Install(Configuration.FromXmlFile("Windsor.config"));

    As for the client config - I'm using the wizard to add the service.

  • Combine config parameters with parameters passed from the command line

    - by Frederik
    I have created an SSIS package that imports a file into a table (simple enough). I have some variables; a few are set in a config file, such as server, database, and import folder. At runtime I want to pass the file name. This is done through a stored procedure using dtexec. Setting the parameters through the config file works fine, and so does setting all parameters in the procedure and passing them with the \Set statement (see below). But when I try to combine the config-file version with setting parameters on the fly, I get an error referring to the config file's path that was set at design time. Has anybody come across this and found a solution for it?

    Regards, Frederik

        DECLARE @SSISSTR VARCHAR(8000),
                @DataBaseServer VARCHAR(100),
                @DataBaseName VARCHAR(100),
                @PackageFilePath VARCHAR(200),
                @ImportFolder VARCHAR(200),
                @HandledFolder VARCHAR(200),
                @ConfigFilePath VARCHAR(200),
                @SSISreturncode INT;

        /* DEBUGGING
        DECLARE @FileName VARCHAR(100), @SelectionId INT
        SET @FileName = 'Test.csv';
        SET @SelectionId = 366;
        */

        SET @PackageFilePath = '/FILE "Y:\SSIS\Packages\PostalCodeSelectionImport\ImportPackage.dtsx" ';
        SET @DataBaseServer = 'STOSWVUTVDB01\DEV_BSE';
        SET @DataBaseName = 'BSE_ODR';
        SET @ImportFolder = '\\Stoswvutvbse01\Application\FileLoadArea\ODR\\';
        SET @HandledFolder = '\\Stoswvutvbse01\Application\FileLoadArea\ODR\Handled\\';
        --SET @ConfigFilePath = '/CONFIGFILE "Y:\SSIS\Packages\PostalCodeSelectionImport\Configuration\DEV_BSE.dtsConfig" ';

        -- now making the "dtexec" command line from dynamic values
        SET @SSISSTR = 'DTEXEC ' + @PackageFilePath; -- + @ConfigFilePath;
        SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::SelectionId].Properties[Value];' + CAST(@SelectionId AS VARCHAR(12));
        SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::DataBaseServer].Properties[Value];"' + @DataBaseServer + '"';
        SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::ImportFolder].Properties[Value];"' + @ImportFolder + '" ';
        SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::DataBaseName].Properties[Value];"' + @DataBaseName + '" ';
        SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::ImportFileName].Properties[Value];"' + @FileName + '" ';
        SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::HandledFolder].Properties[Value];"' + @HandledFolder + '" ';

        -- Now execute the dynamic command line by using EXEC.
        EXEC @SSISreturncode = xp_cmdshell @SSISSTR;

  • Python SOAP using soaplib (server) and suds (client)

    - by Celso Axelrud
    This question is related to: http://stackoverflow.com/questions/1751027/python-soap-server-client

    For SOAP with Python, the usual recommendation is to use soaplib (http://wiki.github.com/jkp/soaplib) as the SOAP server and suds (https://fedorahosted.org/suds/) as the SOAP client. My goal is to create SOAP services in Python that can be consumed by several clients (Java, etc.). I tried the HelloWorld example from soaplib (http://trac.optio.webfactional.com/wiki/HelloWorld). It works well when the client is also using soaplib. Then I tried to use suds as the client consuming the HelloWorld service, and it fails. Why is this happening? Does a soaplib server have problems being consumed by different clients?

    Here is the code for the server:

        from soaplib.wsgi_soap import SimpleWSGISoapApp
        from soaplib.service import soapmethod
        from soaplib.serializers.primitive import String, Integer, Array

        class HelloWorldService(SimpleWSGISoapApp):
            @soapmethod(String, Integer, _returns=Array(String))
            def say_hello(self, name, times):
                results = []
                for i in range(0, times):
                    results.append('Hello, %s' % name)
                return results

        if __name__ == '__main__':
            from cherrypy.wsgiserver import CherryPyWSGIServer
            #from cherrypy._cpwsgiserver import CherryPyWSGIServer
            # this example uses CherryPy 2.2; use cherrypy.wsgiserver.CherryPyWSGIServer for CherryPy 3.0
            server = CherryPyWSGIServer(('localhost', 7789), HelloWorldService())
            server.start()

    This is the soaplib client:

        from soaplib.client import make_service_client
        from SoapServerTest_1 import HelloWorldService

        client = make_service_client('http://localhost:7789/', HelloWorldService())
        print client.say_hello("Dave", 5)

    Results:

        >>> ['Hello, Dave', 'Hello, Dave', 'Hello, Dave', 'Hello, Dave', 'Hello, Dave']

    This is the suds client:

        from suds.client import Client

        url = 'http://localhost:7789/HelloWordService?wsdl'
        client1 = Client(url)
        client1.service.say_hello("Dave", 5)

    Results:

        >>> Unhandled exception while debugging...
        Traceback (most recent call last):
          File "C:\Python25\Lib\site-packages\RTEP\Sequencing\SoapClientTest_1.py", line 10, in <module>
            client1.service.say_hello("Dave",5)
          File "c:\python25\lib\site-packages\suds\client.py", line 537, in __call__
            return client.invoke(args, kwargs)
          File "c:\python25\lib\site-packages\suds\client.py", line 597, in invoke
            result = self.send(msg)
          File "c:\python25\lib\site-packages\suds\client.py", line 626, in send
            result = self.succeeded(binding, reply.message)
          File "c:\python25\lib\site-packages\suds\client.py", line 658, in succeeded
            r, p = binding.get_reply(self.method, reply)
          File "c:\python25\lib\site-packages\suds\bindings\binding.py", line 158, in get_reply
            result = unmarshaller.process(nodes[0], resolved)
          File "c:\python25\lib\site-packages\suds\umx\typed.py", line 66, in process
            return Core.process(self, content)
          File "c:\python25\lib\site-packages\suds\umx\core.py", line 48, in process
            return self.append(content)
          File "c:\python25\lib\site-packages\suds\umx\core.py", line 63, in append
            self.append_children(content)
          File "c:\python25\lib\site-packages\suds\umx\core.py", line 140, in append_children
            cval = self.append(cont)
          File "c:\python25\lib\site-packages\suds\umx\core.py", line 61, in append
            self.start(content)
          File "c:\python25\lib\site-packages\suds\umx\typed.py", line 77, in start
            found = self.resolver.find(content.node)
          File "c:\python25\lib\site-packages\suds\resolver.py", line 341, in find
            frame = Frame(result, resolved=known, ancestry=ancestry)
          File "c:\python25\lib\site-packages\suds\resolver.py", line 473, in __init__
            resolved = type.resolve()
          File "c:\python25\lib\site-packages\suds\xsd\sxbasic.py", line 63, in resolve
            raise TypeNotFound(qref)
        TypeNotFound: Type not found: '(string, HelloWorldService.HelloWorldService, )'

  • Tips / techniques for high-performance C# server sockets

    - by McKenzieG1
    I have a .NET 2.0 server that seems to be running into scaling problems, probably due to poor design of the socket-handling code, and I am looking for guidance on how I might redesign it to improve performance.

    Usage scenario: 50-150 clients, a high rate (up to 100s/second) of small messages (10s of bytes each) to/from each client. Client connections are long-lived - typically hours. (The server is part of a trading system. The client messages are aggregated into groups to send to an exchange over a smaller number of "outbound" socket connections, and acknowledgment messages are sent back to the clients as each group is processed by the exchange.) The OS is Windows Server 2003, and the hardware is 2 x 4-core X5355.

    Current client socket design: a TcpListener spawns a thread to read each client socket as clients connect. The threads block on Socket.Receive, parsing incoming messages and inserting them into a set of queues for processing by the core server logic. Acknowledgment messages are sent back out over the client sockets using async Socket.BeginSend calls from the threads that talk to the exchange side.

    Observed problems: as the client count has grown (now 60-70), we have started to see intermittent delays of up to 100s of milliseconds while sending and receiving data to/from the clients. (We log timestamps for each acknowledgment message, and we can see occasional long gaps in the timestamp sequence for bunches of acks from the same group that normally go out in a few ms total.) Overall system CPU usage is low (< 10%), there is plenty of free RAM, and the core logic and the outbound (exchange-facing) side are performing fine, so the problem seems to be isolated to the client-facing socket code. There is ample network bandwidth between the server and clients (gigabit LAN), and we have ruled out network or hardware-layer problems.

    Any suggestions or pointers to useful resources would be greatly appreciated. If anyone has any diagnostic or debugging tips for figuring out exactly what is going wrong, those would be great as well.

    Note: I have the MSDN Magazine article "Winsock: Get Closer to the Wire with High-Performance Sockets in .NET", and I have glanced at the Kodart "XF.Server" component - it looks sketchy at best.

  • How can I use WCF with only basichttpbinding, SSL and Basic Authentication in IIS?

    - by Tim
    Hello,

    Is it possible to set up a WCF service with SSL and Basic Authentication in IIS using only the BasicHttpBinding binding? (I can't use the wsHttpBinding binding.)

    The site is hosted on IIS 7 with the following authentication set up:

    - Anonymous access: off
    - Basic authentication: on
    - Integrated Windows authentication: off

    Service config:

        <services>
          <service name="NameSpace.SomeService">
            <host>
              <baseAddresses>
                <add baseAddress="https://hostname/SomeService/" />
              </baseAddresses>
            </host>
            <!-- Service Endpoints -->
            <endpoint address=""
                      binding="basicHttpBinding"
                      bindingNamespace="http://hostname/SomeMethodName/1"
                      contract="NameSpace.ISomeInterfaceService"
                      name="Default" />
            <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange"/>
          </service>
        </services>
        <behaviors>
          <serviceBehaviors>
            <behavior>
              <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
              <serviceMetadata httpsGetEnabled="true"/>
              <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
              <serviceDebug includeExceptionDetailInFaults="false"/>
              <exceptionShielding/>
            </behavior>
          </serviceBehaviors>
        </behaviors>

    I tried two types of bindings, with two different errors.

    1 - IIS error: "Could not find a base address that matches scheme http for the endpoint with binding BasicHttpBinding. Registered base address schemes are [https]."

        <bindings>
          <basicHttpBinding>
            <binding>
              <security mode="TransportCredentialOnly">
                <transport clientCredentialType="Basic"/>
              </security>
            </binding>
          </basicHttpBinding>
        </bindings>

    2 - IIS error: "Security settings for this service require 'Anonymous' Authentication but it is not enabled for the IIS application that hosts this service."

        <bindings>
          <basicHttpBinding>
            <binding>
              <security mode="Transport">
                <transport clientCredentialType="Basic"/>
              </security>
            </binding>
          </basicHttpBinding>
        </bindings>

    Does somebody know how to configure this correctly (if possible)?

    Read the article

  • Objective-C Out of scope problem

    - by davbryn
    Hi, I'm having a few problems with some Objective-C and would appreciate some pointers. I have a class MapFileGroup with the following simple interface (there are other member variables, but they aren't important):

        @interface MapFileGroup : NSObject {
            NSMutableArray *mapArray;
        }
        @property (nonatomic, retain) NSMutableArray *mapArray;

    mapArray is @synthesize'd in the .m file. The class has an init method:

        - (MapFileGroup*)init {
            self = [super init];
            if (self) {
                mapArray = [NSMutableArray arrayWithCapacity:10];
            }
            return self;
        }

    It also has a method for adding a custom object to the array:

        - (BOOL)addMapFile:(MapFile*)mapfile {
            if (mapfile == nil)
                return NO;
            [mapArray addObject:mapfile];
            return YES;
        }

    The problem comes when I want to use this class - obviously due to a misunderstanding of memory management on my part. In my view controller's @interface I declare:

        MapFileGroup *fullGroupOfMaps;

    with the property:

        @property (nonatomic, retain) MapFileGroup *fullGroupOfMaps;

    Then in the .m file I have a function called loadMapData that does the following:

        MapFileGroup *mapContainer = [[MapFileGroup alloc] init];

        // create a predicate that we can use to filter an array
        // for all strings ending in .png (case insensitive)
        NSPredicate *caseInsensitivePNGFiles = [NSPredicate predicateWithFormat:@"SELF endswith[c] '.png'"];
        mapNames = [unfilteredArray filteredArrayUsingPredicate:caseInsensitivePNGFiles];
        [mapNames retain];

        NSEnumerator *enumerator = [mapNames objectEnumerator];
        NSString *currentFileName;
        NSString *nameOfMap;
        MapFile *mapfile;

        while (currentFileName = [enumerator nextObject]) {
            nameOfMap = [currentFileName substringToIndex:[currentFileName length] - 4]; // strip the extension
            mapfile = [[MapFile alloc] initWithName:nameOfMap];
            [mapfile retain];

            // add to array
            [fullGroupOfMaps addMapFile:mapfile];
        }

    This seems to work OK (though I can tell I've not got the memory management right; I'm still learning Objective-C). However, I have an (IBAction) that interacts with fullGroupOfMaps later. It calls a method on fullGroupOfMaps, but if I step into the class from that line while debugging, all of fullGroupOfMaps's objects are now out of scope and I get a crash.

    Apologies for the long question and the large amount of code, but my main question is: how should I handle a class with an NSMutableArray as an instance variable? What is the proper way of creating objects to be added to the class so that they don't get freed before I'm done with them? Many thanks

    Read the article

  • ASP.NET MVC2: Getting textbox data from a view to a controller

    - by mr_plumley
    Hi, I'm having difficulty getting data from a textbox into a controller. I've read about a few ways to accomplish this in Sanderson's book, Pro ASP.NET MVC Framework, but haven't had any success. I've also run across a few similar questions online, but haven't had any success there either. It seems like I'm missing something fundamental. Currently, I'm trying to use the action-method-parameters approach. Can someone point out where I'm going wrong or provide a simple example? Thanks in advance!

    I'm using Visual Studio 2008, ASP.NET MVC2, and C#. What I would like to do is take the data entered in the "Investigator" textbox and use it to filter investigators in the controller. I plan on doing this in the List method (which is already functional), but I'm using the SearchResults method for debugging. Here's the textbox code from my view, SearchDetails:

        <h2>Search Details</h2>
        <% using (Html.BeginForm()) { %>
            <fieldset>
                <%= Html.ValidationSummary() %>
                <h4>Investigator</h4>
                <p>
                    <%= Html.TextBox("Investigator") %>
                    <%= Html.ActionLink("Search", "SearchResults") %>
                </p>
            </fieldset>
        <% } %>

    Here is the code from my controller, InvestigatorsController:

        private IInvestigatorsRepository investigatorsRepository;

        public InvestigatorsController(IInvestigatorsRepository investigatorsRepository)
        {
            // IoC:
            this.investigatorsRepository = investigatorsRepository;
        }

        public ActionResult List()
        {
            return View(investigatorsRepository.Investigators.ToList());
        }

        public ActionResult SearchDetails()
        {
            return View();
        }

        public ActionResult SearchResults(SearchCriteria search)
        {
            string test = search.Investigator;
            return View();
        }

    I have an Investigator class:

        [Table(Name = "INVESTIGATOR")]
        public class Investigator
        {
            [Column(IsPrimaryKey = true, IsDbGenerated = false, AutoSync = AutoSync.OnInsert)]
            public string INVESTID { get; set; }

            [Column]
            public string INVEST_FNAME { get; set; }

            [Column]
            public string INVEST_MNAME { get; set; }

            [Column]
            public string INVEST_LNAME { get; set; }
        }

    and I created a SearchCriteria class to see if I could get MVC to push the search criteria data to it and grab it in the controller:

        public class SearchCriteria
        {
            public string Investigator { get; set; }
        }

    I'm not sure if the project layout has anything to do with this, but I'm using the three-project approach suggested by Sanderson: DomainModel, Tests, and WebUI. The Investigator and SearchCriteria classes are in the DomainModel project, and the other items mentioned here are in the WebUI project. Thanks again for any hints, tips, or simple examples! Mike
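    One detail worth noting in the view above: Html.ActionLink renders a plain hyperlink, so clicking it issues a GET request that carries no form fields. A minimal sketch of the action-method-parameters approach wired up with a form post instead (one common arrangement, not necessarily the poster's intended design):

        <!-- View: post the form to SearchResults with a submit button -->
        <% using (Html.BeginForm("SearchResults", "Investigators")) { %>
            <%= Html.TextBox("Investigator") %>
            <input type="submit" value="Search" />
        <% } %>

        // Controller: the default model binder matches the "Investigator"
        // form field to SearchCriteria.Investigator by name.
        [HttpPost]
        public ActionResult SearchResults(SearchCriteria search)
        {
            string investigatorFilter = search.Investigator;
            return View();
        }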

    Read the article

  • Databinding to ObservableCollection in a different UserControl?

    - by Dave
    Question re-written on 2010-03-24.

    I have two UserControls, where one is a dialog that has a TabControl, and the other is one that appears within said TabControl. I'll call them CandyDialog and CandyNameViewer for simplicity's sake. There's also a data management class called Tracker that handles information storage; for all intents and purposes it just exposes a public property that is an ObservableCollection. I display the CandyNameViewer in CandyDialog via code-behind, like this:

        private void CandyDialog_Loaded(object sender, RoutedEventArgs e)
        {
            _candyviewer = new CandyViewer();
            _candyviewer.DataContext = _tracker;
            candy_tab.Content = _candyviewer;
        }

    The CandyViewer's XAML looks like this (edited for Kaxaml):

        <Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
              xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
          <Page.Resources>
            <DataTemplate x:Key="CandyItemTemplate">
              <Grid>
                <Grid.ColumnDefinitions>
                  <ColumnDefinition Width="120"></ColumnDefinition>
                  <ColumnDefinition Width="150"></ColumnDefinition>
                </Grid.ColumnDefinitions>
                <TextBox Grid.Column="0" Text="{Binding CandyName}" Margin="3"></TextBox>
                <!-- just binding to DataContext ends up using InventoryItem as parent,
                     so we need to get to the UserControl -->
                <ComboBox Grid.Column="1"
                          SelectedItem="{Binding SelectedCandy, Mode=TwoWay}"
                          ItemsSource="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type UserControl}}, Path=DataContext.CandyNames}"
                          Margin="3"></ComboBox>
              </Grid>
            </DataTemplate>
          </Page.Resources>
          <Grid>
            <ListBox DockPanel.Dock="Top"
                     ItemsSource="{Binding CandyBoxContents, Mode=TwoWay}"
                     ItemTemplate="{StaticResource CandyItemTemplate}" />
          </Grid>
        </Page>

    Everything works fine when the controls are loaded. As long as CandyNames is populated before the consumer UserControl is displayed, all of the names are there, and I get no errors in the Output window or anything like that.

    The issue is that when the ObservableCollection is modified from the model, those changes are not reflected in the consumer UserControl! I've never had this problem before; all of my previous uses of ObservableCollection updated fine, although in those cases I wasn't databinding across assemblies. Although I am currently only adding and removing candy names to/from the ObservableCollection, at a later date I will likely also allow renaming from the model side.

    Is there something I did wrong? Is there a good way to actually debug this? Reed Copsey indicates here that inter-UserControl databinding is possible. Unfortunately, my favorite Bea Stollnitz article on debugging WPF databinding doesn't suggest anything that I could use for this particular problem.
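    As background on the binding mechanics, a sketch of a Tracker-like class (hypothetical - the poster's actual Tracker is not shown). WPF only sees collection changes that happen on the same ObservableCollection instance the binding captured, or learns of a replaced instance through a property-change notification; a class along these lines keeps both paths working:

        using System.Collections.ObjectModel;
        using System.ComponentModel;

        // Hypothetical Tracker: the collection is created once and mutated in
        // place, so bindings that captured it receive CollectionChanged events.
        // If the instance is ever replaced, PropertyChanged tells bindings to
        // re-read the property.
        public class Tracker : INotifyPropertyChanged
        {
            private ObservableCollection<string> _candyNames = new ObservableCollection<string>();

            public event PropertyChangedEventHandler PropertyChanged;

            public ObservableCollection<string> CandyNames
            {
                get { return _candyNames; }
                set
                {
                    _candyNames = value;
                    PropertyChangedEventHandler handler = PropertyChanged;
                    if (handler != null)
                        handler(this, new PropertyChangedEventArgs("CandyNames"));
                }
            }

            public void AddCandy(string name)
            {
                _candyNames.Add(name);   // raises CollectionChanged; UI updates in place
            }
        }

    If a model swaps in a new collection instance without such a notification, the UI keeps watching the old instance, which would match the symptom described.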

    Read the article

< Previous Page | 457 458 459 460 461 462 463 464 465 466 467 468  | Next Page >