Search Results

Search found 12670 results on 507 pages for 'ie tweaker plus'.

  • Random Computer Crashes

    - by Josh W.
    Ok, here's a weird one for you all. Occasionally my PC here at work will crash in a very peculiar way. My dual monitors will suddenly go blank as if there is no longer a video signal, the USB mouse light goes dark and the mouse stays unresponsive, and the keyboard lights will not change status when the appropriate keys are pressed (Num/Caps/Scroll Lock). The CD tray WILL open and close, but the computer will not respond to a ping request. For all intents and purposes it's as if the computer is off, except it wasn't intentional by me. The power light and internal fans are still on, and I've now lost any unsaved work.

    Now here's where it gets weird. This PC is part of a batch of PCs we got from a local vendor who does our initial system builds. Mine and six other co-workers' PCs all have the same issue. Originally we thought it was a bad combination of hardware, but through trial and error the only things we haven't eliminated are the OS, motherboard and CPU. The problem was so bad for some of them that they ended up going back to their five-year-old dinosaurs in order to get some work done. For me the problem isn't as bad, maybe once every other day or so, but still enough to bite me in the ass if I've forgotten my ritualistic pressing of Ctrl-S every 1-5 minutes.

    In this case we've tried two different video cards, two different power supplies, two different memory configurations, running on a UPS/not on a UPS, updating/rolling back video drivers, and three different BIOS revisions. The only things we haven't swapped are the motherboard and CPU, mainly because a new motherboard means a new Hardware Abstraction Layer, i.e. a re-install of Windows, and there's a lot of other software on this PC that takes forever to reload by hand. There was a base image that our systems team created with all the drivers installed and the basic setup of software our company uses, but they then must customize the setup for us programmers, so it takes a while to get a new configuration up and going. I'm a programmer by day and am usually pretty good at diagnosing computer problems, whether through trial and error or not. We've pretty much exhausted all the ideas we can think of here, short of a new motherboard/CPU. I was hoping someone out there might have anything else we can try.

    Relevant parts:

        OS: XP Pro 32-bit
        Motherboard: Intel DG41RQ
        CPU: Intel Core 2 Quad Q9400 @ 2.66GHz
        Current BIOS version/date: Intel Corp. RQG4110H.86A.0014.2010.0306.1151, 3/6/2010
        Dual LCDs: ViewSonic VG930m & Samsung SyncMaster 910v (other people have
          different models, but listed in case there's some very weird problem with
          the signals being sent/received)
        Keyboard/mouse: PS/2 keyboard, USB Microsoft IntelliMouse
        BIOS versions tried: R 0013 (12/23/2009), R 0014 (3/6/2010)
        Video cards tried: GeForce 8400 GS, Radeon HD 4350 (ASUS EAH4350)
        Power supplies tried: a 380W and a 550W
        RAM configurations tried: 2GB (1 x 2GB), 4GB (2 x 2GB)

    Read the article

  • Cannot open simplest mac application

    - by streetpc
    I created a very simple app, which is only a wrapper of a shell script (so that I can select this script in application selectors, like startup apps). I tried to launch it, and yesterday it worked, but today I changed the executable script's content and name (to something that works perfectly as a shell script launched in the Terminal) and now it will only display a Finder-iconed dialog saying "Cannot open the application because it is not supported on this kind of Mac." I restored the previous script (content/name) but I still get the error! Same when re-bundling the app from scratch, or completely changing the bundle identifier…

    If I try to open it in the Terminal using open My.app, I get "The application cannot be opened because it has an incorrect executable format." But when I execute Contents/MacOS/Script directly, it always works (with both contents). Also, it is displayed with the correct icon and meta-information in the Finder (so I guess the Info.plist is understood). The app's file tree is:

        Contents/
            Info.plist
            MacOS/
                Script   (executable bit set, works when launched directly)
            PkgInfo
            Resources/
                AppIcon.icns

    Here is the Info.plist content:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>CFBundleExecutable</key>
            <string>Script</string>
            <key>CFBundleIconFile</key>
            <string>AppIcon</string>
            <key>CFBundleIdentifier</key>
            <string>asdf.ScriptApp</string>
            <key>CFBundleInfoDictionaryVersion</key>
            <string>6.0</string>
            <key>CFBundleName</key>
            <string>My script</string>
            <key>CFBundlePackageType</key>
            <string>APPL</string>
            <key>CFBundleShortVersionString</key>
            <string>1.0</string>
            <key>CFBundleSignature</key>
            <string>????</string>
            <key>CFBundleVersion</key>
            <string>1</string>
            <key>LSMinimumSystemVersion</key>
            <string>10.4</string>
        </dict>
        </plist>

    And the PkgInfo file only contains APPL????. I tested the Script with a simple echo "ok" and echo "ok" >/tmp/test (plus a #!/bin/sh header). So my questions are:

    - Is there some kind of validity caching for applications? Based on what? How do I flush it?
    - Where does this message come from? I tried to google it but all I get is a page talking about 32/64 bit Java…
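
    On the caching question: Launch Services keeps a registration database of application metadata, and rebuilding it is the usual way to flush stale entries. A sketch, assuming a stock 10.5/10.6 install; the lsregister path varies between OS releases, so verify it exists before relying on it:

        # rebuild the Launch Services database, then re-register the bundle
        LSREG="/System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister"
        "$LSREG" -kill -r -domain local -domain system -domain user
        # force re-registration of the app and nudge Finder's cached metadata
        "$LSREG" -f My.app
        touch My.app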

    Read the article

  • VMware vSphere cluster design for site redundancy

    - by Stefan Radovanovici
    I have a question about the best design for site redundancy when using vSphere clusters. A bit of background about our situation first, though. We are a medium-sized company with two main offices, located in different countries. Our networks are linked by a Layer 2 150Mbps leased line which is currently underused. We have a variety of services running for internal use within the company, some on physical servers and some on existing vSphere clusters. In our department we also run several services (almost all running under various forms of Linux) like NTP, syslog, jump servers, monitoring servers and so on. We now have the requirement that those servers need to be redundant within each location (which they are not at the moment) and also site-redundant (which they are to some extent; the servers are duplicated in the 2nd location with configurations kept in sync via various methods at the application layer).

    There is no SAN available for us, at least not something that we can use at the moment. Cost is also an issue. While we do have some budget available for this, we can't afford to buy SANs for both locations, for example. I looked at the VSA feature and it seems that this could be something for us, but I am unsure how to solve the site-redundancy requirement. At the moment, for testing purposes, I am setting up vSphere 5 with VSA on two ESXi hosts in a lab. I am currently using the Essentials Plus kit with a VSA license, which allows me to build a VSA cluster on up to 3 hosts, together with a vCenter license to manage them. The hosts each have two dual-port network cards and two 600GB drives running in RAID 1/0. Hardware-wise this will be enough for us to run all the services we need as VMs and will provide redundancy within the site. At the moment I see only two options to get site redundancy:

    1. Build an identical VSA cluster in the second location and keep the various services synced at the application layer (database sync, rsync and so on).
    2. Simply move one of the hosts from the existing cluster to the second location, basically having the VSA cluster span the 150Mbps link between the sites.

    I would very much prefer the second option, but I am unsure how well it'll work, if it can work at all. Technically it should: we can span the needed VLANs across the leased line and have them available in the second location. The advantage would be that we don't need to worry at all about syncing databases and the like. But I have the feeling that the bandwidth will not be enough, and I have no way of knowing how much traffic the VSA cluster will generate between the hosts. I realize that this will most likely depend on the individual usage of the VMs, but still, I have no idea how VSA replicates data between the ESXi hosts.

    Are these my only options, or can my goals be achieved in some other way? Is there perhaps a way to have some sort of "cold standby" cluster in the second location where the VMs would be synced once per night from the main location? The idea is that in case the first site becomes unavailable, we would be able to bring all those VMs online there. We would be OK with the data being 1 day old. Any answers are appreciated. Best regards, Stefan

    Read the article

  • Tool or script to detect moved or renamed files on Linux prior to a backup

    - by Pharaun
    Basically I am searching to see if there exists a tool or script that can detect moved or renamed files, so that I can get a list of renamed/moved files and apply the same operations on the other end of the network to conserve bandwidth. Disk storage is cheap but bandwidth isn't, and the problem is that the files often get reorganized or moved around into a better directory structure. So when you use rsync to do the backup, rsync won't notice that it's a renamed or moved file and will re-transmit it over the network all over again, despite having the same file on the other end. So I am wondering if there exists a script or tool that can record where all the files are and their names, then, just prior to a backup, rescan and detect moved or renamed files; I could then take that list and re-apply the move/rename operations on the other side. Here's a list of the "general" features of the files:

    - Large, unchanging files
    - They can be renamed or moved around

    [Edit:] These are all good answers, and what I ended up doing was looking at all of the answers; I will be writing some code to deal with this. Basically what I am thinking/working on now is:

    1. Using something like AIDE for the "initial" scan, which lets me keep checksums on the files. They are supposed to never change, so it would aid in detecting corruption.
    2. Creating an inotify daemon that would monitor these files/directories and record any changes relating to renames and moving the files around to a log file (see the sketch below).
    3. There are some edge cases where inotify might fail to record that something happened to the file system, so there is a final step of using find to search the file system for files that have a change time later than the last backup.

    This has several benefits:

    - Checksums/etc. from AIDE make it possible to check that some media did not get corrupted
    - inotify keeps resource usage low, with no need to re-scan the filesystem over and over
    - No need to patch rsync; if I have to patch things I can, but I would prefer to avoid patching things to keep the burden lower (i.e. no need to re-patch every time there is an update)

    I've used Unison before and it's really nice, however I could've sworn that Unison keeps copies around on the filesystem and that its "archive" files can grow to be rather large?
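
    A minimal sketch of the inotify logging step from point 2, assuming the inotify-tools package provides inotifywait; the watch directory and log path are placeholders. MOVED_FROM/MOVED_TO lines must later be paired up to reconstruct each rename:

        #!/bin/sh
        # Watch the archive tree and append every move/rename event to a log
        # that can be replayed on the remote side before the next rsync run.
        WATCH_DIR=/data/archive          # placeholder
        LOG=/var/log/file-moves.log      # placeholder
        inotifywait -m -r \
            -e moved_from -e moved_to \
            --format '%e|%w%f' \
            "$WATCH_DIR" >> "$LOG"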

    Read the article

  • I want to virtualize my workstation (Tier 1), Looking for Bare Metal Hypervisor for consumer grade components

    - by Chase Florell
    I find myself in a similar bind at least once a year: the bind whereby I'm either upgrading a motherboard or an OS hard drive. It drives me crazy to have to reinstall Windows, Visual Studio, all my add-ins, reconfigure my settings etc... every single time. I have a layout I like and I want to stick with it. My question is: is there a bare metal hypervisor on the market that will enable me to virtualize my consumer-grade workstation? I really want to avoid host/client virtualization; bare metal is definitely a better way to go for my needs. Is this a good approach, or am I going to suffer some other undesirable side effects by doing this?

    Clarification: my machine has very limited purposes. My primary use is Visual Studio 2010 Professional, where I develop ASP.NET MVC web applications. The second piece of software that I use (that's system intensive) is Photoshop CS3. Beyond that, my applications are limited to Outlook, Internet Explorer, Firefox, Opera, Chrome, LinqPad, and various other (small) apps. Beyond this, I'm considering working on a node.js project and might run Ubuntu on the same hypervisor if possible.

    System specs:

        Gigabyte motherboard
        Intel i7 920
        12 GB RAM
        Basic 500GB 7200RPM HDD for OS
        4 VelociRaptors in RAID 1/0 for build disk
        Dual GTS 250 (512MB) graphics cards (non-SLI) for quad monitors

    On a side note, I also wouldn't be opposed to an alternative suggestion if the limitations are too great. I could install ESXi (or XenServer) on my box and build a separate "thin client" to RDP into the virtual machine. It appears as though RDP supports dual monitors.

    Edit (Dec 9, 2011): It's been nearly a year since I first asked this question. Since then, there have been a lot of great strides in hypervisor technology... AND MokaFive is now released for corporate use. I'd love to dig into this question a little more and find out if there is a solid bare metal hypervisor for workstations running consumer-grade components (i.e. not Dell, HP, Lenovo, etc.).

    Read the article

  • how to setup kismet.conf on Ubuntu

    - by Registered User
    I installed Kismet on my Ubuntu 10.04 machine with apt-get install kismet, and everything seems to work fine, but when I launch it I see the following error:

        kismet
        Launching kismet_server: //usr/bin/kismet_server
        Suid priv-dropping disabled. This may not be secure.
        No specific sources given to be enabled, all will be enabled.
        Non-RFMon VAPs will be destroyed on multi-vap interfaces (ie, madwifi-ng)
        Enabling channel hopping.
        Enabling channel splitting.
        NOTICE: Disabling channel hopping, no enabled sources are able to change channel.
        Source 0 (addme): Opening none source interface none...
        FATAL: Please configure at least one packet source. Kismet will not function
        if no packet sources are defined in kismet.conf or on the command line.
        Please read the README for more information about configuring Kismet.
        Kismet exiting.
        Done.

    I followed this guide http://www.ubuntugeek.com/kismet-an-802-11-wireless-network-detector-sniffer-and-intrusion-detection-system.html#more-1776, however in kismet.conf I am not clear what I should change the following line to:

        source=none,none,addme

    lspci -vnn shows:

        0c:00.0 Network controller [0280]: Broadcom Corporation BCM4312 802.11b/g [14e4:4315] (rev 01)
            Subsystem: Dell Device [1028:000c]
            Flags: bus master, fast devsel, latency 0, IRQ 17
            Memory at f69fc000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: [40] Power Management version 3
            Capabilities: [58] Vendor Specific Information <?>
            Capabilities: [e8] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
            Capabilities: [d0] Express Endpoint, MSI 00
            Capabilities: [100] Advanced Error Reporting <?>
            Capabilities: [13c] Virtual Channel <?>
            Capabilities: [160] Device Serial Number
            Capabilities: [16c] Power Budgeting <?>
            Kernel driver in use: wl
            Kernel modules: wl, ssb

    and iwconfig shows:

        lo        no wireless extensions.
        eth0      no wireless extensions.
        eth1      IEEE 802.11bg  ESSID:"WIKUCD"
                  Mode:Managed  Frequency:2.462 GHz  Access Point: <00:43:92:21:H5:09>
                  Bit Rate=11 Mb/s   Tx-Power:24 dBm
                  Retry min limit:7   RTS thr:off   Fragment thr:off
                  Encryption key:off
                  Power Management mode:All packets received
                  Link Quality=1/5  Signal level=-81 dBm  Noise level=-90 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:169  Invalid misc:0   Missed beacon:0

    So what should I be putting in place of source=none,none,addme, given the output above?
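
    For reference, the source line has the form source=<capture driver>,<interface>,<descriptive name>. A sketch for this card, with two caveats: exact driver names vary by Kismet release (check the capture-sources list in Kismet's README), and the proprietary Broadcom wl driver generally cannot enter monitor mode, so the open b43 driver would need to be driving the card instead:

        # kismet.conf: assumes the open b43 driver is bound to the BCM4312;
        # "broadcom" is just a label of your choosing
        source=b43,eth1,broadcom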

    Read the article

  • Nginx Reverse Proxy Node.js and Wordpress + Static Files Issue

    - by joemccann
    I have had quite a time trying to get nginx to serve static assets from my WordPress blog. Have a look at the config and let me know if you can help (https://gist.github.com/1130332 has the entire thing):

        server {
            listen 80;
            server_name subprint.com;
            access_log /var/www/subprint/logs/access.log;
            error_log /var/www/subprint/logs/error.log;
            root /var/www/subprint/server/public;

            # express serves static resources for subprint.com out of here
            location / {
                proxy_pass http://127.0.0.1:8124;
                root /var/www/subprint/server;
                access_log on;
            }

            # serve static assets
            location ~* ^(?!\/).+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$ {
                expires max;
                access_log off;
            }

            # the route for the wordpress blog
            # unfortunately the static assets (css, img, etc.) are not being pathed/served properly
            location /blog {
                root /var/www/localhost/public;
                index index.php;
                access_log /var/www/localhost/logs/access.log;
                error_log /var/www/localhost/logs/error.log;
                if (!-e $request_filename) {
                    rewrite ^/(.*)$ /index.php?q=$1 last;
                    break;
                }
                if (!-f $request_filename) {
                    rewrite /blog$ /blog/index.php last;
                    break;
                }
            }

            # actually serves the wordpress and subsequently phpmyadmin
            location ~* (?!\/blog).+\.php$ {
                fastcgi_pass localhost:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/localhost/public$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                include /usr/local/nginx/conf/fastcgi_params;
            }

            # This works fine, but ONLY with a symlink inside the
            # /var/www/localhost/public directory pointing to /usr/share/phpmyadmin
            location /phpmyadmin {
                index index.php;
                access_log /var/www/phpmyadmin/logs/access.log;
                error_log /var/www/phpmyadmin/logs/error.log;
                alias /usr/share/phpmyadmin/;
                if (!-f $request_filename) {
                    rewrite /phpmyadmin$ /phpmyadmin/index.php permanent;
                    break;
                }
            }

            # opt-in to the future
            add_header "X-UA-Compatible" "IE=Edge,chrome=1";
        }

    Read the article

  • Windows 7 & Virtual PC and Internet (gateway) problems on host PC

    - by Mufasa
    I upgraded to Windows 7 on a PC that is a few years old. The CPU was one revision away from having Hyper-V on it, so I had to install Microsoft Virtual PC 2007 (v6.0.156.0) to run full XP instances instead of the seamless XP virtualization that is advertised so much. That's fine though; the 'older' version is useful since I use it to run different versions of the whole XP/IE stack for testing (I'm a web developer)... and for the one 16-bit application we still use at the office for scheduling. *sigh*

    The virtual instances work fine, including networking. My issue is that after a reboot, or after coming out of sleep mode, my host Windows 7 won't connect to the Internet. It will connect to the local network fine. If I disable the "Virtual Machine Network Services" item (I'll call it "VMNS" from here on) in the LAN connection properties box, it starts working, but then the Virtual PC instances lose their network connectivity. If I re-enable VMNS in the same instance, everything works (Internet on host and in the virtualized instances). But after the next reboot/sleep cycle this starts over.

    The route table gave me a clue, though. When doing a cycle with VMNS enabled:

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0          On-link       10.0.3.51      20
                  0.0.0.0          0.0.0.0       10.0.10.10       10.0.3.51     276
        ...

    After VMNS is disabled, the first route goes away. I assume that route is how VMNS intercepts the virtualized instances' network connections and forwards them correctly? Just a guess, though.

    More info: I checked my firewall settings and services (because I'm sort of a control nazi and turn off a lot) but couldn't find anything that made sense, and nothing changed anything when turned back on. So it might be something there I'm missing, but I don't know what.

    My current hacked solution: I figured I'd mess with the routes myself to see if that helped, and it did. If I run route delete 0.0.0.0 to remove the universal (0.0.0.0) gateway routes, and then add back in just the 2nd line with route add 0.0.0.0 mask 0.0.0.0 10.0.10.10 (the one that points to my actual gateway, 10.0.10.10), then I don't have to mess with the disable/enable cycle of VMNS, and everything works. Running those two commands is faster than bringing up connection options and disabling and re-enabling VMNS, but I still don't want to have to use that hack script every boot either. (Oh, and I also tried messing with hard-coding TCP/IP settings in my network adapter, including setting high metrics, etc., but that didn't help either.)

    Any suggestions on the right way to fix this?
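
    The workaround above can at least be scripted; a sketch using only the two commands from the question (run from an elevated prompt; whether a persistent route added with route's -p flag would survive VMNS re-inserting its own 0.0.0.0 entry is untested):

        @echo off
        rem Drop both 0.0.0.0 default routes, then restore only the one
        rem pointing at the real gateway (10.0.10.10, from the question).
        route delete 0.0.0.0
        route add 0.0.0.0 mask 0.0.0.0 10.0.10.10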

    Read the article

  • Can you set up a gaming LAN using OpenVPN installed in a VMware guest OS and be playing the game on the host OS?

    - by Coder
    I would like to set up a gaming VPN. I.e. I have some games that work over LAN and would like to play them with people that are not on my LAN. I know I can do this with OpenVPN. My ultimate goal would be to run OpenVPN portably on my host OS and not even need any virtualization. As such, I don't want to install it on my host, but I'm fine with running it portably. I'm even fine with temporarily adding registry keys and then running a .reg file to remove these entries once I'm done.

    To this end I have installed OpenVPN on a virtual machine and diffed the registry. I then manually (using a .reg file) added all the keys that seem important to my host OS and copied the installation folder of OpenVPN onto my host machine. Then I try to run OpenVPN GUI 1.0.3 as a test and it says "Error opening registry for reading (HKLM\SOFTWARE\OpenVPN). OpenVPN is probably not installed". I verified that that key is indeed in the registry with all subkeys and it looks correct. I have tried running the GUI as an administrator and in compatibility mode with no success. I am running Windows 7.

    If this fails, then I would be happy with installing OpenVPN on a virtual machine in VMware, but the key point is that I will be running the game installed on my host machine. The first question for this option is whether it is even possible. The second is that I can't get the VM to have internet access if I use bridging, but I can if I use NAT. Is it possible to do this gaming VPN setup with the VMware guest OS running under NAT?

    Summary of questions:

    - Is it possible to run OpenVPN portably, and if so, what did I miss above?
    - If it's not possible to run it portably, can I set up a gaming LAN by installing OpenVPN in a guest OS with NAT, and how can I do this?
    - If the above is not possible, can I install OpenVPN in a guest using bridging, and if so, how can I set this up with a Windows 7 host and Windows XP guest? Currently I can't get the guest to access the internet in bridging mode, though it works in NAT mode.
    - In general, is there any good documentation on setting up a gaming LAN with OpenVPN (I am using 2.1.4)? I have never set up a VPN of any sort before, so any help would be much appreciated.

    Thanks!
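
    One thing worth checking on the portable-registry idea: on 64-bit Windows 7, a 32-bit OpenVPN GUI reads HKLM\SOFTWARE\Wow6432Node\OpenVPN rather than HKLM\SOFTWARE\OpenVPN, which would produce exactly this "probably not installed" error even though the key looks correct in regedit. A sketch of the kind of .reg file involved; the value names below are from memory and the paths are placeholders, so verify both by exporting the key from the VM where the installer actually ran:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\OpenVPN]
        @="C:\\Portable\\OpenVPN"
        "config_dir"="C:\\Portable\\OpenVPN\\config"
        "config_ext"="ovpn"
        "exe_path"="C:\\Portable\\OpenVPN\\bin\\openvpn.exe"
        "log_dir"="C:\\Portable\\OpenVPN\\log"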

    Read the article

  • understanding my site's DNS records

    - by DaveM
    Firstly, apologies for using the word 'pointage'; this is the word my French domain registrar uses, so I may have used the wrong term. I would like to better understand what is going on in the 'pointage' record on my domain registrar's site. For my (currently empty) web site it reports the following details:

        Type : Host              : Destination
        A    : www.mydomain.org  : 62.210.176.146
        A    : mail.mydomain.org : 84.246.225.176
        MX   : .mydomain.org     : mail.mydomain.org

    I think I understand the MX record; that simply relays anything on to the mail.mydomain.org location. However, why are the destinations for the www and mail hosts different? Even more confusing (for me) is the fact that if I ping either www.mydomain.org or mail.mydomain.org, the ping returns a different IP address. This IP address is consistent with that of my server (i.e. 92.39.247.92).

    So what exactly is going on? I'm sure I could find the information on the web; I've read a few things on the debianhelp site regarding DNS records, and it seems to suggest that the record should be a reverse lookup, but this certainly isn't the reverse of my server's IP. I don't know what I should be looking for, so links to docs and search terms for google will be happily accepted (even though they go against the grain of SO answers to questions). Thanks in advance. David.

    PS. I should add that everything seems to work just fine, and I've just discovered this part of the management page of my registrar.

    Edit: Addition of DNS records and ping results. The DNS record for the site is below. From what I've read there should only really be a single 'A' record, so has something gone wrong? Should I change it (remove the extras and then just point www.facilitee.org -> .facilitee.org and mail.facilitee.org -> .facilitee.org)? Here is the DNS record:

        A   www.facilitee.org     -> 92.39.247.92
        A   .facilitee.org        -> 92.39.247.92
        A   mail.facilitee.org    -> 92.39.247.92
        A   webmail.facilitee.org -> 92.39.247.92
        MX  .facilitee.org        -> mail.facilitee.org

    Ping results:

        ~$ ping www.facilitee.org
        PING www.facilitee.org (92.39.247.92) 56(84) bytes of data.
        64 bytes from vps4576-cloud.dns26.com (92.39.247.92):
        ~$ ping mail.facilitee.org
        PING mail.facilitee.org (92.39.247.92) 56(84) bytes of data.
        64 bytes from vps4576-cloud.dns26.com (92.39.247.92):

    So the DNS and the ping correspond, but the 'pointage' doesn't. How can I get a report of the pointage records other than from my registrar?
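
    On the last question: the live records can be pulled straight from DNS with dig, which is a reasonable cross-check against whatever the registrar's panel shows. A sketch using the domain from the question (dig ships in the dnsutils/bind-utils package):

        # what the world currently resolves
        dig www.facilitee.org A +short
        dig mail.facilitee.org A +short
        dig facilitee.org MX +short
        # find the zone's own name servers, then query one of them directly,
        # bypassing caches (replace the placeholder with a server from the answer)
        dig facilitee.org NS +short
        dig @ns-from-previous-answer www.facilitee.org A +short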

    Read the article

  • My 2009 MacBook Logic board failed - options to proceed and how difficult?

    - by user181061
    Scannerz just gave my MacBook logic board a big fat F! I upgraded from Snow Leopard to Mountain Lion about 3 weeks ago. The system was running short of memory, so I upgraded it, and the system ran fine for about 2 weeks. Yesterday the thing started acting erratically: a lot of spinning beach balls, delays, and then some errors saying files couldn't be read from or written to the drive. I figured the drive was going, because the system is over 3 years old. I ran Scannerz on it and it indicated a lot of errors and irregularities. I rescanned it in cursory mode and none of them were repeatable, just showing up all over the place in different regions of the scan. I went through the docs and they implied either an I/O cable is bad, a connection is damaged, or the logic board is bad.

    I tossed on my backup of Snow Leopard that I cloned from the original hard drive, because I figured Mountain Lion was to blame, and booted from the USB drive with the clone on it. It wasn't. I performed scans on every single port, and errors and irregularities that couldn't be repeated showed up on every single one of them. I then, for kicks, put a CD into the CD player. Scannerz doesn't test optical drives, but I figured surely that will work. No it won't: more spinning beach balls and messages telling me it can't be read. It was working fine 3 days ago.

    I know a lot of people don't like MacBooks, but mine's been great, at least until now. It was working great even with Mountain Lion after the upgrade. The system is a mid-2009 MacBook. In my opinion, it's a complete waste to toss this system. The display is too good, the keyboard works great, and it still looks good; plus this type of MacBook still has the FireWire 400 port, and I use that for Time Machine backups. I've tried reseating the RAM; it didn't do anything. I shut the system down, put in the old RAM, and booted to Snow Leopard; the problems persist.

    Here are my questions:

    1. The Scannerz documentation somewhere said something about the AirPort card not being seated properly, but when I go to iFixit it's apparent, at least I think it's apparent, that this isn't a slot-type AirPort card that the user can easily install or remove. If the cables or connections to the AirPort card are bad, could they be causing this problem?
    2. How about any other connections that can be intermittent, failing or erratic?
    3. Are there any types of resets that I could do to get rid of this?
    4. For any of those that have replaced a logic board on a MacBook, if this really is the culprit, are there any gotchas I need to be aware of?

    As an FYI, I replaced the hard drive on an old iBook @500MHz that I had a long time ago, and I replaced the drive on a 1.33GHz PowerBook about 6 years ago. You have to be careful, but using some of the info on web sites like iFixit it's not that hard. Time consuming, but not that hard. The Intel-based MacBooks look to me like they're easier to service than either of those. I'm thinking about getting a unit off of eBay that matches mine but has something else wrong with it, like a busted display. I REFUSE to buy a new system. A guy at my office has a 2007 Mac Pro and he can't upgrade to Mountain Lion because his system is "obsoleted." That's ridiculous. If you pay nearly $7,500 for a system it shouldn't be trash just because Apple decides they don't have enough money (sorry for the soapbox, but it's true, IMO!) Any input is appreciated.

    Read the article

  • Reliable access to Internet but not local network (not DNS or proxy issues)

    - by Ian Goldby
    I'm looking for help with a Vista Home Premium laptop that has trouble accessing any resource on our home network but accesses the Internet just fine. The setup is this: the Vista laptop and a MacBook Pro connect wirelessly to the router-modem, and a Synology DS212j NAS drive has a wired connection to the router-modem. Devices on the local network are always referred to by IP address, so this cannot be a DNS issue.

    The MacBook Pro connects reliably to the NAS via AFP (network shared folders), SMB (network shared folders) and HTTP. The Vista laptop connects to and browses sites on the Internet without any problems. It can log into the NAS via SMB and list the shared folders (so there is nothing wrong with the login credentials), but when it tries to open any of the folders, Explorer just hangs with the spinning cursor for several minutes and then says "\\192.168.1.64\shared\Photos is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions. The specified network name is no longer available." It can ping the NAS successfully. If I try to open the NAS drive's web interface, the browser just hangs. This is the same with IE, Firefox and Chrome. (There is no proxy.) I can log into the NAS drive with FTP and navigate directories, but when I try to list the contents of a directory with more than a handful of entries, the FTP client hangs.

    I set up a website on the MacBook. The Vista laptop was able to load some of the pages, but loading any of the images was very hit and miss. Images embedded in HTML pages never worked no matter how many times I reloaded the page, but when I linked directly to the image it did load (though several attempts were sometimes needed). I tried all of this with the Windows Firewall turned off, and with AVG turned off. That made no difference.

    I'd really appreciate any suggestions anyone can make. The fact that the Vista laptop has trouble with HTTP and FTP as well as SMB connections suggests to me that this is a problem at the TCP level or below. But don't forget it accesses sites outside the LAN with no problems.

    Read the article

  • Network update solutions for a company of ~20 (5 local, 15 remote)?

    - by Margaret
    Hi all. This is probably going to be a bit up in the air, because we're still in the "reaching towards solutions" phase, but I figured I'd see what you guys had to say. Plus, I honestly know very little about systems and what is good and bad practice.

    My organisation has always more or less worked on the concept of local machines; since it primarily employed contractors who were working from home, each of those people was largely responsible for their own machine, backup procedures and the like. We're now expanding, though we're still reasonably small (we're up to about 20 staff members). Most people still work remotely, but we have a central office where about five people work. We're getting large enough that we're starting to think it would be a good idea to have a central file server and things like that; if someone gets hit by a bus, we want someone else to know where to look for the files to continue their work. A lot of the people who work for us remotely also work on projects for other companies, so I don't want to force them to log in to our server whenever they're on a network. But I do want to make connecting as painless as possible, to improve utilisation.

    The other thing is that we're getting more people who would like to remote into the office server and do their work there. Our current remote connection application is an SSH install that allows people access to the network; the problem is, it's a black box to me, and I've never understood how to even connect to it (despite supposedly being de facto sysadmin). Thus far I've been able to bounce questions about how to get it working to the guy who does know it well, but he's leaving the company soon. So we probably need a solution for this that I actually understand. We were knocking around the idea of implementing a VPN with some form of remote desktop, and someone mentioned that this was largely a matter of purchasing a router capable of it; I'm not sure of the truth of that statement.

    This is what we have in the office:

    - Two shiny new i7 servers, each running Windows Server 2008. The precise eventual layout is still being debated a little, but the current suggestion is that one does the primary database crunching while the other is a warm backup of the databases and also runs Reporting Services. They currently have SQL Server 2008 installed, which is being connected to via the 'sa' account. We're hoping to make each person use their own account (preferably one tied to the 'central' password we set up, so we can use Windows Authentication).
    - An older server, running XP Pro, that we are currently using as a test bed for a project that requires access to older versions of software. This machine is also being used to take backups, but I'm thinking of moving that functionality elsewhere.
    - A spare desktop from a guy who left the company (XP Pro). We're thinking of bumping up the hard disk space and using it as the magical file server that's going to solve everything.
    - Assorted desktops and laptops, at least one for each person in the office (a mix of Win XP and Win 7; occasionally a person who normally works remotely might drop into the office with a laptop running Vista, but it's pretty rare). All are set up with local user accounts at the moment; I don't know if that's the best arrangement.

    Purchasing more hardware is not a big problem, but we figure we might as well make use of what we've got first. Is Active Directory a big magic wand that's going to solve all the world's problems? Is there some other arrangement we should be looking at instead?

    Read the article

  • Set source address to use tun device does not work (Debian Squeeze)

    - by A. Donda
    There have been similar questions on StackExchange, but none of the answers helped me, so I'll try a question of my own. I have a VPN connection via OpenVPN. By default, all traffic is redirected through the tunnel using OpenVPN's "two more specific routes" trick, but I disabled that. My routing table is like this:

        198.144.156.141  192.168.2.1  255.255.255.255  UGH  0  0  0  eth0
        10.30.92.5       0.0.0.0      255.255.255.255  UH   0  0  0  tun1
        10.30.92.1       10.30.92.5   255.255.255.255  UGH  0  0  0  tun1
        192.168.2.0      0.0.0.0      255.255.255.0    U    0  0  0  eth0
        0.0.0.0          10.30.92.5   0.0.0.0          UG   0  0  0  tun1
        0.0.0.0          192.168.2.1  0.0.0.0          UG   0  0  0  eth0

    And the interface configuration is like this:

        # ifconfig
        eth0      Link encap:Ethernet  HWaddr XX-XX-
                  inet addr:192.168.2.100  Bcast:192.168.2.255  Mask:255.255.255.0
                  inet6 addr: fe80::211:9ff:fe8d:acbd/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:394869 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:293489 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:388519578 (370.5 MiB)  TX bytes:148817487 (141.9 MiB)
                  Interrupt:20 Base address:0x6f00

        tun1      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:10.30.92.6  P-t-P:10.30.92.5  Mask:255.255.255.255
                  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
                  RX packets:64 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:67 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:100
                  RX bytes:9885 (9.6 KiB)  TX bytes:4380 (4.2 KiB)

    plus the lo device. The routing table has two default routes: one via eth0 through my local network router (DSL modem) at 192.168.2.1, and another via tun1 through the VPN's gateway. With this configuration, if I connect to a site, the direct route is chosen (because it has fewer hops?):

        # traceroute 8.8.8.8 -n
        traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
         1  192.168.2.1  0.427 ms  0.491 ms  0.610 ms
         2  213.191.89.13  17.981 ms  20.137 ms  22.141 ms
         3  62.109.108.48  23.681 ms  25.009 ms  26.401 ms
        ...

    This is fine, because my goal is to send only traffic from specific applications through the tunnel (esp. transmission, using its -i / bind-address-ipv4 option). To test whether this can work at all, I check it first with traceroute's -s option:

        # traceroute 8.8.8.8 -n -s 10.30.92.6
        traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
         1  * * *
         2  * * *
         3  * * *
        ...

    This I take to mean that connecting with the tunnel's local address as source is not possible. What is possible (though only as root) is to specify the source interface:

        # traceroute 8.8.8.8 -n -i tun1
        traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
         1  10.30.92.1  129.337 ms  297.758 ms  297.725 ms
         2  * * *
         3  198.144.152.17  297.653 ms  297.652 ms  297.650 ms
        ...

    So apparently the tun1 interface is working and it is possible to send packets through it. But selecting the source interface is not implemented in my actual target application (transmission), so I would like to get source address selection to work. What am I doing wrong?
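
    Source-based routing is normally done with iproute2 policy routing rather than the main routing table; a sketch using the addresses from the question (table number 100 is arbitrary):

        # route everything that *originates from* the tunnel address via tun1
        ip route add default via 10.30.92.5 dev tun1 table 100
        ip rule add from 10.30.92.6 lookup 100
        ip route flush cache
        # reverse-path filtering can silently drop the replies; loosen it if needed
        sysctl -w net.ipv4.conf.all.rp_filter=2
        # test: this should now reach the outside via the VPN
        traceroute -n -s 10.30.92.6 8.8.8.8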

    Read the article

  • VPC SSH port forward into private subnet

    - by CP510
    Ok, so I've been racking my brain for DAYS on this dilemma. I have a VPC set up with a public subnet and a private subnet, and the NAT is in place, of course. I can connect via SSH to an instance in the public subnet, as well as to the NAT. I can even SSH-connect to the private instance from the public instance. I changed the SSHD configuration on the private instance to accept both port 22 and an arbitrary port number, 1300. That works fine. But I need to set it up so that I can connect to the private instance directly using the 1300 port number, i.e.

        ssh -i keyfile.pem user@1.2.3.4 -p 1300

    and 1.2.3.4 should route it to the internal server 10.10.10.10.

    Now, I heard iptables is the job for this, so I went ahead, researched, and played around with some routing with it. These are the rules I have set up on the public instance (not the NAT). I didn't want to use the NAT for this, since AWS apparently pre-configures the NAT instances when you set them up, and I heard using iptables can mess that up.

        *filter
        :INPUT ACCEPT [129:12186]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [84:10472]
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i eth0 -p tcp -m state --state NEW -m tcp --dport 1300 -j ACCEPT
        -A INPUT -d 10.10.10.10/32 -p tcp -m limit --limit 5/min -j LOG --log-prefix "SSH Dropped: "
        -A FORWARD -d 10.10.10.10/32 -p tcp -m tcp --dport 1300 -j ACCEPT
        -A OUTPUT -o lo -j ACCEPT
        COMMIT
        # Completed on Wed Apr 17 04:19:29 2013
        # Generated by iptables-save v1.4.12 on Wed Apr 17 04:19:29 2013
        *nat
        :PREROUTING ACCEPT [2:104]
        :INPUT ACCEPT [2:104]
        :OUTPUT ACCEPT [6:681]
        :POSTROUTING ACCEPT [7:745]
        -A PREROUTING -i eth0 -p tcp -m tcp --dport 1300 -j DNAT --to-destination 10.10.10.10:1300
        -A POSTROUTING -p tcp -m tcp --dport 1300 -j MASQUERADE
        COMMIT

    So when I try this from home, it just times out. No connection-refused messages or anything, and I can't seem to find any log messages about dropped packets. My security groups and ACL settings allow communication on these ports in both directions in both subnets and on the NAT. I'm at a loss. What am I doing wrong?
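
    Two pieces are commonly missing from this kind of forwarding setup and are worth ruling out first (a sketch using the question's addresses; on EC2 the forwarding instance additionally needs its Source/Dest Check attribute disabled):

        # enable kernel packet forwarding, which is off by default
        # (persist it via /etc/sysctl.conf)
        sysctl -w net.ipv4.ip_forward=1
        # allow the return half of forwarded connections through FORWARD
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        # masquerade by destination, so replies from 10.10.10.10 come back
        # through this host instead of going straight to the client
        iptables -t nat -A POSTROUTING -d 10.10.10.10 -p tcp --dport 1300 -j MASQUERADE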

    Read the article

  • Django | Apache | Deploy website behind SSL

    - by planet260
    So here are my requirements. I have a website built in Django, deployed on Apache on Ubuntu. Before, there was no SSL involved, so the deployment was pretty simple. But now the requirements have changed: I have to serve a few actions like signup and login behind SSL, and present the admin panel and everything else normally via HTTP. By following this tutorial I have set up Apache and SSL and generated certificates for SSL communication. But I am not sure how to proceed, i.e. how to serve only a few of my actions through SSL. Below is my configuration. The normal actions are working fine, but I don't know how to configure the SSL calls.

        WSGIScriptAlias / /home/ubuntu/myproject/src/myproject/wsgi.py
        WSGIPythonPath /home/ubuntu/myproject/src

        <VirtualHost *:80>
            ServerName mydomain.com
            <Directory /home/ubuntu/myproject/src/myproject>
                <Files wsgi.py>
                    order deny,allow
                    Allow from all
                </Files>
            </Directory>
            Alias /static/admin/ "/home/ubuntu/myproject/src/static/admin/"
            <Directory "/home/ubuntu/myproject/src/static/admin/">
                Order allow,deny
                Options Indexes
                Allow from all
                IndexOptions FancyIndexing
            </Directory>
            <Location "/login">
                RewriteEngine on
                RewriteRule /admin(.*)$ https://mydomain.com/login$1 [L,R=301]
            </Location>
        </VirtualHost>

        <VirtualHost *:443>
            ServerName mydomain.com
            SSLEngine on
            SSLOptions +StrictRequire
            SSLCertificateFile /etc/apache2/ssl/apache.crt
            SSLCertificateKeyFile /etc/apache2/ssl/apache.key
            <Directory /home/ubuntu/myproject/src/myproject>
                <Files wsgi.py>
                    order deny,allow
                    Allow from all
                </Files>
            </Directory>
            Alias /static/admin/ "/home/ubuntu/myproject/src/static/admin/"
            <Directory "/home/ubuntu/myproject/src/static/admin/">
                Order allow,deny
                Options Indexes
                Allow from all
                IndexOptions FancyIndexing
            </Directory>
        </VirtualHost>

    Can you please help me out on how to achieve this? What am I doing wrong? I have read a lot of tutorials, but honestly I am not really good at configurations. Any help is appreciated.
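
    A sketch of the redirect piece, assuming mod_rewrite is enabled and that /login and /signup are the only paths to force onto HTTPS (note the RewriteRule inside the <Location "/login"> block above matches /admin, which looks unintended): inside the *:80 vhost, redirect just those paths and let everything else be served as plain HTTP.

        # inside <VirtualHost *:80>
        RewriteEngine on
        RewriteRule ^/(login|signup)(.*)$ https://mydomain.com/$1$2 [L,R=301]

    One caveat on the overall design: bouncing the user back from HTTPS to plain HTTP after login sends the session cookie in clear text, which is why many deployments keep everything after login on HTTPS instead.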

    Read the article

  • How to unmangle PDF format into a usable text or spreadsheet document?

    - by Chuck
    Upon requesting some daily/hourly sales data from a co-worker who is responsible for such requests, I was given a series of PDF files. The point-of-sale program that is used, for some reason, answers requests for this type of information in the form of PDF files.

    The issue: the PDF files look to be in a format that should easily be copied and pasted into a spreadsheet. There are three columns that look to be neatly organized across two pages. When copy/pasting the first page, all three columns from the PDF's first page are dumped into a single column consisting of the Date followed by the Hours for the transactions on that day. The end of this Date/Time information is followed by all of the Total Sales values that should be attached to a Date and Time of the transaction. (NOTE: There are no duplicated Dates in the Date column, i.e., multiple transactions for a day only have one yyyy/mm/dd listed for the first row but not the following rows.) While it was a huge pain, it was possible, in about four or five steps, to get the single column of data broken out into three columns that matched the PDF. The second page of the PDF file, when pasted into a spreadsheet, creates a single column with the first third of the cells being the Dates from the PDF, the second third being the Hours of the transactions and the final third being the Total Sales. After the copy/paste there is no way to figure out which Hours belong to which Dates or Total Sales, due to the lack of duplicated Dates in the Date column as mentioned above.

    My PDF-fu is next to non-existent. I've just now started to work with PDF editors and some www.convertmyPDFforfree.com websites, so far with absolutely nothing remotely approaching usable output. (Both methods have so far done nothing but produce blank documents.) Before I go back and pester my co-worker into figuring out a way to create a report in some format other than PDF, is there any method by which to take the data that looks to be formatted correctly in a PDF and copy/paste it into a spreadsheet so that it will look the same?

    I appreciate any help that can be made available. The sales data isn't so sensitive that I couldn't part with a bit to let somebody actually see what it is that needs to be dealt with; just let me know. The PDFs are less than 100kb each, so sending them shouldn't be a burden to any interested party.
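
    One cheap thing to try before asking for a different report format: the pdftotext utility (from the poppler/xpdf tools) has a -layout mode that preserves the visual column positions instead of the PDF's internal text order, and it is usually that internal order that scrambles a paste from a viewer. A sketch with a placeholder file name:

        # keep the on-page column layout; the result can then be imported
        # into a spreadsheet as fixed-width or whitespace-delimited text
        pdftotext -layout sales.pdf sales.txt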

    Read the article

  • Internet Pings but Does Not Load

    - by t3techcom18
    From what I've been seeing and the research I've been doing for the past two days, many people have had the same issue throughout the years; however, this is the first time I've encountered it, and many of the specific workarounds or fixes have not worked for me. I've been trying to work through this for 24 hours straight now, but to no avail, so many thanks to those that can help.

    On Monday night I got home from work and surfed the internet for half an hour; everything was fine as always. Just after half an hour, my Internet got very sluggish and then died completely. I thought it might have been the update I had just put through, a Windows Update marked critical for MSE, as the same thing happened a few years ago. I did a System Restore to two different dates within the past two weeks: nothing. Uninstalled MSE and disabled Windows Defender and the Windows Firewall: nothing. Reset IE options, reset Winsock, dumped DNS, ran many of the other command-prompt resets: nothing. Reset the modem: nothing.

    What DID work, however, was a ping test to Yahoo. The ping test worked, saying all four packets were received, yet nothing else came up. The LAN and CenturyLink checks said everything worked on their end and that everything was connected properly, with the speeds working fine. CenturyLink said in their notes that they thought port 80 was blocked. I went and allowed port 80 in the firewall, but it didn't make any difference whatsoever. I remembered I had a spare modem laying around and I switched them up, both modem and cords: nothing. I then hooked it up to my netbook to see if that would work, as it usually does; the connection didn't work there either.

    Like I said, it's been about 24 hours now and this is increasingly frustrating, as I've tried all the suggested solutions (while browsing through 10 pages of search results on my phone) and still nothing. Any suggestions and tricks would be greatly appreciated! Here are my specs:

        Windows 7 32-bit Home Premium
        Intel Core 2 Duo @ 3.14 GHz
        4 GB Kingston DDR2 RAM
        eVGA nForce 750i SLI
        eVGA GeForce GTX 560 Ti FPB
        ISP: CenturyLink
        No router
        Modem: CenturyLink 660 Series
        Hardwired connection

    PLEASE NOTE: This is the only computer I have (like I said, the netbook solution didn't work), so downloading programs and such is not an option till I get to other computers somewhere else. Unless someone knows of a way of copying/pasting a file in Windows and then transferring said info to an Android smartphone, this is gunna take a while haha. Patience is requested.

    Read the article

  • WPF Focus In Tab Control Content When New Tab is Created

    - by Phil Sandler
    I've done a lot of searching on SO and Google around this problem, but can't seem to find anything else to try. I have a MainView (window) that contains a tab control. The tab control binds to an ObservableCollection of ChildViews (user controls). The MainView's ViewModel has a method that allows adding to the collection of ChildViews, which then creates a new tab. When a new tab is created, it becomes the active tab, and this works fine. This method on the MainView is called from another ViewModel (OtherViewModel).

    What I am trying to do is set the keyboard focus to the first control on the tab (an AutoCompleteBox from WPFToolkit*) when a new tab is created. I also need to set the focus the same way but WITHOUT creating a new tab (so, set the focus on the currently active tab). (*Note that there seem to be some focus problems with the AutoCompleteBox: even if it does have focus, you need to send a MoveNext() to it to get the cursor in its window. I have worked around this already.)

    So here's the problem. The focusing works when I don't create a new tab, but it doesn't work when I do create a new tab. Both functions use the same method to set focus, but the create logic first calls the method that creates a new tab and sets it to active. Code that sets the focus (in the ChildView's code-behind):

        IInputElement element1 = Keyboard.Focus(autoCompleteBox);
        // plus code to deal with AutoCompleteBox as noted.

    In either case, the Keyboard.FocusedElement starts out as the MainView. After a create, calling Keyboard.Focus seems to do nothing (the focused element is still the MainView). Calling this without creating a tab correctly sets the keyboard focus to the AutoCompleteBox. Any ideas?

    Update: Bender's suggestion half-worked. So now, in both cases, the focused element is correctly the AutoCompleteBox. What I then do is MoveNext(), which sets the focus to a TextBox. I have been assuming that this TextBox is internal to the AutoCompleteBox, as the focus was correctly set on screen when this happened. Now I'm not so sure. This is still the behavior I see when this code gets hit when NOT doing a create. After a create, MoveNext() sets the focus to an element back in my MainView. The problem must still be along the lines of Bender's answer, where the state of the controls is not the same depending on whether a new tab was created or not. Any other thoughts?

    Read the article

  • ActiveDirectoryMembershipProvider and ADAM (or AD LDS) and SetPassword

    - by Iulian
    The subject line covers a rather broad topic, and I need some help here. Basically what I want is to use ActiveDirectoryMembershipProvider with an ADAM instance to authenticate users in an ASP.NET web application. My development environment is a Windows 7 machine with an AD LDS instance on it, whilst the QA server is a Windows 2003 server with an ADAM instance on it. I have all the required users on both instances, plus one with the Administrator role (CN=Admin,CN=xxx,DC=xxx,C=xx) which I want to use as the connection user.

    Using connectionProtection="None" connectionUsername="CN=Admin,CN=xxx,DC=xxx,C=xx" connectionPassword="xxx" I am able to authenticate in both environments (dev & QA). If I change connectionProtection to "Secure", I am no longer able to authenticate; the error I get is "Parser Error Message: Unable to establish secure connection with the server". To me it sounds wrong to use connectionProtection="None", although I found a lot of samples on the net using this setting. Can I use connectionProtection="Secure" to connect to an ADAM instance using an account defined on that instance that has the Administrator role? What other choices do I have (like using a domain account)? What if the machine where I am to deploy the application is not part of the domain; will this affect the behavior in any way? I am a novice in this respect, so I would really appreciate some clear answers or some directions on where to look.

    Now, besides the "signing in" feature of the ActiveDirectoryMembershipProvider, I also want to add an extra one: setting the password without knowing the old one (something that will be used by a "reset password" feature). So I added a couple of extension methods to the provider and used System.DirectoryServices classes like DirectoryEntry and the like. When creating a directory entry I use the same credentials provided in web.config for the provider, minus the AuthenticationType, as I don't know which combination of the flags corresponds to None/Secure. I am able to use Invoke "SetPassword" with the ADS_OPTION_PASSWORD_METHOD option set to ADS_PASSWORD_ENCODE_CLEAR on my dev machine (with AD LDS); nevertheless, in the QA environment (with ADAM) I am getting an error like "Exception Details: System.DirectoryServices.DirectoryServicesCOMException: An operations error occurred. (Exception from HRESULT: 0x80072020)". I am quite sure it is not about AD LDS vs ADAM, but probably another configuration/permission issue.

    So can anyone help me with some hints on how to use this SetPassword feature? And as a general question, what are the best practices when it comes to using ADAM regarding security, programming and so on? Thanks in advance, Iulian
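
    For reference, a hedged sketch of the provider wiring described above; the attribute names follow the stock ActiveDirectoryMembershipProvider, but the server, port and container are placeholders. One relevant detail: connectionProtection="Secure" makes the provider require signing and sealing (or SSL), and a default ADAM install frequently has no certificate bound to its LDAPS port, which would explain "Unable to establish secure connection" even though "None" works:

        <connectionStrings>
          <add name="AdamServer"
               connectionString="LDAP://qa-server:389/CN=xxx,DC=xxx,C=xx" />
        </connectionStrings>
        <system.web>
          <membership defaultProvider="AdamMembership">
            <providers>
              <add name="AdamMembership"
                   type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                   connectionStringName="AdamServer"
                   connectionProtection="Secure"
                   connectionUsername="CN=Admin,CN=xxx,DC=xxx,C=xx"
                   connectionPassword="xxx" />
            </providers>
          </membership>
        </system.web>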

    Read the article

  • InteropServices COMException when executing a .net app from a web CGI script on Windows Server 2003

    - by Kurt W. Leucht
    Disclaimer: I'm completely clueless about .NET and COM. I have a vendor's application that appears to be written in .NET, and I'm trying to wrap it with a web form (a cgi-bin Perl script) so I can eventually launch this vendor's app from a separate computer. I'm on a Windows Server 2003 R2 SE SP1 system, using Apache 2.2 for the web server and ActivePerl 5.10.0.1004 for the CGI script. My CGI script calls the vendor's app, which resides on the same machine, using the Perl backtick operator:

        ...
        $result = "Result: " . `$vendorsPath/$vendorsExecutable $arg1 $arg2`;
        ...

    Right now I'm just running the IE web browser locally on the server machine and accessing "http://localhost/cgi-bin/myPerlScript.pl". The vendor's app fails and logs a debug message that includes the following stack trace (I changed a couple of names so as to not give away the vendor's identity):

        ...
        System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation.
        ---> System.Runtime.InteropServices.COMException (0x80043A1D): 0x80040154 - Class not registered
        --- End of inner exception stack trace ---
        at System.RuntimeType.InvokeDispMethod(String name, BindingFlags invokeAttr, Object target, Object[] args, Boolean[] byrefModifiers, Int32 culture, String[] namedParameters)
        at System.RuntimeType.InvokeMember(String name, BindingFlags invokeAttr, Binder binder, Object target, Object[] args, ParameterModifier[] modifiers, CultureInfo culture, String[] namedParameters)
        at VendorsTool.Engine.Core.VendorsEngine.LoadVendorsServices(String fileName, String& projectCommPath)
        ...

    When I run the vendor's app from the Windows command line on the server machine with the exact same arguments that the CGI script is passing, it runs just fine, so there's something about invoking their app via the web script that is causing a problem. The problem is likely security related, because the whole thing runs just fine on a Windows XP Pro machine (both command line and web invocation). I actually developed my web script there and got it completely working before I moved it to the Windows Server 2003 machine.

    So what's different about the Windows Server 2003 machine that would keep the vendor's .NET app from being executed successfully by a web CGI script? Can I fix this problem somehow to make it work on my server, or will the vendor have to make a change to their .NET app and ship a new version? I'm probably the only person in the world who is trying to execute this vendor's app from a separate program, so I hate to bother the vendor with the issue if there's a workaround that I can implement myself here on my server machine. Plus, I'm in kind of a hurry and I don't want to wait 4 to 6 months for the vendor to put in a fix and deploy a new version. Thanks for any advice you can give.

    Read the article

  • ASP.NET MVC 2: Updating a Linq-To-Sql Entity with an EntitySet

    - by Simon
    I have a Linq to Sql entity which has an EntitySet. In my view I display the entity with its properties, plus an editable list for the child entities. The user can dynamically add and delete those child entities. The DefaultModelBinder works fine so far; it correctly binds the child entities. Now my problem is that I just can't get Linq To Sql to delete the deleted child entities; it will happily add new ones, but not delete the removed ones. I have enabled cascade deleting in the foreign key relationship, and the Linq To Sql designer added the DeleteOnNull="true" attribute to the foreign key relationships.

    If I manually delete a child entity like this, the child record is deleted from the DB:

        myObject.Childs.Remove(child);
        context.SubmitChanges();

    But I can't get it to work for a model-bound object. I tried the following, which does nothing:

        public ActionResult Update(int id, MyObject obj)   // obj now has 4 child entities
        {
            var obj2 = _repository.GetObj(id);             // obj2 has 6 child entities
            if (TryUpdateModel(obj2))                      // it successfully updates obj2 and its childs
            {
                _repository.SubmitChanges();               // nothing happens, records stay in DB
            }
            else
            {
                // .....
            }
            return RedirectToAction("List");
        }

    And this throws an InvalidOperationException. I have a German OS, so I'm not exactly sure what the error message is in English, but it says something along the lines of the entity needing a version (timestamp row?) or no update-check policies. I have set UpdateCheck="Never" on every column except the primary key column.

        public ActionResult Update(MyObject obj)
        {
            _repository.MyObjectTable.Attach(obj, true);
            _repository.SubmitChanges();   // never gets here, exception at Attach
        }

    I've read a lot about similar "problems" with Linq To Sql, but it seems most of those "problems" are actually by design. So am I right in my assumption that this doesn't work like I expect it to? Do I really have to manually iterate through the child entities and delete, update and insert them manually? For such a simple object this may work, but I plan to create more complex objects with nested EntitySets and so on. This is just a test to see what works and what not. So far I'm disappointed with Linq To Sql (maybe I just don't get it). Would the Entity Framework or NHibernate be a better choice for this scenario? Or would I run into the same problem?
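    For what it's worth, one pattern that can make the first action work is to diff the posted children against the loaded ones and explicitly remove the orphans before submitting; with DeleteOnNull="true", removing a child from its parent marks the row for deletion. This is only a sketch with hypothetical names (Child.Id as the key, a Childs EntitySet), and whether TryUpdateModel really leaves deleted children out of the bound collection should be verified in the debugger.

        // Assumes: using System.Collections.Generic; using System.Linq;
        public ActionResult Update(int id, MyObject posted)    // posted carries the 4 surviving children
        {
            var db = _repository.GetObj(id);                   // attached entity with 6 children

            // Ids of the children that survived the edit in the browser.
            var survivorIds = new HashSet<int>(posted.Childs.Select(c => c.Id));

            // Remove the children the user deleted; DeleteOnNull="true" turns
            // the orphaned rows into DELETEs on SubmitChanges().
            foreach (var child in db.Childs.Where(c => !survivorIds.Contains(c.Id)).ToList())
            {
                db.Childs.Remove(child);
            }

            if (TryUpdateModel(db))    // bind scalar values and the surviving children
            {
                _repository.SubmitChanges();
            }
            return RedirectToAction("List");
        }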

    Read the article

  • Automatically extracting inline XSD from WSDL into XSD file(s)

    - by Steven Geens
    I am using a third-party web service whose definition and implementation are beyond my control. This web service will change in the future. The web service should be used to generate an XML file which contains some of the same data (represented by the same XSD types) as the web service, plus some extra information generated by the program.

    My approach:

    1. Create my own XSD referring to the XSD definitions of the WSDL of the called web service. (This XSD also includes XSD types for the extra information, obviously.)
    2. Use a Java XML databinding framework (like ADB or JiBX) to generate the databinding classes from my own XSD file from step 1.
    3. Use a Java SOAP framework (like Axis2 or CXF) with the same databinding framework to generate the databinding classes from the WSDL. (This would enable me to use the objects retrieved by the web service directly in the generation of the XML.)

    The XSD types I am going to use in my own XSD file, but which are defined in the WSDL, are subject to change. Whenever they change, I would like to automatically run the XSD and WSDL databinding again. (If the change is significant enough, this might trigger some development effort, but usually not.)

    My problem: in step 1 I need an XSD referring to the same types as used by the web service. The WSDL refers to another WSDL, which refers to another WSDL, and so on. Eventually there is a WSDL with the needed inline XSD types. As far as I know, there is no way to directly reference the inline XSD types of a WSDL from an XSD. The approach I would think most viable is to include an extra step in the automatic processing (before the databinding) that extracts the inline XSD from the WSDL into separate XSD file(s). These other XSD file(s) can then be referred to by my own XSD file.

    Things I'd like to avoid:

    - Manually copy-pasting the inline XSD into an XSD file (I am looking for an automatic process).
    - Any manual steps (like manually determining which WSDL contains the inline types; the location of that WSDL changes as well).
    - Using xsd:any in my own XSD. I would like my own XSD file to be correct.
    - Using a non-Java technology (like .NET).
    - Huge amounts of implementation (but hints on how you would implement such an extraction are welcome anyway).

    PS: I found some similar questions, but they all had responses like: WTH would you want to do that? That is the reason for my rather large background story.
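    As a hedged sketch of the extraction step itself: the inline types live under wsdl:types, so a small XSLT transform can pull them out once the right WSDL has been located. The stylesheet below assumes WSDL 1.1 namespaces and copies the first inline xsd:schema verbatim (in-scope namespace declarations travel with the copy); if a WSDL embeds several schemas, they would need to be written to separate files, e.g. with XSLT 2.0's xsl:result-document.

        <?xml version="1.0"?>
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
            xmlns:xsd="http://www.w3.org/2001/XMLSchema">

          <xsl:output method="xml" indent="yes"/>

          <!-- Emit the first inline schema found under wsdl:types. -->
          <xsl:template match="/">
            <xsl:copy-of select="wsdl:definitions/wsdl:types/xsd:schema[1]"/>
          </xsl:template>

        </xsl:stylesheet>

    Such a transform could run as a build step (an Ant or Maven task) right before the databinding; following the wsdl:import chain to find the WSDL that actually carries the inline types would still need a small driver, but that part is mechanical.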

    Read the article

  • Resizing QT's QTextEdit to Match Text Height: maximumViewportSize()

    - by Aaron
    I am trying to use a QTextEdit widget inside of a form containing several Qt widgets. The form itself sits inside a QScrollArea that is the central widget for a window. My intent is that any necessary scrolling will take place in the main QScrollArea (rather than inside any widgets), and any widgets inside will automatically resize their height to hold their contents.

    I have tried to implement the automatic height resizing with a QTextEdit, but have run into an odd issue. I created a subclass of QTextEdit and reimplemented sizeHint() like this:

        QSize OperationEditor::sizeHint() const
        {
            QSize sizehint = QTextBrowser::sizeHint();
            sizehint.setHeight(this->fitted_height);
            return sizehint;
        }

    this->fitted_height is kept up to date via this slot, which is wired to the QTextEdit's contentsChanged() signal:

        void OperationEditor::fitHeightToDocument()
        {
            this->document()->setTextWidth(this->viewport()->width());
            QSize document_size(this->document()->size().toSize());
            this->fitted_height = document_size.height();
            this->updateGeometry();
        }

    The size policy of the QTextEdit subclass is:

        this->setSizePolicy(QSizePolicy::MinimumExpanding, QSizePolicy::Preferred);

    I took this approach after reading this post. Here is my problem: as the QTextEdit gradually resizes to fill the window, it stops getting larger and starts scrolling within the QTextEdit, no matter what height is returned from sizeHint(). If I initially have sizeHint() return some large constant number, then the QTextEdit is very big and is contained nicely within the outer QScrollArea, as one would expect. However, if sizeHint() gradually adjusts the size of the QTextEdit rather than just making it really big to start with, then it tops out when it fills the current window and starts scrolling instead of growing.

    I have traced this problem to the fact that, no matter what my sizeHint() returns, the QTextEdit will never be resized larger than the value returned from maximumViewportSize(), which is inherited from QAbstractScrollArea. Note that this is not the same number as viewport()->maximumSize(). I am unable to figure out how to set that value. Looking at Qt's source code, maximumViewportSize() returns "the size of the viewport as if the scroll bars had no valid scrolling range." This value is basically computed as the current size of the widget minus (2 * frameWidth + margins), plus any scrollbar widths/heights. This does not make a lot of sense to me, and it's not clear why that number would be used anywhere in a way that supersedes the subclass's sizeHint() implementation. Also, it does seem odd that the single frameWidth integer is used in computing both the width and the height.

    Can anyone please shed some light on this? I suspect that my poor understanding of Qt's layout engine is to blame here.

    Read the article
