Search Results

Search found 11111 results on 445 pages for 'fahnz mode'.

Page 346 of 445

  • How to stop Firefox on an SSD from freezing when using the search box or submitting a form?

    - by sblair
    Firefox usually freezes for about a second whenever I search for something from the toolbar search box, when submitting a form, or when clearing the search box history. I suspect it has something to do with the auto-complete feature. Using Windows 7's Resource Monitor, the problem seems to come from the file:
        C:\Users\<username>\AppData\Roaming\Mozilla\Firefox\Profiles\<profile>\formhistory.sqlite-journal
    I believe this is a temporary file which caches database writes. A Resource Monitor screenshot (not reproduced here) showed very high response times across six different searches, with the queue length on drive C going off the scale. My Firefox profile is on an Intel X25-M G2 SSD. The problem doesn't seem to occur if I create a new profile on a hard disk drive. However, I'd like to know why the problem exists on the SSD in the first place (it's an annoying problem which contradicts the reason I bought an SSD, and it might happen with other applications too), and how to prevent it. It still occurs if Firefox is started in safe mode, and with the recent beta versions.
    Updates: VACUUMing the Firefox profile databases does not help with this problem. The SSD Optimizer in the Intel SSD Toolbox does not help either.
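    One workaround worth trying, as a minimal sketch: disable the form/search history so formhistory.sqlite is no longer written on every search. This assumes the stalls really do come from those writes; browser.formfill.enable is the stock Firefox preference for this, settable in about:config or via a user.js file in the profile directory:

        // user.js - stop Firefox from recording form and search-box history
        user_pref("browser.formfill.enable", false);

    The trade-off is losing autocomplete suggestions in forms and the search box, so this is more of a diagnostic than a fix.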

    Read the article

  • Virtualbox port forwarding with iptables

    - by jverdeyen
    I'm using a virtual machine (VirtualBox) as a mail server. The host is Ubuntu 12.04 and the guest is Ubuntu 10.04. At first I forwarded port 25 to 2550 on the host and added a port-forward rule in VirtualBox from 2550 to 25 on the guest. This works for all ports needed by the mail server. The guest has a host-only connection and a NAT connection (with the port forwarding). My mail server was receiving and sending mail properly, but all connections come from the VirtualBox internal IP, so every host connection is allowed, and that's not what I want. So I'm trying to skip the VirtualBox forwarding part and just forward port 25 to the host-only IP of the guest system. I used these rules:
        iptables -F
        iptables -P INPUT ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -t nat -P PREROUTING ACCEPT
        iptables -t nat -P POSTROUTING ACCEPT
        iptables -A INPUT --protocol tcp --dport 25 -j ACCEPT
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -s 192.168.99.0/24 -i vboxnet0 -j ACCEPT
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A PREROUTING -p tcp -i eth0 -d xxx.host.ip.xxx --dport 25 -j DNAT --to 192.168.99.105:25
        iptables -A FORWARD -s 192.168.99.0/24 -i vboxnet0 -p tcp --dport 25 -j ACCEPT
        iptables -t nat -A POSTROUTING -s 192.168.99.0 -o eth0 -j MASQUERADE
        iptables -L -n
    But after these changes I still can't connect with a simple telnet (which was possible with my first solution). The guest machine doesn't have any firewall. I only have one network interface on the host (eth0) and a host-only interface (vboxnet0). Any suggestions? Or should I go back to my old solution (which I don't really like)?
    Edit: bridged mode isn't an option, I have only one IP available for the moment. Thanks!
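    One detail worth checking, sketched below under the assumption that 192.168.99.105 is the guest's host-only address: with a plain DNAT the guest sees the original client address, and its replies may leave through its own NAT interface instead of going back over vboxnet0, which breaks the TCP handshake. Masquerading the forwarded connections makes the guest reply to the host-only network, keeping the path symmetric (at the cost of once again hiding the real client IP):

        # DNAT incoming SMTP on the public interface to the guest's host-only address
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 25 -j DNAT --to-destination 192.168.99.105:25
        # rewrite the source so the guest answers back through vboxnet0
        iptables -t nat -A POSTROUTING -o vboxnet0 -p tcp -d 192.168.99.105 --dport 25 -j MASQUERADE

    If keeping the real client address is the whole point, the alternative is making sure the guest's default route points back at the host's vboxnet0 address.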

    Read the article

  • VLAN Tagging Traffic on Cisco Switch

    - by David W
    I have a situation where I'm setting up multiple VLANs on a pfSense firewall on the same physical interface for a client. So in pfSense, I now have VLAN 100 (employees) and VLAN 200 (students - student computer lab). Downstream from pfSense, I have a Cisco SG200 switch, and coming off of the SG200 is the student lab (running on a Catalyst 2950. Yes, that's old, but it works, and this is a poor nonprofit we're talking about). What I'd like to do is tag everything on the network as VLAN 100, except for the student computer lab.
    Earlier today when I was on-site with the client, I went into the old Catalyst 2950 and assigned all of its ports to access VLAN 200 (switchport mode access vlan 200) without setting up a trunk on the Catalyst or on the SG200. Looking back on it, I now understand why internet in the lab broke. I reverted the lab back to the default VLAN 1 (we're still running on a different firewall, since we haven't deployed pfSense yet, and the traffic is still separated physically).
    So my question is, what do I need to do in order to properly deploy this scenario? I believe the correct answer is:
    1. Ensure VLANs 100 and 200 are set up in pfSense, and that DHCP is operating correctly (on separate subnets).
    2. Set up a trunk port that carries both VLAN 100 and 200 traffic, and plug that port directly into pfSense.
    3. Set up a VLAN 200 trunk port on the SG200 (it's not running IOS, but if it were, the command would be switchport trunk native vlan 200), which will then plug into the Catalyst 2950.
    4. Set up a VLAN 200 trunk port on the Catalyst 2950 (the one plugged into the SG200 VLAN 200 port, with the same command: switchport trunk native vlan 200) - see the IOS sketch below.
    5. Set up the rest of the ports on the old Catalyst 2950 in the lab as access ports on VLAN 200.
    Is there anything that I'm missing, or do I need to tweak any of these steps, in order to properly segment the network traffic?
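    As a rough sketch of what steps 4 and 5 could look like in IOS on the 2950 (interface numbers are placeholders, the 2950 only speaks 802.1Q so no encapsulation command is needed, and both VLANs must already exist on the switch):

        ! uplink towards the SG200: carry both VLANs, leave 200 untagged (native)
        interface FastEthernet0/24
         switchport mode trunk
         switchport trunk native vlan 200
         switchport trunk allowed vlan 100,200
        ! lab machines: plain access ports in VLAN 200
        interface range FastEthernet0/1 - 23
         switchport mode access
         switchport access vlan 200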

    Read the article

  • Debian Bluetooth headphones not working

    - by cYrus
    Hardware: Bluetooth headphones and a Bluetooth dongle (maybe not exactly these models).
    Setup: I tried to follow some guides; here's what I've done so far.
    Install the software:
        sudo apt-get install bluez-utils bluez-alsa
    Reboot (just to be sure):
        $ dmesg | grep -i bluetooth
        [ 20.268212] Bluetooth: Core ver 2.16
        [ 20.268230] Bluetooth: HCI device and connection manager initialized
        [ 20.268233] Bluetooth: HCI socket layer initialized
        [ 20.268235] Bluetooth: L2CAP socket layer initialized
        [ 20.268239] Bluetooth: SCO socket layer initialized
        [ 20.284685] Bluetooth: RFCOMM TTY layer initialized
        [ 20.284692] Bluetooth: RFCOMM socket layer initialized
        [ 20.284693] Bluetooth: RFCOMM ver 1.11
        [ 20.335375] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
        [ 20.335378] Bluetooth: BNEP filters: protocol multicast
    The daemon is running:
        $ /etc/init.d/bluetooth status
        [ ok ] bluetooth is running.
    Plug in the dongle:
        $ dmesg | tail
        [...]
        [23108.352034] usb 5-2: new full-speed USB device number 2 using ohci_hcd
        [23108.571131] usb 5-2: New USB device found, idVendor=0a12, idProduct=0001
        [23108.571136] usb 5-2: New USB device strings: Mfr=0, Product=0, SerialNumber=0
        [23108.629042] usbcore: registered new interface driver btusb
    Put the headphones in pairing mode and try scanning:
        $ hcitool scan
        Scanning ...
    Found nothing. What's next? What should I try? I'll update this question as soon as you provide me with hints.
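    Before pairing tools come into play, it may be worth confirming that the dongle actually registered as an HCI adapter and is up; the dmesg output above shows the btusb driver loading but says nothing about an hci0 device. A quick check, as a sketch using the stock bluez-utils commands:

        hciconfig -a              # should list hci0 with a BD address; empty output means no adapter
        sudo hciconfig hci0 up    # bring the adapter up if it is shown as DOWN
        sudo hcitool -i hci0 scan # retry the scan against that adapter explicitly

    If hciconfig shows nothing at all, the dongle is not being picked up as an HCI device and no amount of scanning will help.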

    Read the article

  • Sync desktop Mac environment to laptop

    - by Andrew Vit
    I spend the majority of my time working at my desktop Mac, which I have configured for my web development environment. My spouse has a MacBook for casual use, and I occasionally steal it back when I need to work off-site, or when travelling. The question is how to best synchronize the two so I can switch between them more readily. I've solved a few obvious things by using online services:
    - Email is hosted on IMAP.
    - Working files are in Dropbox.
    - Source code is managed in git.
    However, the following are things I always miss when jumping on the laptop:
    - Installed applications (current versions)
    - Installed libraries & utilities (/usr/local)
    - Apache VirtualHosts & other configurations (/etc)
    - Disk image files for VMs
    My current method is to connect the MacBook via FireWire target disk mode and rsync the /Users/me home directory, and then cherry-pick the other items I need from Applications, /etc and /usr/local. The problem with this method is that it can be very time consuming due to things like my virtual machine image files, cached emails, etc. How can I make this faster and easier? Can you recommend a solution for configuration management (so I can repeatably install & configure the same software on both), or synchronization (so I can bring the MacBook up to date nightly, over our home network)?
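    For the nightly-sync half of the question, one approach is simply to keep using rsync, but over the network and with the slow items excluded, since they are rebuildable or already synced elsewhere. A minimal sketch, where the host name and exclude paths are placeholders to adapt:

        # pull the desktop home directory onto the MacBook over the home network,
        # skipping VM images and caches that dominate the transfer time
        rsync -av --delete \
          --exclude 'Documents/Virtual Machines/' \
          --exclude 'Library/Caches/' \
          --exclude 'Library/Mail/' \
          desktop.local:/Users/me/ /Users/me/

    Run from launchd or cron on the laptop, that covers the home directory; /usr/local and /etc would still need either their own rsync lines or a configuration-management tool.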

    Read the article

  • HP Laserjet 1320 drivers for Windows 7

    - by RedGrittyBrick
    Background: I replaced some XP computers with Windows 7 computers. We have an HP LaserJet 1320dn printer attached to the LAN. The XP computers could print to it from Word in duplex mode without any issues.
    Problem: The Windows 7 computers downloaded a full set of drivers including an "HP Laserjet 1320 PCL5" driver. However, using this, Word's page footers cause extra pages to be printed with just a letter or so at a time from the footer. Some other apps have similar issues. I also have an ancient LaserJet 1200 attached to the LAN via a JetDirect box and that works just fine, so I don't think the problem is with Word (the computers came with Word 2010 Starter).
    What I tried: I wrangled the Control Panel new-printer dialogue into using an "HP Laserjet 2100 PS" driver for the HP LaserJet 1320dn. Now my Word documents print as they should. However, I don't have a duplex option on the print dialogue, and I'd really like to be able to use duplex printing.
    Question: Windows used to have a universal PostScript driver that read a PostScript Printer Description (PPD) file to make the printer's features available (tray choice, duplex etc.). I can't see any way to do this in Windows 7. Is there a way? Is there any other way I can get Windows 7 Home Premium 64-bit to print properly to an HP LaserJet 1320dn and have access to all its major features?
    Addendum: My page in Word looks like this:
        +--------------------+
        | aa    bb    cc     |
        |                    |
        | lorem ipsum dolor  |
        |        ...         |
        |                    |
        | pp    qq    rr     |
        +--------------------+
    The headers and footers were inserted using Insert - Header - Blank (3 column). When printed I get:
    - 1 page with aa, bb, cc, pp in correct position (no other text)
    - 1 blank page
    - 1 page with qq in correct position (no other text)
    - 1 blank page
    - 1 page with rr in correct position (no other text)
    - 1 blank page
    - 1 page with lorem ipsum dolor in correct position (no headers or footers)
    If I use a LaserJet 2100 driver I get 1 correct page.

    Read the article

  • How to take screenshots of WPF applications in correct size and content

    - by Thomas W.
    I usually take screenshots of single windows via the built-in key combination Alt+Print. Unfortunately this does not work well for more and more applications - all of them WPF applications. Usually the screenshots have at least one of the following properties:
    - the screenshot is larger than expected and contains parts of the screen around the actual window
    - the screenshot has the correct size but includes parts of other windows, e.g. the Windows task bar. Of course the task bar might be in front of the window, but taking screenshots of "normal" programs works fine.
    How do I take screenshots of WPF applications which are correct in size and content? I'd like to avoid the extra effort of checking all the screenshots for correctness, reproducing the situation, taking them again in case of issues, or repairing/faking them manually in a pixel-manipulation program (e.g. Paint.NET).
    I observe this on Windows 7 x64 SP1 with all official updates installed, but it might apply to other Windows versions as well (not tested yet). .NET 4.5 is installed; the application itself might only need the built-in .NET 3.5.1. It's reproducible on a virtual machine with the same settings.
    Examples (screenshots not reproduced here): a screenshot of an application running maximized includes parts of the task bar; a screenshot of a progress dialog which is behind the task bar also includes the task bar, while it doesn't for non-WPF applications.

    Read the article

  • How to convert a PDF/DjVu file to PNGs under Linux? [closed]

    - by user66732
    ImageMagick doesn't work (Fedora 14) on one PDF file:
        $ convert -density 300 INPUT.PDF out.png
        Error: /ioerror in --showpage--
        Operand stack: 1 true
        Execution stack: %interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push 1878 1 3 %oparray_pop 1877 1 3 %oparray_pop 1861 1 3 %oparray_pop --nostringval-- --nostringval-- 141 1 319 --nostringval-- %for_pos_int_continue --nostringval-- --nostringval-- 1761 0 9 %oparray_pop --nostringval-- --nostringval--
        Dictionary stack: --dict:1157/1684(ro)(G)-- --dict:1/20(G)-- --dict:75/200(L)-- --dict:75/200(L)-- --dict:108/127(ro)(G)-- --dict:288/300(ro)(G)-- --dict:22/25(L)-- --dict:6/8(L)-- --dict:22/40(L)--
        Current allocation mode is local
        Last OS error: 27
        GPL Ghostscript 8.71: Unrecoverable error, exit code 1
        convert: Postscript delegate failed `INPUT.PDF' @ error/pdf.c/ReadPDFImage/645.
        convert: missing an image filename `out.png' @ error/convert.c/ConvertImageCommand/2953.
    And it doesn't work on a DjVu file:
        $ convert -density 300 INPUT.DJVU out.png
        convert: no decode delegate for this image format `INPUT.DJVU' @ error/constitute.c/ReadImage/532.
        convert: missing an image filename `out.png' @ error/convert.c/ConvertImageCommand/2953.
    As an extra question: the output filenames come out as
        out-0.png out-1.png ... out-9.png out-10.png out-11.png ... out-123.png out-124.png
    Is there a way to get them like this instead?
        out-000.png out-001.png ... out-009.png out-010.png out-011.png ... out-123.png out-124.png
    Otherwise they end up in the wrong order when sorted alphabetically:
        out-0.png out-1.png out-10.png out-11.png out-123.png out-124.png out-9.png
    Thank you :\
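    For the naming part, ImageMagick accepts a printf-style page number in the output filename, which gives zero-padded names directly (a small sketch; the density value is just carried over from the question):

        convert -density 300 INPUT.PDF out-%03d.png

    The PDF and DjVu errors themselves are separate problems: the first is Ghostscript (the PDF delegate) failing on that particular file, and the second means this ImageMagick build has no DjVu decoder, so the file would have to be converted with a DjVu tool first.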

    Read the article

  • Slowdown upon router/modem setup change

    - by Ollie Saunders
    I've been using a Belkin FSD7632-4 modem router to connect to my TalkTalk-provided ADSL internet connection for some time and been pretty happy with it. Recently, however, the connection has been failing, and I decided to get an ASUS RT-N16 instead, which is also a much more capable router generally. The ASUS RT-N16 doesn't come with a modem built in, so I purchased a Zoom modem as well. I've set them both up and am using them to post this message. But I'm a bit miffed to find that I get a significantly and consistently slower downstream rate from the new configuration than with the old Belkin.
        Belkin modem router:      downstream 3.45 Mbps, upstream 0.73 Mbps
        ASUS router + Zoom modem: downstream 2.71 Mbps, upstream 0.66 Mbps
    Any ideas why this is? The really weird thing about this is that the Zoom supports ADSL2 and ADSL2+ but I don't think the old Belkin does. At first I thought it might be due to the Zoom modem being limited to PPPoE instead of PPPoA, which my ISP supports, but then I tried using PPPoE with the Belkin and that still gave a high speed. I'm using VC-Mux encapsulation with both, VPI of 0 and VCI of 38. I pulled this data off the Zoom:
        Mode: ADSL2   Line Coding: Trellis On   Status: No Defect   Link Power State: L0
                                                             Downstream   Upstream
        SNR Margin (dB):                                     12.3         11.8
        Attenuation (dB):                                    43.0         24.9
        Output Power (dBm):                                  12.9         0.0
        Attainable Rate (Kbps):                              3936         844
        Rate (Kbps):                                         3194         840
        MSGc (number of bytes in overhead channel message):  59           10
        B (number of bytes in Mux Data Frame):               99           14
        M (number of Mux Data Frames in FEC Data Frame):     2            16
        T (Mux Data Frames over sync bytes):                 1            8
        R (number of check bytes in FEC Data Frame):         8            8
        S (ratio of FEC over PMD Data Frame length):         1.9833       9.0594
        L (number of bits in PMD Data Frame):                839          219
        D (interleaver depth):                               32           2
        Delay (msec):                                        15           4
        Super Frames:                                        15808        14078
        Super Frame Errors:                                  0            4294967232
        RS Words:                                            513778       111753
        RS Correctable Errors:                               126          4294967238
        RS Uncorrectable Errors:                             0            N/A
        HEC Errors:                                          0            4294967279
        OCD Errors:                                          0            0
        LCD Errors:                                          0            0
        Total Cells:                                         1920175      237597
        Data Cells:                                          205993       392
        Bit Errors:                                          0            0
        Total ES:                                            0            0
        Total SES:                                           0            0
        Total UAS:                                           34           0

    Read the article

  • Getting 404 error on MVC web-site

    - by RB
    I have an IIS 7.5 web site, on Windows Server 2008, with an ASP.NET MVC 2 web site deployed to it. The website was built in Visual Studio 2008, targeting .NET 3.5, and IIS 5.1 has been successfully configured to run it as well, for local testing. We've installed the world's simplest MVC application (the one which is created when you create a new MVC2 project in Visual Studio), and we are getting 404s on any page we try to access - e.g. <my_server>/Home/About will generate a 404. I've asked this question on StackOverflow as well, but that was before I knew it was a server issue. I have checked the following things:
    - There are 404 entries in the IIS log, corresponding to each request.
    - The application pool for the web site is set to use the Integrated pipeline.
    - The "customErrors" mode is set to off.
    - .NET 3.5 SP1 is installed.
    - ASP.NET MVC 2 is installed.
    - I've used MVC Diagnostics to confirm all MVC DLLs are being found.
    - ASP.NET is enabled in IIS, which we've demonstrated by running the MVC Diagnostics page.
    - KB 2023146 did highlight that HTTP Redirection was off, so we've turned it on, but no joy.
    Any ideas will be greatly appreciated! Someone did suggest that there might be problems running it caused by Windows Server 2008 being 64-bit - does anyone know anything about this?
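    One configuration detail that commonly produces blanket 404s for extensionless MVC routes on IIS 7.x, offered here only as a hedged thing to check rather than a confirmed diagnosis: the managed modules (including routing) may not be running for requests like /Home/About. The usual switch is runAllManagedModulesForAllRequests in web.config:

        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>

    If that attribute is already present in the generated MVC web.config, then the 404s are coming from something else (handler mappings, or the extensionless-URL support level on that server).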

    Read the article

  • Node.js Build failed: -> task failed (error#2)?

    - by Richard Hedges
    I'm trying to install Node.js on my CentOS server. I run ./configure and it runs perfectly fine. I then run the 'make' command and it produces the following:
        [5/38] libv8.a: deps/v8/SConstruct -> out/Release/libv8.a
        /usr/local/bin/python "/root/node/tools/scons/scons.py" -j 1 -C "/root/node/out/Release/" -Y "/root/node/deps/v8" visibility=default mode=release arch=ia32 toolchain=gcc library=static snapshot=on
        scons: Reading SConscript files ...
        ImportError: No module named bz2:
          File "/root/node/deps/v8/SConstruct", line 37:
            import js2c, utils
          File "/root/node/deps/v8/tools/js2c.py", line 36:
            import bz2
        Waf: Leaving directory `/root/node/out'
        Build failed: -> task failed (err #2): {task: libv8.a SConstruct -> libv8.a}
        make: *** [program] Error 1
    I've done some searching on Google but I can't seem to find anything to help. Most of what I've found is for Cygwin anyway, and I'm on CentOS 4.9. Like I said, the ./configure went through perfectly fine with no errors, so there's nothing there that I can see.
    EDIT: I've got a little further. Now I just need to upgrade G++ to version 4 (or higher). I tried yum update gcc but no luck, so I tried yum install gcc44, which resulted in no luck either. Has anyone got any ideas as to how I can update G++?
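    The immediate failure is not the compiler at all: scons is being run by /usr/local/bin/python, and that Python build is missing the bz2 module. A hedged sketch of one way forward, assuming that Python in /usr/local was compiled from source on this box (the paths are placeholders):

        # install the bzip2 headers, then rebuild the local Python so "import bz2" works
        yum install -y bzip2-devel
        cd /usr/local/src/Python-2.6.x        # placeholder: wherever that Python's source lives
        ./configure --prefix=/usr/local && make && make install

    The G++ upgrade mentioned in the edit is a separate requirement; running python -c "import bz2" afterwards confirms whether this part is fixed.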

    Read the article

  • Multiheaded X.org with a single workspace-pool

    - by blauwblaatje
    I've got an idea for X.org/$randomwindowmanager in combination with a multi-headed setup, but I haven't figured out how it should work. Also, I don't really know where to place the feature request. Now for the idea. I've been working with screen (wikipedia: GNU_Screen) for some years now. One thing I like about it is the fact that I can get a multi-display mode (screen -x), so you can have multiple terminals all connected to the same screen session. The fun thing about it is that you can have two terminals showing the same content, and switch your on-screen layout without moving the terminals. I admit, in screen it's not extremely useful, but I think for a WM it can be.
    Imagine this. You've got two monitors and 4 workdesks. On one workdesk I've got my IDE with code, on the second one I've got the output, on the third one I've got the documentation, and on the fourth one I've got my e-mail and IM clients. At one moment I want my IDE and output on my monitors, another moment my code and documentation, and yet another moment my IM to consult a colleague, plus the documentation or code. Finally, my colleague comes to help me at my desk. I'd like it if we could both watch the same workdesk without him sitting on my lap, so I turn one monitor so he can see it better. It would be great if we could see the same thing that's on my monitor (excluding the mouse pointer).
    The thing with most WMs is that your workspaces on the two monitors are either separated or glued together. If they're separated, you can change workspaces on each monitor autonomously, but you can't exchange applications between monitors because they're different X clients (iirc). If they're glued together (Xinerama), you can exchange the applications, but when changing your workspace, the other monitors change too. So, what I'd like to know is this: is this already possible, or should I submit a feature request somewhere (and if so, where)?

    Read the article

  • Outlook 2010 not resolving SMTP address to Display Name

    - by Ben
    I have a weird problem where a user (director, naturally) has an odd issue with the way the To: field shows in his Outlook. We are using Outlook 2010 (with RPC/HTTP) and Exchange 2003 at the back end. Most of his mail shows in his mailbox with the To: field as normal (e.g. Fred Bloggs in the To: field). However, some mails come in with the To: field showing the bare SMTP address, [email protected]. (Apparently this is an issue!) There doesn't appear to be a pattern to this. (The original post included screenshots of both cases.)
    I have tried to replicate it by:
    - Sending from specific senders to see if it recurs (it doesn't)
    - Typing his full name in my Outlook and sending (it resolves as normal)
    - Sending programmatically (e.g. from a script) - it still resolves OK
    - Forcing a "[email protected]" entry in my Outlook - it resolves as soon as I hit Enter
    - Sending in cached mode
    - Sending disconnected
    Anyone got any ideas how I can either replicate the problem or fix it? I can't tell at the moment whether it is a problem at his end or the sender's.
    EDIT: This seems to be a global issue, following some more digging. Most people seem to have a few emails in their inbox addressed to their SMTP address rather than their display name.

    Read the article

  • Windows Update and lsass.exe

    - by David
    I have a brand new installation of Windows XP (SP1 or older). I installed Norton AntiVirus, Firefox, PuTTY, and Cygwin. No other software is present. Windows Update finds the following 64 updates: KB905760, KB978262, Internet Explorer 8, KB71961, KB954155, KB968816, KB923561, KB950762, KB949402, KB950974, KB951376, KB951748, KB952004, KB952954, KB955069, KB956572, KB956802, KB956803, KB956844, KB958470, KB958869, KB959426, KB960803, KB960859, KB961501, KB969059, KB970238, KB970238, KB971032, KB971468, KB971657, KB972270, KB973507, KB973815, KB973904, KB974112, KB974318, KB974392, KB975025, KB975560, KB975561, KB975713, ...
    When these updates are applied, the system reboots to a black screen with two error messages. The first error message says:
        lsass.exe - Application Error
        The application failed to initialize properly (0xc00000142). Click on OK to terminate the application.
    The second error message says:
        services.exe - Application Error
        The application failed to initialize properly (0xc00000142). Click on OK to terminate the application.
    I then proceed to boot into Safe Mode, use System Restore, and everything works fine again until the 64 updates re-appear in Windows Update. I can see two options: disable Auto-Updates, or install each of the 64 updates one at a time until finding the troublesome update. Does anyone have any better ideas?

    Read the article

  • Slackware - Assigning routes (IP address ranges) to one of many network adapters

    - by Dogbert
    I am using a Slackware 13.37 virtual machine within VirtualBox (current). I currently have a number of Ubuntu VMs on a single server, along with this Slackware VM. All VMs have been set up to use "Internal Network" mode, so they are all on a private LAN and can see each other (i.e. share files amongst themselves), but they remain private from the outside world. One of these VMs (the Slackware one) needs access to both this private network and the internet at large. The first suggestion I found for handling this is to add another virtual network adapter to the VM and set it to NAT. This results in the Slackware VM having the following network adapter setup:
    - NIC #1: Internal Network
    - NIC #2: NAT
    I want to set up the first network adapter (NIC #1) to handle all traffic on the following subnets:
    - 10.10.0.0/255.255.0.0
    - 192.168.1.0/255.255.255.0
    And I want the second virtual network adapter (NIC #2) to handle everything else (i.e. internet access). May I please have some assistance in setting this up on my Slackware VM? Additionally, I have searched for similar questions on SuperUser and Stack Overflow, but none of them seem to fit my situation (they all refer to OS X, or to Ubuntu via some UI-based tool). I'm trying to do this on Slackware specifically via the command line. Thanks!
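    A minimal command-line sketch, assuming the internal-network adapter shows up as eth0 and the NAT adapter as eth1 inside the guest (check ip addr, the ordering is not guaranteed): route the two internal ranges out of eth0 and leave the default route on the NAT side.

        # send the private ranges via the internal-network NIC
        ip route add 10.10.0.0/16 dev eth0
        ip route add 192.168.1.0/24 dev eth0
        # the default route should already point out the NAT NIC (eth1) via DHCP;
        # verify with: ip route show

    To make this survive a reboot on Slackware, the usual place is a couple of lines at the end of /etc/rc.d/rc.local.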

    Read the article

  • Slow Local Network, Windows 7, Snow Leopard, WiFi/Wired

    - by WerkkreW
    I am experiencing really poor local network performance in my home. I was recently using a Linksys WRT54G router with DD-WRT on it, and a couple of comparable Linksys-G PCI cards for connectivity, but decided to upgrade hoping it would help with my performance issues. The computers in my house are connected as follows:
    - Comcast Business Class commercial 25 Mbps / 10 Mbps (verified)
    - D-Link DGL-4500 Wireless N router
    - Windows 7 x64 - D-Link DWA-552 Wireless-N
    - Windows 7 x64 - D-Link DWA-552 Wireless-N
    - Mac Mini 10.6.2 - AirPort Extreme N
    - PlayStation 3, hard wired
    - Xbox 360, hard wired
    Essentially the problem is very specific. Web browsing and uploading/downloading files from the internet is fine, more than fine. But if I want to, say, stream a video from one of my Windows 7 computers to my PS3, or copy a large video file between either of the PCs or the Mac, I get a consistent 500-900 Kbps throughput at the high end. If I open my network browser, or try to browse my homegroup, the response time is horrible. Both of my Windows computers are showing strong wireless signals with a connection speed of 300 Mbps. I know I can never expect to achieve anything near those speeds, but 500 Kbps?
    Here is what I have tried so far:
    - Enabled N-only and N/G-only modes on the router
    - WPA2 with AES encryption
    - Disabled "Remote Differential Compression" in Windows 7
    - Disabled TCP "Auto-Tuning"
    - Used other software for file copies, such as TeraCopy
    I am at the end of my rope. Unfortunately I live in a 75-year-old home with plaster walls, so hard-wiring my entire house isn't really an option I can handle right now. Any ideas to help me get decent speed when transferring files across my network would be greatly appreciated.

    Read the article

  • Permission Denied for FTP User

    - by Alasdair
    I have an FTP user whose home directory is /root/ftpuser. This user can log in fine. The user is the owner of the directory and the directory is even set to 777 permissions. But the user can't upload anything; the client output is:
        Status: Connecting to xx.xxx.xxx.xx:21...
        Status: Connection established, waiting for welcome message...
        Response: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
        Response: 220-You are user number 2 of 50 allowed.
        Response: 220-Local time is now 05:12. Server port: 21.
        Response: 220-This is a private system - No anonymous login
        Response: 220-IPv6 connections are also welcome on this server.
        Response: 220 You will be disconnected after 15 minutes of inactivity.
        Command: USER ftpuser
        Response: 331 User ftpuser OK. Password required
        Command: PASS *********
        Response: 230 OK. Current restricted directory is /
        Command: OPTS UTF8 ON
        Response: 200 OK, UTF-8 enabled
        Status: Connected
        Status: Starting upload of test.html
        Command: CWD /
        Response: 550 Can't change directory to /: Permission denied
        Command: MKD /
        Response: 550 Can't create directory: Permission denied
        Command: CWD /
        Response: 550 Can't change directory to /: Permission denied
        Command: SIZE /btn.png
        Response: 550 Can't check for file existence
        Command: TYPE I
        Response: 200 TYPE is now 8-bit binary
        Command: PASV
        Response: 227 Entering Passive Mode (66,232,106,33,52,218)
        Command: STOR /test.html
        Response: 553 Can't open that file: Permission denied
        Error: Critical file transfer error
    It's a Linux CentOS 6 server. Any ideas?
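    Since the login succeeds but every operation inside the chroot is denied, one thing worth ruling out (a hedged suggestion, not a confirmed diagnosis) is the path leading to the home directory: the FTP session runs as ftpuser, and if /root is mode 700 and owned by root, ftpuser cannot traverse into /root/ftpuser no matter how open the final directory is.

        # check every component of the path, not just the last directory
        ls -ld / /root /root/ftpuser

    If /root turns out to be the blocker, moving the FTP home somewhere like /home/ftpuser (or /srv/ftp) is usually cleaner than loosening permissions on /root.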

    Read the article

  • web.config file changes guide

    - by Student
    Hi experts, how are you all? I am a student learning ASP.NET/C# with Visual Studio 2010 and SQL Server 2005. I have developed a website which has a database, teaching myself with help from the internet. The website is complete and works perfectly on my computer. I already have a hosting server and a registered domain name. The problem is that when I upload my website, it doesn't work there; the following error is displayed:
        Server Error in '/' Application.
        Configuration Error
        Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.
        Parser Error Message: Unrecognized attribute 'targetFramework'. Note that attribute names are case-sensitive.
        Source Error:
            Line 11: <system.web>
            Line 12:   <customErrors mode="Off" />
            Line 13:   <compilation debug="false" targetFramework="4.0"/>
            Line 14: </system.web>
            Line 15: </configuration>
        Source File: C:\Inetpub\vhosts\urdureport.com\httpdocs\web.config   Line: 13
        Version Information: Microsoft .NET Framework Version:2.0.50727.5472; ASP.NET Version:2.0.50727.5474
    I don't know what I should do to get it working on the hosting server. Please help me with this. Thank you in advance.
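    The error itself points at the mismatch: the hosting server is running the site under ASP.NET 2.0 (see the version line), and the targetFramework attribute only exists in the .NET 4.0 configuration schema. Two hedged options, depending on what the host supports: ask the host to switch the site's application pool to .NET 4.0, or, if the project can genuinely run on .NET 3.5/2.0, remove the attribute so the older schema parses, roughly:

        <system.web>
          <customErrors mode="Off" />
          <compilation debug="false" />
        </system.web>

    Simply deleting the attribute is only safe if the site does not actually use .NET 4.0 features; otherwise the framework version on the host has to change.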

    Read the article

  • Cloned Win7: Keyboard doesn't work

    - by Marc
    I cloned my old Windows7 hard disk to a shiny new Seagate Momentus XT 500GB using the free EaseUs Disk Copy tool on my laptop. After the clone process I used the Windows 7 installation disc to start the automatic startup repair. This took maybe 15 minutes and then my cloned disk was able to start. Now the cloned disk boots until the login screen and then I can't do anything because my keyboard just doesn't work. I tried connecting an external USB keyboard but this didn't help. The mouse is working fine. Note that the keyboard works fine in BIOS and in the Windows startup options menu. I booted into safe mode and again the keyboard is not working at all. I also noticed that the letters "Press CTRL+ALT+Delete to login" are now shown in italic font but they used to be shown non-italic on the original disk. I have now replaced the clone with the original disk again and from here everything works fine. Doesn't anybody have an idea how I can get my keyboard back?

    Read the article

  • How to get data out of a Maxtor Shared Storage II that fails to boot?

    - by Jonik
    I've got a Maxtor Shared Storage II (RAID1 mode) which has developed some hardware failure, apparently: it fails to boot properly and is unreachable via network. When powering it on, it keeps making clunking/chirping disk noise and then sort of resets itself (with a flash of orange light in the usually-green LEDs); it then repeats this as if stuck in a loop. In fact, even the power button does nothing now – the only way I can affect the device at all is to plug in or pull out the power cord! (To be clear, I've come to regard this piece of garbage (which cost about 460 €) as my worst tech purchase ever. Even before this failure I had encountered many annoyances about the drive: 1) the software to manage it is rather crappy; 2) it is way noisier that what this type of device should be; 3) when your Mac comes out of sleep, Maxtor's "EasyManage" cannot re-mount the drive automatically.) Anyway, the question at hand is how to get my data out of it? As a very concrete first step, is there a way to open this thing without breaking the plastic casing into pieces? It is far from obvious to me how to get beyond this stage; it opens a little from one end but not from the other. If I somehow got the disks out, I could try mounting the disk(s) on one of the Macs or Linux boxes I have available (although I don't know yet if I'd need some adapters for that). (NB: for the purposes of this question, never mind any warranty or replacement issues – that's secondary to recovering the data.)

    Read the article

  • Java Deployment and Configuration (1.6.0_21)

    - by user125137
    Software: Java Runtime Environment 1.6.0_21. OS: Windows XP Professional 32-bit, SP3.
    Situation: a new piece of web-based software is being deployed this week, and prior to this all the company desktops need to be set up to meet its requirements. One of these requirements is JRE 1.6.0_21. I have successfully scripted the removal of all other Java versions and the installation of the required version, however I cannot get it configured properly. One of the requirements is that the Java console be set to disabled - if it is not, it can cause an issue with a particular function. I have pushed out a deployment.config and deployment.properties, but the console just will not disable itself. I know the config is being read correctly because the update tab is being correctly disabled and removed.
    deployment.config:
        deployment.system.config=file\:C\:/WINDOWS/Sun/Java/Deployment/deployment.properties
        deployment.system.config.mandatory=true
    deployment.properties:
        #deployment.properties
        #Fri Jun 15 09:34:31 EST 2012
        deployment.version=6.0
        deployment.console.startup.mode=DISABLE
        deployment.javaws.autodownload=NEVER
        deployment.javaws.autodownload.locked=
    There is no change if I set the console to ENABLE either - it remains on the default of hidden. I'm sure I can disable the console with a registry change of some form, but my preference is to have it done via the deployment files, as that gives the option of centralising the properties file on a network share if we wish. If anyone has any suggestions it would be appreciated.
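    One hedged thing to try, mirroring how the autodownload key is already handled in the file above: system-level properties only override a user's own setting when they are explicitly locked, so if users already have a console preference in their per-user deployment.properties, the system value may be ignored until a .locked marker is added for that key. A sketch of the properties file with that line included:

        #deployment.properties
        deployment.version=6.0
        deployment.console.startup.mode=DISABLE
        deployment.console.startup.mode.locked
        deployment.javaws.autodownload=NEVER
        deployment.javaws.autodownload.locked=

    If that still has no effect, checking a test user's own per-user deployment.properties for an overriding deployment.console.startup.mode entry would be the next step.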

    Read the article

  • Internet Explorer / Windows 7 does not want to show HTML file from local network drive

    - by Jaanus
    Setup: I have Windows 7 running inside VirtualBox on a Mac OS X host. I have a shared drive with some HTML files that I am mounting as a local drive W: in Windows, from the VirtualBox server \\VBOXSVR. I want to look at them with a browser in Windows. Chrome in Windows 7 opens and shows those HTML files just fine (file:///W:/welcome.html). But Internet Explorer does not, and shows this error instead of the files:
        Internet Explorer cannot display the web page
        What you can try: [button: Diagnose Connection Problems]
        More information
        This problem can be caused by a variety of issues, including:
        - Internet connectivity has been lost.
        - The website is temporarily unavailable.
        - The Domain Name Server (DNS) is not reachable.
        - The Domain Name Server (DNS) does not have a listing for the website's domain.
        - If this is an HTTPS (secure) address, click Tools, click Internet Options, click Advanced, and check to be sure the SSL and TLS protocols are enabled under the security section.
    For the internet zone in the status bar, it shows: Internet | Protected Mode: On
    IE settings are a mystery to me, and I could possibly get it to work by tweaking IE settings, but I don't know which ones. How do I make IE show the same files that Chrome is happy to show? (Chrome showing them means that the files themselves are fine; there is something about the setup that just makes IE be a diva.)

    Read the article

  • Can't connect to vsftpd on Ubuntu 10.04

    - by Johnny
    I started vsftpd on Ubuntu 10.04, but can't connect to it. The FTP client reports:
        Status: Connecting to 124.205.xx.xx:21...
        Error:  Connection timed out
        Error:  Could not connect to server
    I've checked the server status, and vsftpd is running:
        $ ps ax | grep vsftpd
        23646 ?        Ss     0:00 /usr/sbin/vsftpd
        23650 pts/1    S+     0:00 grep --color=auto vsftpd
    Port 21 is listening as well:
        $ netstat -tlnp | grep 21
        (No info could be read for "-p": geteuid()=1000 but you should be root.)
        tcp   0   0 0.0.0.0:21   0.0.0.0:*   LISTEN   -
    I can connect to localhost:
        $ ftp localhost
        Connected to localhost.
        220 (vsFTPd 2.2.2)
        Name (localhost:jlee):
        331 Please specify the password.
        Password:
        230 Login successful.
        Remote system type is UNIX.
        Using binary mode to transfer files.
        ftp>
    Here is the iptables output:
        $ sudo iptables -vL
        Chain INPUT (policy ACCEPT 191 packets, 144K bytes)
         pkts bytes target prot opt in out source destination
        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target prot opt in out source destination
        Chain OUTPUT (policy ACCEPT 124 packets, 28502 bytes)
         pkts bytes target prot opt in out source destination
    What's the problem here?
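    Given that the daemon listens on 0.0.0.0:21 and the local iptables chains are empty with ACCEPT policies, the host itself does not appear to be refusing anything, so the timeout is more likely to happen before the packets reach this box (a NAT router in front of it, or the ISP filtering port 21). A hedged way to confirm that rather than a fix (adjust the interface name to match the box):

        # watch for incoming connection attempts while the remote client retries;
        # if no SYN packets to port 21 ever appear, the block is upstream of this host
        sudo tcpdump -n -i eth0 'tcp port 21'

    If SYNs do arrive but get no reply, then the problem is local after all and vsftpd's listen settings would be the next place to look.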

    Read the article

  • Varnish does not start properly (crashes after startup) with no error messages

    - by Matthew Savage
    I am running Varnish (2.0.4 from the Ubuntu unstable apt repository, though I have also used the standard repository) in a test environment (virtual machines) on Ubuntu 9.10, soon to be 10.04. When I have a working configuration and the server starts successfully, everything seems fine; however if, for whatever reason, I stop and then restart the varnish daemon, it doesn't always start up properly, and there are no errors going into syslog or messages to indicate what might be wrong. If I run varnish in debug mode (-d) and issue start when prompted, then 7 times out of 10 it will run, but occasionally it will just shut down 'silently'. My startup command is (the $1 allows me to pass -d to the script this lives in):
        varnishd -a :80 $1 \
          -T 127.0.0.1:6082 \
          -s malloc,1GB \
          -f /home/deploy/mysite.vcl \
          -u deploy \
          -g deploy \
          -p obj_workspace=4096 \
          -p sess_workspace=262144 \
          -p listen_depth=2048 \
          -p overflow_max=2000 \
          -p ping_interval=2 \
          -p log_hashstring=off \
          -h classic,5000009 \
          -p thread_pool_max=1000 \
          -p lru_interval=60 \
          -p esi_syntax=0x00000003 \
          -p sess_timeout=10 \
          -p thread_pools=1 \
          -p thread_pool_min=100 \
          -p shm_workspace=32768 \
          -p thread_pool_add_delay=1
    and the VCL looks like this:
        # nginx/passenger server, HTTP:81
        backend default {
          .host = "127.0.0.1";
          .port = "81";
        }
        sub vcl_recv {
          # Don't cache the /useradmin or /admin path
          if (req.url ~ "^/(useradmin|admin|session|sessions|login|members|logout|forgot_password)") {
            pipe;
          }
          # If cache is 'regenerating' then allow for old cache to be served
          set req.grace = 2m;
          # Forward to cache lookup
          lookup;
        }
        # This should be obvious
        sub vcl_hit {
          deliver;
        }
        sub vcl_fetch {
          # See link #16, allow for old cache serving
          set obj.grace = 2m;
          if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
            deliver;
          }
          remove obj.http.Set-Cookie;
          remove obj.http.Etag;
          set obj.http.Cache-Control = "no-cache";
          set obj.ttl = 7d;
          deliver;
        }
    Any suggestions would be greatly appreciated; this is driving me absolutely crazy, especially because it's such an inconsistent behaviour.

    Read the article

  • Family server setup [closed]

    - by Manny
    Hi all, I really hope some of you can give me some direction. I have set up a Linux server at home, and through Samba I can access files from different computers in my home. I would like to use this server as a file server for my family (brothers, sisters and parents who all live in their own homes). I really like the way it is set up right now with user and permission controls, but I've read that it is a bad idea to open up the Samba port to the world. The requirements are simple:
    1) it should be easy to access, using standard web browsers or by mounting the drive (we shouldn't have to use any VPN setup or use PuTTY etc.)
    2) it should be somewhat secure. We just want to share family pictures instead of putting them on Facebook or Picasa or another web server, nothing top secret.
    Here is what I've looked into:
    1) WebDAV. It seems decent, but it seems like Windows 7 doesn't like it very much, even with digest-mode authentication. User controls and permissions are not as flexible as Samba (or at least to my knowledge). I really like the user and group permissions in Samba, but I could live with WebDAV if it worked seamlessly with Windows; it should just work, shouldn't it?
    2) I read somewhere to stay away from FTP as it is outdated and that there are newer and better internet file-server setups. Was that a reference to WebDAV? I am so confused, please help... Manny
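    Since WebDAV keeps coming up, here is a minimal sketch of what an Apache 2.2 share with per-user digest authentication could look like, purely as an illustration of the moving parts (paths, realm name and file names are placeholders; mod_dav, mod_dav_fs and mod_auth_digest need to be enabled, and running it over HTTPS is strongly advisable):

        # /etc/apache2/conf.d/family-dav.conf
        DavLockDB /var/lock/apache2/DavLock
        Alias /family /srv/family
        <Directory /srv/family>
            Dav On
            AuthType Digest
            AuthName "family"
            AuthDigestProvider file
            AuthUserFile /etc/apache2/family.digest
            Require valid-user
        </Directory>

    Users are added with htdigest (e.g. htdigest -c /etc/apache2/family.digest family alice for the first one), which keeps the per-person login feel of the Samba setup, even if the permission model is coarser.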

    Read the article
