Search Results

Search found 16809 results on 673 pages for 'nothing 2 lose'.

  • How can I make AdobePS recognize my printer in Win98se?

    - by Jeff
    I am trying to use the Adobe product "PostScript Printer Driver AdobePS 4.2.6 for Windows 95 and Windows 98" to connect a new HP laser printer (CP1025nw) via USB to a machine running Win98SE, using the PPD file for "HP Color LaserJet PS". The Adobe utility installs the printer and it shows up in the printer list, but it doesn't print. I sometimes get an error message that an error occurred writing to USB001. Evidently, AdobePS cannot "see" the printer.

    I suspect there is a problem with the printer port, but nothing I pick works. I have two virtual printer ports, USB001 and USB002, which were created during the installation of an HP inkjet printer. The inkjet works on either of them, but the laser printer works on neither. When I connect the LaserJet, the Windows New Hardware Wizard activates, but I don't see how to point it at the AdobePS PostScript driver. HP has no support whatsoever for Win98, so there's no hope of getting help from them. The printer works great with the several WinXP computers connected to it through either the network connection or USB.

    So, how do I convince Win98SE to use the Adobe PostScript printer driver? Is this printer even a PostScript printer? Is there a better way to get Win98SE to talk to this printer? Any suggestions?

  • Is my "Generic" USB Flash Drive broken?

    - by Jesse J.
    So here is the situation. I consider myself technologically knowledgeable about many things (I love to code, whether it's websites, C#, C++ and so on). My two toddlers (my wife, actually) bought me a generic 128 GB USB flash drive for Father's Day. I thought it was awesome at first... wrong! Nothing but problems with it: 3-4 MB/s maximum transfer speed, which I can bear with. But when I went to reformat my computer, I transferred the save files from my games over to the stick, and the stick managed to become corrupted. A simple format wouldn't fix it either; it's screwed.

    I manually changed the drive letter to X for troubleshooting and ran "chkdsk X: /X /F /R" with administrator rights. After a long session it finished with no errors (I had to delete the log), and I finally recovered the files. However, when I now transfer to or from the stick, it copies a couple of kilobytes and then freezes. Windows 7 shows:

        Name:
        From: Folder (X:\File\Location)
        To: Folder (C:\Users\Username\Desktop)
        Items Remaining: 0 (0 bytes)
        Speed: 0 bytes/second

    It does this forever... and ever... and ever. It transferred three files at least, and then stopped. This is a new USB stick bought from a "high" reputation company on eBay. Is the USB stick screwed?

  • Windows Server 2003 (w/Exchange) move to new machine

    - by James Booker
    I have an ageing domain controller (the only one on a 10-PC network) which needs rebooting often. I have a Dell PowerEdge 2850 server doing nothing, so I'd like to move the DC to that, but here's the catch: I don't have the Win2k Server Std install media any more, as it has been lost. I purchased "EaseUS Todo Backup Advanced Server", which claims to be able to recover to dissimilar metal, but it's not quite working (although I don't think it's the product's fault). I know the server and the PERC RAID card are good, because I installed Ubuntu on the logical drive (4 x 72 GB disks in RAID 5) with no problems.

    I've booted from the EaseUS Todo Backup CD (which is WinPE based) and recovered to the logical disk on the RAID, after installing the RAID driver inside the WinPE environment from a NAS drive. The problem is that when I boot the server, I get the OS selection menu, but any option results in a blank screen with no errors. I figure this is probably because the driver wasn't installed on the old machine, which is IDE based (I know, I know!) and doesn't have a RAID controller.

    I've booted from the CD and copied the mraid35x.sys file to the c:\windows\system32\drivers folder on the recovered system, but it makes no difference. I made a boot.ini with rdisks 0-10 defined, and booting from each of these resulted in a file error (i.e. "this isn't a real disk"); the only entry that gets any response (the blank screen) is multi(0)disk(0)rdisk(0)partition(1), which just gives me the blank black screen and no disk activity.

    Is there any way I can force the driver to be installed on the source system (so I can do a full backup again)? I've tried right-clicking the oemsetup.inf and clicking Install, but it didn't actually do anything. I attempted to force it with the Add New Hardware wizard and the "Have Disk" option, but it still gave me no hardware to select. Also, I've got an identical machine running WinXP which uses the PERC driver successfully (which was obviously done at install time), and its boot.ini settings are the same: multi(0)disk(0)rdisk(0)partition(1). Any ideas would be appreciated.
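    For reference, a minimal sketch of the kind of boot.ini described above, with entries that differ only in the rdisk() value (the description strings and switches are placeholders, not taken from the actual file):

        [boot loader]
        timeout=5
        default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
        [operating systems]
        multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server (rdisk 0)" /fastdetect
        multi(0)disk(0)rdisk(1)partition(1)\WINDOWS="Windows Server (rdisk 1)" /fastdetect
        multi(0)disk(0)rdisk(2)partition(1)\WINDOWS="Windows Server (rdisk 2)" /fastdetect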

  • FTP Server upload and filesystem questions

    - by Alex
    I'm a photographer who mainly does event photography. A while ago I bought myself a Nikon WT-4 wireless transmitter, a small device which connects via USB to my Nikon D700 DSLR and then establishes a WiFi connection to an existing WLAN. It can then upload any pictures I take via FTP to an FTP server somewhere on the network. On my laptop I have a piece of software which checks a given folder on the disk regularly; it is smart enough to look at the modified-file timestamp, and if that timestamp is less than 10 seconds old it skips the file in that iteration of the import scan.

    The problem I've discovered seems to be inherent to the FTP protocol, as I have the same issue with the Windows 7 built-in IIS server as I do with FileZilla FTP Server. When the transmitter starts to upload a file, the FTP server creates a small 300-500 KB file with the correct filename on the disk, but then does nothing with that file until it has completely received the upload. So it seems to create this small dummy file, buffer the remainder of the FTP upload until it is finished, and then dump the rest of the data into the dummy file, making it the correct size. The trouble is that these uploads take about 15-30 seconds depending on reception, but since the folder-watch tool already tries to import any file older than 10 seconds, it keeps trying to import the small dummy files, which obviously fails because they're not complete yet.

    Is there any way to disable this behaviour? Ideally I would like a file to show up only once it has been completely uploaded. Or perhaps someone knows another FTP server application (it has to run on Win7) which does not show this behaviour?
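    If the folder-watch tool can be fronted by a small script, a commonly used workaround is to judge completeness by size stability rather than by modification time. A minimal sketch of that idea, assuming a placeholder watch folder and a 10-second stability window (the hand-off to the real import tool is left as a print):

        import os
        import time

        WATCH_DIR = r"C:\ftp\incoming"   # placeholder path, not the real folder
        STABLE_SECONDS = 10              # how long the size must stay unchanged

        def upload_finished(path):
            """Treat a file as complete only once its size stops growing."""
            size_before = os.path.getsize(path)
            time.sleep(STABLE_SECONDS)
            return os.path.getsize(path) == size_before

        for name in os.listdir(WATCH_DIR):
            path = os.path.join(WATCH_DIR, name)
            if os.path.isfile(path) and upload_finished(path):
                print("safe to import:", path)   # hand off to the import tool here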

  • Sendmail slow to accept emails

    - by Rich
    I have a PHP web app which uses SMTP to sendmail on localhost to send email. I would like sendmail to accept the mail request immediately and queue it for later sending, as I don't want user-facing request threads blocked on emails. Sendmail is installed with the default settings on RHEL web servers.

    Sometimes sendmail blocks for a long time after the MAIL command is sent, taking 60 or 90 seconds to accept the mail. The time taken is usually very close to 60 or 90 seconds, which makes me think this is some kind of timeout. I have looked in the sendmail logs; there are plenty of "deferred" emails, but nothing which looks responsible for this delay.

    How can I diagnose what is slowing down sendmail? How can I configure sendmail to always accept the mail immediately and queue it for later sending?

    Update: I'm not sure, but it looks like this might be linked to aol.com addresses. I strongly suspect that sendmail is doing some kind of blocking recipient-address verification at the accept-email-for-sending stage. How can I disable that, so that sendmail doesn't block my UI threads?

    Update 2: This only seems to happen at busy times. Perhaps I am running out of sendmail threads or something? How can I check that?
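    A hedged sketch of the knob usually pointed at for this, assuming the delay really is sendmail doing synchronous DNS/recipient checks while the SMTP session is open: switching the delivery mode in sendmail.mc so that messages are queued at accept time and (in deferred mode) even DNS and map lookups are postponed until the queue runner. Verify against the RHEL sendmail docs before relying on it, since deferred mode also hides genuinely bad addresses until the queue run:

        dnl accept and queue at SMTP time; with `deferred' even DNS/map lookups
        dnl are postponed until the queue runner processes the message
        define(`confDELIVERY_MODE', `deferred')dnl

        # then rebuild sendmail.cf and restart (RHEL paths assumed):
        m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
        service sendmail restart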

  • Why does my ftp(e)s server fail about half of the time?

    - by user1092608
    I have this discussion at work regarding our FTP server running via vsftpd. Initially we opted to serve FTPES instead of SFTP, because this seemed the most flexible and straightforward way for our server to offer secure file transmission. Since then, our FTP server has become a source of issues for our end users: about half of the time, users complain about FTP connections not working. I must say I have tested our FTP through different infrastructures (in the field, at random times and at random places) and indeed, behind some configurations (no idea how they are set up, because of the "field" testing), I receive errors. Some of them are:

        Error: Failed to retrieve directory listing (FileZilla)

    Behind my basic home configuration, on the other hand, everything seems to run fine. I (think I) did all the basic configuration checks (passive mode? firewall for all ports? ...) and can't seem to find the source. Being a bunch of techies at our small office, yet knowing nothing about infrastructure, some have started suggesting that the FTPS protocol itself could be the source of the issues ("No, I only knew SFTP so far", "FTPS is not widespread"). I strongly doubt this hypothesis, since reading around on the web and asking questions on Server Fault, everyone seems to deny it.

    As I would like to avoid reconfiguring, since that involves messing around in our SSH service, our virtual user setup and the FTP service, I would need some advice on:

    1) What could potentially be the general cause?
    2) Do you have some general tips?
    3) Would you mind having a look at my configuration file?

        ----- General Settings -----
        write_enable=YES
        dirmessage_enable=YES
        nopriv_user=ftpsecure
        ftpd_banner="Welcome to XXXX FTP!"
        hide_ids=YES
        hide_file=.*
        max_per_ip=10
        max_clients=10
        local_enable=YES
        local_umask=022
        chroot_local_user=YES
        secure_chroot_dir=/usr/share/empty
        userlist_enable=NO
        userlist_deny=YES
        userlist_file=/etc/vsftp_deny_users
        guest_enable=YES
        guest_username=ftpvirtual
        virtual_use_local_privs=YES
        user_sub_token=$USER
        local_root=/srv/ftp/ftpvirtual/$USER
        anonymous_enable=NO
        syslog_enable=NO
        xferlog_enable=YES
        xferlog_file=/var/log/vsftpd_xfer.log
        connect_from_port_20=YES
        pam_service_name=vsftpd
        listen=YES
        listen_port=21
        pasv_enable=YES
        pasv_min_port=30000
        pasv_max_port=30030
        pasv_address=foo
        ssl_enable=YES
        rsa_cert_file=/etc/vsftpd.pem
        rsa_private_key_file=/etc/vsftpd.pem
        force_local_data_ssl=YES
        force_local_logins_ssl=YES
        ssl_tlsv1=YES
        ssl_sslv2=YES
        ssl_sslv3=YES
        ssl_ciphers=HIGH
        anon_mkdir_write_enable=NO
        anon_root=/srv/ftp
        anon_upload_enable=NO
        idle_session_timeout=900
        log_ftp_protocol=NO
        dsa_cert_file=/etc/vsftpd.pem

    Thanks

  • Server 2012 - transparent SMB failover without shared disks, possible?

    - by TomTom
    Here is the scenario: there is a small set of data (around 200 GB) that I HAVE to keep available. These are basically shared VHD images that serve as master images for a lot of our VMs, which then run on differencing disks based on them. The whole set is "mostly read only". In more detail: a file that IS there and IS used will NEVER change. I may delete files (when absolutely not in use) and add new files, but once a file is there it is set read-only and stays that way until it is retired. Obviously, I need as much uptime as possible.

    So far we have handled this by keeping the directory local on every Hyper-V server. Now I am thinking of moving it into our storage fabric. Because of the "it HAS to be there" requirement, I pretty much want a shared-nothing architecture. DFS would be perfect for this: a file never changes, so replication would work nicely. Folders could be replicated to a number of servers, and everything would reference them from there. Now that Hyper-V supports SMB, it could be a good idea to isolate these on a number of servers; we are trying to move towards a scenario where storage is more centralised.

    Server 2012 supports transparent-failover ("always on") shares, but it seems this only works with a clustered disk behind them. Is there any way around this for read-only file stores? All the documentation points to something like a shared JBOD, but that would leave me open to file-system corruption. I really plan to go quite separate here, vertically: two servers, both with SSDs dedicated to this, both with their own separate 2000 W UPS, both with enough bandwidth to handle everything thrown at them (note to everyone thinking "this is 10G": that would be SLOW and EXPENSIVE compared to a nice InfiniBand backbone). The real crux is that this is obviously an edge case, as the files are read-only once in use.

  • Computer freezes after watching Youtube videos

    - by Roberts
    I had Windows 7 installed all September, but I installed Windows XP Professional again because my computer couldn't handle the new OS. After the first boot I tried to install the newest Flash Player (from the Adobe website), but the setup failed. I had my old setup file on a USB drive and that one worked; I don't know whether that is important or not.

    I watch YouTube videos in my free time (almost every hour). After a few days the computer started to freeze when I open a page with a video or close a page with a video, but not while I watch a video. No BSODs, nothing in Event Viewer. I use Firefox only. When the computer freezes, the sound doesn't: if iTunes is playing a radio station, or another video is playing in the background, the sound keeps going. In the last few days the mouse doesn't freeze straight away either, which is a strange symptom; if I click a few times, the cursor will then actually freeze.

    I just want to know where this problem comes from (hardware, i.e. the graphics card or the old motherboard, or just some glitch in the setups). If it's not the graphics card then I will be happy. The graphics card is an ATI Radeon HD 4650, brand new, with Catalyst 11.8 installed.

    Things I have tried:

    - Installed the newest Flash Player after a week (the setup didn't fail this time)
    - Installed the latest video drivers
    - Deleted cookies
    - Defragmented the hard drive
    - Used TuneUp Utilities for computer cleanup
    - Installed the latest Mozilla Firefox
    - Cleaned the PC
    - Changed the CPU fan speed almost to max (just to be sure)

    Things I haven't tried yet:

    - Playing videos in other browsers

    What can I do now?

  • Diff BIOS - corrupt video driver

    - by sfonck
    Hi, I'm using a Dell Precision M90 laptop which has an NVIDIA Quadro FX 2500M graphics card and is running Windows XP. The laptop had been running fine, but a few weeks ago the screen went "white". After restarting the computer, the BIOS and startup screens show weird green dots and stripes, and a normal startup only shows a black screen; only VGA mode manages to display something. I tried removing and reinstalling the correct drivers downloaded from Dell's website, with no solution. I gave up and reinstalled XP, and everything worked perfectly again.

    Two weeks later the white screen came back. I tried everything again (including flashing a new BIOS) and nothing worked. I reinstalled XP, everything worked again, so this time I made a DriveSnapshot image of the partition. Today, the white screen again. OK, no problem, I thought: all I need to do is restore the DriveSnapshot backup. A few minutes later the backup was restored... but guess what: the video driver does not work correctly.

    Since DriveSnapshot restored the complete partition exactly as it was when everything worked perfectly, this would mean my driver problems are due to "settings" in the BIOS or on the graphics card itself, and that these "settings" can be overridden by doing a fresh XP install. I'm out of options; can somebody help me find a solution for this problem? Is there some way to back up and restore a BIOS after seeing such problems? Is there some way to find out what is causing this, like a BIOS diff utility? Thanks!

  • Apache intermittently aborting requests

    - by Adam Phillips
    I have just been dealing with a problem whereby HTTP requests are being aborted, seemingly at random. On any particular page of the website, when you opened the page, a number of the assets (img, css, etc.) failed to load. If you refreshed, the page might work fine, the same set of assets might fail to load, or different assets might fail to load. The Net tab in Firefox was returning "Aborted" in the HTTP status code column for the failed assets, even though, in the case of images, the image previews were still working.

    There was nothing in any of the Apache logs about the requests that failed. However, since it seemed to point to an Apache issue, we restarted Apache. The first time we tried, it made no difference, but about 10 minutes later, in the absence of a better solution, we tried again. Bizarrely, the problem disappeared immediately.

    So now the site seems to be running fine again, but it's rather unsettling, both because of the intermittent nature of the problem and because of the lack of an explanation for its resolution. Has anyone seen anything like this before, and if so, did you find out the reason behind it? Many thanks.
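    Not a fix, but if it comes back, a hedged diagnostic sketch so the next occurrence leaves more evidence behind: raise the log verbosity and expose the worker scoreboard via mod_status (directives shown in Apache 2.4 syntax; on 2.2 use Order/Allow instead of Require, and make sure mod_status is actually loaded):

        # more detail in error_log while the fault is being chased
        LogLevel debug
        # per-request detail on the status page, restricted to the local machine
        ExtendedStatus On
        <Location "/server-status">
            SetHandler server-status
            Require local
        </Location>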

  • Reaching a constant video bitrate using ffmpeg

    - by sebastian
    We use ffmpeg and a transcoding script for transcoding, and we want to make some batch files we can reuse. For example, I use a parameter called video_kbit: if I pass in 30000, the output should reach 30 Mbit/s, and if I pass in 6000 it should reach 6 Mbit/s, so that one script covers every video bitrate I want.

    As my settings are now, I only reach 18.1 Mbit/s. Only when I use 15000 as the parameter do I actually get a constant video bitrate of 15 Mbit/s. If I use 8000 as the parameter, I get 10.1 Mbit/s as the result. So below 15000 I get a higher bitrate than I want, and above 15000 I get a lower one.

    My current settings are:

        ffmpeg -threads "4" -i "$2" -f mp4 -c:v libx264 -crf 1 \
            -bufsize 30000k -maxrate ${FC_PARAM_video_kbit}k \
            -acodec libfaac -ac 2 -ab ${FC_PARAM_audio_kbit}k -ar 44100 \
            -pix_fmt yuv420p -vf scale=${FC_PARAM_width}:${FC_PARAM_height} -y "$3"

    And I am using these parameters:

        FC_PARAM_video_kbit = 30000
        FC_PARAM_audio_kbit = 192
        FC_PARAM_width = 1920
        FC_PARAM_height = 1080

    I have tried using a higher bufsize and using profile:v and level settings, but nothing got me anywhere near a constant video bitrate of 30 Mbit/s. Do you guys have any ideas or suggestions for a better way to reach my goal?
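    One thing worth testing (an assumption about the cause, not a confirmed fix): -crf 1 asks libx264 for near-lossless constant quality, and -maxrate only caps it, so the output bitrate tracks the content rather than the parameter. A sketch of the same call switched to average-bitrate mode, keeping the existing parameter substitution:

        ffmpeg -threads "4" -i "$2" -f mp4 -c:v libx264 \
            -b:v ${FC_PARAM_video_kbit}k -maxrate ${FC_PARAM_video_kbit}k \
            -bufsize $((2 * FC_PARAM_video_kbit))k \
            -acodec libfaac -ac 2 -ab ${FC_PARAM_audio_kbit}k -ar 44100 \
            -pix_fmt yuv420p -vf scale=${FC_PARAM_width}:${FC_PARAM_height} -y "$3"

    Two-pass encoding (-pass 1 / -pass 2) would track the target even more closely, at the cost of a second run.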

  • Is this DVD drive broken? Brand new, I need help convincing Dell

    - by acidzombie24
    I'm asking because I know Dell is going to give me a problem. How do I know if the DVD drive in my laptop is broken? I burnt 4 dual-layer discs and they ALL failed. I called Dell and they suggested Roxio; I used it and burnt one disc without error and a second disc with an error. With both applications there were no problems during the burning process; the burns only failed in the verification step. Some of these bad discs don't work on other PCs, and one locks up Windows when I click a specific file. Does that sound like a broken burner to you guys?

    When I called Dell, they told me that since the drive can read discs properly 100% of the time and the software doesn't fail during the burning process, it's not a broken drive. They forwarded me to software support, who demand a fee (I think $100) to help me fix my software. I'm annoyed, because I don't want to be on the phone just so they can watch me burn a DVD; and since I did burn one disc correctly, I don't want it to happen to burn correctly again and have them say they solved my problem (doing nothing), charge me, and refuse a refund.

    The errors I got were:

    1) "The request could not be performed because of an I/O device error"
    2) Windows locking up when opening one specific file
    3) "Cannot copy : Data error (CRC)"

    Note: the files that cause the problems are different on every disc.

  • Hibernate & Sleep broken after IE 9 RTM installation in Windows 7 x64.

    - by AKa
    I have a question about hibernation. I installed Internet Explorer 9 RTM x64 on my Windows 7 x64 SP1 desktop machine. Since then, the computer doesn't enter hibernation or (hybrid) sleep properly. After I hibernate the computer, the monitor goes blank and the keyboard and mouse become inactive, but the machine is still running, and there is no way to switch it off other than the power button. The next start then treats this as an improper shutdown, with a log entry saying "The previous system shutdown at xx:xx:xx on xx.xx.xxxx was unexpected" and a menu offering Safe Mode.

    I'm clearly not sure whether it has anything to do with the Internet Explorer installation, but I'm absolutely certain that before this I never had any problems with hibernation or (hybrid) sleep. There is nothing suspicious in the Windows logs. I have switched hibernation off and on, installed new drivers for the mainboard, graphics and network card, and checked the hard disk; nothing helped. This is a real pity, because I don't like switching the computer completely off, as it takes longer to boot. Any suggestions?

  • Cisco AnyConnect VPN client - prevent connecting as work network

    - by Opmet
    From Windows 7 I'm using "Cisco AnyConnect Secure Mobility Client 3.0" to connect to our corporate network. Every time I establish the VPN connection, Windows sets the connection type to "work network". I don't want this, so I go to "Network and Sharing Center" and manually change it to "public network", but I have to repeat this for every new VPN connection.

    Is there any way to make Windows remember / persist this configuration? Can it be configured in the VPN client? Do our IT admins need to change something at the server end?

    Motivation: a "work network" by default uses firewall settings that allow things like network discovery and file shares, but I only need Remote Desktop (mstsc).

    Additional info: our IT admins claimed this is Windows' default behaviour and there is nothing we can do about it; Windows would always classify a VPN connection as a "work network". Based on that statement I assume this is a general issue, and so I went ahead and posted here (on superuser.com).

  • Windows preventing running of Telnet client

    - by palswim
    At first I had issues because Windows 7 doesn't install the Telnet client by default (SuperUser has a thread on that too). So, after installing it (and restarting, as Windows asked, though that's completely unnecessary), I opened a command prompt and went to run my new Telnet program. I enter telnet and receive:

        C:\Users\[USER]>telnet
        'telnet' is not recognized as an internal or external command,
        operable program or batch file.

    "That's odd," I think to myself. In Windows Explorer I navigate to \Windows\System32 and see telnet.exe sitting in that folder. If I double-click the executable file, the Telnet command prompt opens without a problem. So I return to my command prompt and enter:

        C:\Users\[USER]>\Windows\System32\telnet.exe
        '\Windows\System32\telnet.exe' is not recognized as an internal or external command,
        operable program or batch file.

    And then (grep comes from Cygwin):

        C:\Users\ryan\Desktop>dir \Windows\System32 | grep telnet

    Nothing. I've disabled UAC and have no idea why my command prompt is lying to me. Has anyone experienced something similar? To recap: in Windows 7, I have installed Telnet and can see it in my System32 folder, but I cannot run it from a command prompt.
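    One possibility worth ruling out first (an assumption, since the post doesn't say how the prompt was started): if that command prompt is a 32-bit process on 64-bit Windows, WOW64 file-system redirection silently maps System32 to SysWOW64, which has no telnet.exe. A quick hedged check from the same prompt:

        REM prints AMD64 only when this prompt is a 32-bit process on x64 Windows
        echo %PROCESSOR_ARCHITEW6432%
        REM the Sysnative alias bypasses redirection for 32-bit processes
        REM (it does not exist when run from a 64-bit prompt)
        %windir%\Sysnative\telnet.exe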

  • How can one restrict network activity to only the VPN on a Mac and prevent unsecured internet activity?

    - by John
    I'm using Mac OS and connect to a VPN to hide my location and IP (I have the "Send all traffic over VPN connection" box checked in the Network system preference). I wish to remain anonymous and do not want to reveal my actual IP, hence the VPN. I have a preference pane called pearportVPN that automatically connects me to my VPN when I get online. The problem is that when I connect to the internet using AirPort (or other means), I have a few seconds of unsecured internet connection before my Mac logs onto the VPN. It is therefore only a matter of time before I inadvertently expose my real IP address in the few seconds between connecting to the internet and logging onto the VPN.

    Is there any way I can block all traffic to and from my Mac that does not go through the VPN, so that nothing can connect unless I'm logged onto the VPN? I suspect I would need a third-party app that blocks all traffic except to the VPN server address, perhaps Intego VirusBarrier X6 or Little Snitch, but I'm afraid I'm not sure which is right or how to configure them. Any help would be much appreciated. Thanks!

  • Google MAIL not arriving - relay not allowed

    - by renevdkooi
    I have a server running sendmail, hosting my domain mind-zone.nl, and I changed the MX records to point to this server. When I use Hotmail or any other client, the email arrives and everything is fine. Only mail from Gmail's servers is bounced; Gmail returns "relay denied". I have set up all the virtual host settings etc., and from the command line I can send mail as well. Hotmail works; just not Gmail.

    The strange thing is what Gmail returns. Look at the lower "Received:" headers: they show an IP address which is not mine and has absolutely nothing to do with my domain, while an nslookup (even when switching to Google's DNS servers) states that the IP address for my domain correctly points at my server.

        Technical details of permanent failure:
        Google tried to deliver your message, but it was rejected by the recipient domain.
        We recommend contacting the other email provider for further information about the
        cause of this error. The error that the other server returned was:
        554 554 5.7.1: Relay access denied (state 14).

        ----- Original message -----
        MIME-Version: 1.0
        Received: by 10.14.37.138 with SMTP id y10mr3421504eea.43.1297665573901;
            Sun, 13 Feb 2011 22:39:33 -0800 (PST)
        Received: by 10.14.29.75 with HTTP; Sun, 13 Feb 2011 22:39:33 -0800 (PST)

  • PHP extensions & Apache mods gone/not working after server restart?

    - by user1782359
    I was wondering if anyone has ever come across this before, as I'm pretty stumped to be honest, and my server admin knowledge isn't particularly good, so I'm not sure what could even be wrong, let alone how to fix it.

    Basically, last Thursday everything was fine on our server. I came in on Friday and it was a mess: PHP extensions were missing or not working, and Apache modules were gone (e.g. oci_* was gone completely, odbc_* was still there but not working, and the Apache ntlm_auth module for single sign-on was gone, so the website wasn't even loading in IE). I'm ruling out anything deliberate, because that's just incredibly unlikely. The only thing that really happened between Thursday and Friday is that on Thursday evening one of the network guys did a RAM upgrade on the server and restarted it. That's it, nothing else.

    Now I'm wondering if somehow those extensions and modules, which we installed months ago, were only held in some kind of local memory, and a restart wiped them out? But we installed them all as root, so I don't see why it should be any different from installing anything else. It makes little to no sense to me.

    To expand on one example of something that's gone very wrong, the PHP odbc_ extension: it's still on the server, and it doesn't return "undefined function" or anything; it just cannot connect to the data source any more. I've tested the connection through the command line and it works perfectly fine with that data source and those login details, but all of a sudden the same details in the PHP odbc_connect() function just can't connect:

        [S1000][unixODBC][FreeTDS][SQL Server]Unable to connect to data source.

    But unixODBC is set up fine. Like I say, I've tested it all through the terminal and it can connect; we've not changed anything, it's just suddenly not working through the PHP function. Does anyone have any ideas whatsoever as to what could be going on? This is on CentOS 5.x, by the way.

  • What is the --daemon option?

    - by Pascal Dimassimo
    I was installing Solr with Jetty using these instructions. Basically, the instructions have you download the Jetty startup script and copy it to /etc/init.d/jetty, but it was not working: each time I started Jetty I got a "FAILED" message and nothing to explain why it was happening.

    I decided to open up the /etc/init.d/jetty script to understand what was going on, and saw that it uses start-stop-daemon to launch Jetty. After some debugging, I discovered that removing the --daemon option at the end of the start-stop-daemon call fixed my problem. I did a bit of research and found that this guy had the same problem and resolved it the same way I did: by removing the --daemon option.

    What is weird is that the switch does not seem to be specific to start-stop-daemon, because it is not documented in the man page, and I've also seen it used with other commands. So what does that --daemon option do? And why did removing it resolve my problem? Note that I am working on Ubuntu 10.04.2 LTS.

  • Arduino IDE "Launch4j" error

    - by John
    I have a computer running Windows XP and I am trying to run the Arduino IDE 0022. I double-click arduino.exe, it waits about 30 seconds on the startup title screen, and then it gives me this error:

        Launch4j: An error occurred while starting the application

    My only choice is to click "OK"; the error goes away and the Arduino IDE closes. If I then try to delete the Arduino files (to try overwriting them with some different files), I get an error that doesn't allow me to do so:

        Cannot delete awt.dll: Access denied
        Make sure the disk is not full or write protected and that the file is not currently in use.

    The only way to delete the file is by restarting the computer, so something must still be running after that first error. I have noticed in Task Manager that some Java processes are still running: javaw.exe (3 processes). I think this is a problem with Java, but I checked and updated all of my Java software and it is all up to date. I have looked on other forums for this issue and none of them seemed to help. From the forums I have tried:

    - Different Arduino IDE versions
    - Updating Java
    - Opening arduino.exe as Administrator

    Nothing has worked. Does anyone have any suggestions?

  • Redhat Linux password fails on SSH

    - by Stephopolis
    I am trying to SSH into my Linux machine from my Mac. If I am physically at the machine I can log in with my password just fine, but over SSH it refuses. I am getting:

        Permission denied (publickey,keyboard-interactive)

    I thought it might be caused by some changes I recently made to system-auth, but I restored everything to what I believe was the original format:

        #%PAM-1.0
        # This file is auto-generated.
        # User changes will be destroyed the next time authconfig is run.
        auth        required      pam_env.so
        auth        sufficient    pam_fprintd.so
        auth        sufficient    pam_unix.so nullok try_first_pass
        auth        requisite     pam_succeed_if.so uid >= 500 quiet
        auth        required      pam_deny.so
        account     required      pam_unix.so
        account     sufficient    pam_localuser.so
        account     sufficient    pam_succeed_if.so uid < 500 quiet
        account     required      pam_permit.so
        password    requisite     pam_cracklib.so try_first_pass retry=3
        password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
        password    required      pam_deny.so
        session     optional      pam_keyinit.so revoke
        session     required      pam_limits.so
        session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
        session     required      pam_unix.so

    But I still could not SSH in. I tried removing my password altogether and that didn't seem to help either; it still asks, and even entering an empty string (nothing) fails me out. Any advice?

  • Isn't a hidden volume, as used when encrypting a drive with TrueCrypt, detectable?

    - by neurolysis
    I don't purport to be an expert on encryption (or even on TrueCrypt specifically), but I have used TrueCrypt for a number of years and have found it to be nothing short of invaluable for securing data. As relatively well-known free, open-source software, I would have thought TrueCrypt would not have fundamental flaws in the way it operates, but unless I'm reading it wrong, it has one in the area of hidden-volume encryption.

    There is some documentation regarding encryption with a hidden volume here. The statement that concerns me is this (emphasis mine):

        TrueCrypt first attempts to decrypt the standard volume header using the entered
        password. If it fails, it loads the area of the volume where a hidden volume header
        can be stored (i.e. bytes 65536–131071, which contain solely random data when there
        is no hidden volume within the volume) to RAM and attempts to decrypt it using the
        entered password. Note that hidden volume headers cannot be identified, as they
        appear to consist entirely of random data.

    Whilst the hidden headers supposedly "cannot be identified", is it not possible, on encountering a volume encrypted with TrueCrypt, to determine at which offset the header was successfully decrypted, and from that determine whether you have decrypted the header of a standard volume or of a hidden volume? That seems like a fundamental flaw in the header-decryption implementation, if I'm reading this right -- or am I reading it wrong?

  • Help recovering a partition

    - by goshopedero
    Okay, so I had one NTFS partition and I wanted to resize it, but while resizing it with Partition Magic some error occurred, and now I am not able to access the partition any more. I also have Slackware 13 and tried mounting the partition from there, but it didn't succeed.

    A friend of mine came to my house with a live CD OS called BackTrack 3, and when he booted from the CD, he was able to mount the damaged partition and read/write anywhere on it. I saw my files; they are all there, so nothing has been erased, the partition is just somehow damaged. The strange thing is that from BackTrack we weren't able to mount some of the working partitions of my computer, yet we could mount the damaged one.

    So I'm asking for some help here: my files are all there, and I saw them from BackTrack. What can I do to fix the partition so it is usable from Windows/Slackware again? Please tell me anything you've got, because I have some important data on it. Thank you.

  • D-Link wireless router losing outbound data

    - by gsteinert
    I have a Linux box running the Apache web server behind a D-Link wireless router (nothing fancy, just the standard kit that comes with Virgin Media broadband). My issue is that when pages are requested (from within the network or via the web), the back end of the page seems to get dropped. For example, I tried to display a text-only file, and all I could get was the first 40-70% of the file (it changed slightly with each refresh). The Apache access logs show that only part of the data was sent (around 6000 bytes instead of the 12000+ bytes of the file). Removing the router from the equation fixes the issue, and I can download files of any size with no problems.

    My theory is that the outbound packets are either being dropped or held up by the router's configuration. Is there anything I can do to alleviate the problem? (Perhaps a way of reconfiguring the router to pass packets harder/better/faster/stronger, or an option in Apache that provides a workaround.) As a last resort I will get a second NIC for my Linux box and turn it into a router, but that would mean the box would be on 24/7, which is not the most ideal of circumstances. Gary

  • What's Keeping My Computer Awake?

    - by phantomdata
    Hey guys, first the question: how do I figure out what is preventing my Windows 7 computer from going into sleep mode?

    Now some background. I've been struggling with this for a few days and am utterly perplexed. I set up sleep mode on my Windows 7 PC a few weeks ago, and all was well: the PC would sleep as expected, and I was snug in the knowledge that my computer was saving power and some wear and tear on the components (we'll leave the "is it better to sleep" debate for another thread/day, please don't start it).

    Well, I noticed the other night that my system has stopped ever going to sleep. I set the sleep timeout down to 1 minute and wandered well away from the PC (ensuring that no errant mouse or keyboard movements would occur), and the PC never went to sleep. I've also observed this over longer intervals, such as overnight.

    I have sleep mode enabled, and of course "Multimedia settings / When sharing media" is set to allow the computer to sleep. "powercfg -lastwake" shows nothing of interest, since the machine never goes to sleep and so can't wake up. "powercfg /requests" shows three entries, all "[DRIVER] ?". I assume two of these are my mouse and keyboard, as I've just used them to run the powercfg command, but I'm at a loss for the third. I've unhooked all USB peripherals except the keyboard and mouse. Wake-on-LAN is disabled in my BIOS. I know you can block all apps from waking or preventing sleep, but I want that ability to remain for apps that legitimately need to keep the system awake.

    So: does anyone know of a way to figure out what the third phantom "[DRIVER] ?" entry in powercfg /requests is?
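    Once the culprit behind that third entry is identified, powercfg can also ignore that one requester without disabling power requests globally; a hedged sketch from an elevated prompt (the driver name below is a placeholder for whatever powercfg /requests eventually reports):

        REM list any request overrides already configured
        powercfg /requestsoverride
        REM ignore SYSTEM requests from one named driver only (placeholder name)
        powercfg /requestsoverride DRIVER "Example Audio Device Driver" SYSTEM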
