Search Results

Search found 18648 results on 746 pages for 'left join'.


  • Is it possible to download extremely large files intelligently or in parts via SSH from Linux to Windows?

    - by Andrew
    I have a ~35 GB file on a remote Linux Ubuntu server. Locally, I am running Windows XP, so I am connecting to the remote Linux server using SSH (specifically, I am using a Windows program called SSH Secure Shell Client version 3.3.2). Although my broadband internet connection is quite good, my download of the large file often fails with a Connection Lost error message. I am not sure, but I think that it fails because perhaps my internet connection goes out for a second or two every several hours. Since the file is so large, downloading it may take 4.5 to 5 hours, and perhaps the internet connection goes out for a second or two during that long time. I think this because I have successfully downloaded files of this size using the same internet connection and the same SSH software on the same computer. In other words, sometimes I get lucky and the download finishes before the internet connection drops for a second. Is there any way that I can download the file in an intelligent way -- whereby the operating system or software "knows" where it left off and can resume from the last point if a break in the internet connection occurs? Perhaps it is possible to download the file in sections? Although I do not know if I can conveniently split my file into multiple files -- I think this would be very difficult, since the file is binary and is not human-readable. As it is now, if the entire ~35 GB file download doesn't finish before the break in the connection, then I have to start the download over and overwrite the ~5-20 GB chunk that was downloaded locally so far. Do you have any advice? Thanks.
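    One approach that matches what the poster is after: rsync run over SSH can resume a partial transfer. This is only a sketch - it assumes rsync is available on the Windows side (e.g. via Cygwin or a similar port) as well as on the server, and the host name and paths are placeholders:

      # Pull the file over SSH; --partial (part of -P) keeps whatever arrived
      # before a disconnect, and re-running the same command picks up from there.
      # --append-verify (rsync >= 3.0) resumes by appending, then verifies the
      # whole file with a checksum at the end.
      rsync -avP --append-verify \
          andrew@remote.example.com:/path/to/bigfile.bin \
          /cygdrive/c/downloads/

    Failing that, splitting the file on the server with something like "split -b 1G bigfile.bin part_" and fetching the pieces one at a time gives a similar effect with plain scp, at the cost of reassembling them afterwards.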

    Read the article

  • Port 5357 TCP on Windows 7 professional 64 bit?

    - by Registered
    Is there a reason this port is open? A quick Nmap scan and a Nessus scan both reveal it's open - why? Are there any ramifications if I close this port via the firewall rule set? Or does anyone here know more about this port than what Google turns up? What I've found so far:

    1) http://www.symantec.com/connect/blogs/who-left-tunnel-door-open-windows-firewall-vista-0 - I know the talk there is about Vista, but I am pretty sure it's the same port on 7 as well.

    2) Common notes on port 5357: it belongs to WSDAPI (Web Services for Devices) and is reported as an information-leak risk if it can be reached remotely.

    I am blocking it; if I have issues I will just re-enable it. Damn Windows. The rule in question is "Inbound rule for Network Discovery to allow WSDAPI Events via Function Discovery [TCP 5357]" - that one is now blocked, until I break something; we will see. Time to re-Nmap and re-Nessus. After closing port 5357, an Nmap scan shows 0 open ports and Win7 still works for now; one more scan with Nessus just to make sure all is well.
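    For what it's worth, a quick way to confirm from another machine that the rule actually took effect (the host address is a placeholder; nmap's service table usually identifies 5357 as wsdapi):

      # Before the firewall rule: expect  5357/tcp open  wsdapi
      # After the rule:           expect  filtered or closed
      nmap -p 5357 -sV 192.168.1.50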

    Read the article

  • Samba with Active Directory - shares are readonly, NT_STATUS_MEDIA_WRITE_PROTECTED

    - by froh42
    I've set up a Samba server that seems to work, but all shares are seemingly exported as read-only. The machine is called "lx". When I'm on lx I can run the following command:

      froh@lx:~$ smbclient //lx/export -UAdministrator
      Enter Administrator's password:
      Domain=[CUSTOMER] OS=[Unix] Server=[Samba 3.5.4]
      smb: \> mkdir wrzlbrmpf
      NT_STATUS_MEDIA_WRITE_PROTECTED making remote directory \wrzlbrmpf
      smb: \> ls
        .       D   0  Fri Dec  3 19:04:20 2010
        ..      D   0  Sun Nov 28 01:32:37 2010
        zork    D   0  Fri Dec  3 18:53:33 2010
        bar     D   0  Sun Nov 28 23:52:43 2010
        ork         1  Fri Dec  3 18:53:02 2010
        foo         1  Sun Nov 28 23:52:41 2010
        gaga    D   0  Fri Dec  3 19:04:20 2010

    How can I troubleshoot this? What I did: first I set up a fresh install of Ubuntu 10.10 x64. Second I got Kerberos working with the following krb5.conf file:

      [libdefaults]
          ticket_lifetime = 24000
          clock_skew = 300
          default_realm = CUSTOMER.LOCAL

      [realms]
          CUSTOMER.LOCAL = {
              kdc = SB4.customer.local:88
              admin_server = SB4.customer.local:464
              default_domain = CUSTOMER.LOCAL
          }

      [domain_realm]
          .customer.local = CUSTOMER.LOCAL
          customer.local = CUSTOMER.LOCAL

      #[login]
      #    krb4_convert = true
      #    krb4_get_tickets = false

    I also added winbind to group, passwd and shadow in nsswitch.conf. Seemingly Kerberos works:

      root@lx:~# net ads testjoin
      Join is OK
      root@lx:~# wbinfo -a 'Administrator%MYSECRETPASSWORD'
      plaintext password authentication succeeded
      challenge/response password authentication succeeded

    wbinfo -u and wbinfo -g also spit out a list of users and a list of groups respectively. I noted that domain accounts did NOT include a domain and they are in German (as on the SBS 2003 that is the domain server). So I get a "Domänenbenutzer" in wbinfo -u's output, not a "CUSTOMER+Domain User" or something similar. I'm not sure anymore what I did to the PAM configuration, but here is what I currently have:

      root@lx:/etc/pam.d# cat samba
      @include common-auth
      @include common-account
      @include common-session-noninteractive

      root@lx:/etc/pam.d# grep -ve '^#' common-auth
      auth    [success=3 default=ignore]  pam_krb5.so minimum_uid=1000
      auth    [success=2 default=ignore]  pam_unix.so nullok_secure try_first_pass
      auth    [success=1 default=ignore]  pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass
      auth    requisite                   pam_deny.so
      auth    required                    pam_permit.so

      root@lx:/etc/pam.d# grep -ve '^#' common-account
      account [success=2 new_authtok_reqd=done default=ignore]  pam_unix.so
      account [success=1 new_authtok_reqd=done default=ignore]  pam_winbind.so
      account requisite                   pam_deny.so
      account required                    pam_permit.so
      account required                    pam_krb5.so minimum_uid=1000

      root@lx:/etc/pam.d# grep -ve '^#' common-session-noninteractive
      session [default=1]                 pam_permit.so
      session requisite                   pam_deny.so
      session required                    pam_permit.so
      session optional                    pam_krb5.so minimum_uid=1000
      session required                    pam_unix.so
      session optional                    pam_winbind.so

    At some point I joined the Linux box into the AD domain. After (manually) creating a home directory on the Linux box I can log in using the Administrator user with the password taken from AD.

    Now I run Samba with the following setup:

      [global]
          netbios name = LX
          realm = CUSTOMER.LOCAL
          workgroup = CUSTOMER
          security = ADS
          encrypt passwords = yes
          password server = 192.168.20.244   #IP des Domain Controllers
          os level = 0
          socket options = TCP_NODELAY SO_RCVBUF=16384 SO_SNDBUF=16384
          idmap uid = 10000-20000
          idmap gid = 10000-20000
          winbind enum users = Yes
          winbind enum groups = Yes
          preferred master = no
          winbind separator = +
          dns proxy = no
          wins proxy = no
          # client NTLMv2 auth = Yes
          log level = 2
          logfile = /var/log/samba/log.smbd.%U
          template homedir = /home/%U
          template shell = /bin/bash

      [export]
          path = /mnt/sdc1/export
          read only = No
          public = Yes

    Currently I don't care whether export is exported to everyone or just one user; I want to see somebody WRITING to that directory before I start fiddling with the authentication settings (who may access it). As mentioned, accessing the share from smbclient results in this NT_STATUS_MEDIA_WRITE_PROTECTED. Accessing it from Windows shows ACLs that look correct (the user may write) - but it does not work, I can only read files, not write. The directory to be exported looks like this:

      root@lx:/etc/pam.d# ls -ld /mnt/
      drwxr-xr-x 5 root root 4096 2010-11-28 01:29 /mnt/
      root@lx:/etc/pam.d# ls -ld /mnt/sdc1/
      drwxr-xr-x 4 froh froh 4096 2010-11-28 01:32 /mnt/sdc1/
      root@lx:/etc/pam.d# ls -ld /mnt/sdc1/export/
      drwxrwxrwx+ 5 administrator domänen-admins 4096 2010-12-03 19:04 /mnt/sdc1/export/

      root@lx:/etc/pam.d# getfacl /mnt/
      getfacl: Entferne führende '/' von absoluten Pfadnamen
      # file: mnt/
      # owner: root
      # group: root
      user::rwx
      group::r-x
      other::r-x

      root@lx:/etc/pam.d# getfacl /mnt/sdc1/
      getfacl: Entferne führende '/' von absoluten Pfadnamen
      # file: mnt/sdc1/
      # owner: froh
      # group: froh
      user::rwx
      group::r-x
      other::r-x

      root@lx:/etc/pam.d# getfacl /mnt/sdc1/export/
      getfacl: Entferne führende '/' von absoluten Pfadnamen
      # file: mnt/sdc1/export/
      # owner: administrator
      # group: domänen-admins
      user::rwx
      group::rwx
      group:domänen-admins:rwx
      mask::rwx
      other::rwx
      default:user::rwx
      default:group::rwx
      default:group:domänen-admins:rwx
      default:mask::rwx
      default:other::rwx

    My, oh my, what am I overlooking? What am I too blind to see?
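    A few diagnostics that might narrow down where the write is being refused - the share, file and log names below are taken from the configuration and listings above, so adjust them if they differ:

      # What Samba actually uses for the share, after all includes and defaults
      testparm -s --section-name=export

      # The ACL Samba computes for an existing file, as the same user
      smbcacls //lx/export foo -UAdministrator

      # Watch the per-user log while reproducing the failing mkdir
      tail -f /var/log/samba/log.smbd.Administrator &
      smbclient //lx/export -UAdministrator -c 'mkdir wrzltest'

    If testparm reports the share as writeable and the ACL looks sane, the next thing worth comparing is the uid/gid that winbind maps the connecting user to (wbinfo -i, getent passwd) against the actual owner of /mnt/sdc1/export.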

    Read the article

  • How does Tunlr work?

    - by gravyface
    For those of you not in the US, Tunlr uses DNS witchcraft to allow you to access US-only (and UK-only stuff like BBC radio online) services and Websites like Hulu.com, etc. without using traditional methods like a VPN or Web proxy. From their FAQ: Tunlr does not provide a virtual private network (VPN). Tunlr is a DNS (domain name system) unblocking service. We’re using sophisticated technologies (a.k.a. the Tunlr Secret Sauce ©) to re-adress certain data envelopes, tricking the receiver into thinking the envelope originated from within the U.S. For these data envelopes, Tunlr is transparently creating a network tunnel from your location to our U.S.-based servers. Any data that’s not directly related to the video or music content providers which Tunlr supports is not only left untouched, it’s also not even routed through Tunlr. In order to use Tunlr, you will have to change the DNS address. See Get started for more information. I can't really wrap my head around how this works; I have always assumed that these services performed a geolocation lookup via your client IP. Just really curious as to how this works. EDIT 2 I believe they're only proxying the initial geo check and then modifying the data stream request to include your real IP address so that the streaming is direct, not proxied.
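    A quick way to see this kind of selective answering in action is to compare what an ordinary resolver and the unblocking resolver return for one of the supported hostnames (the second resolver IP below is just a placeholder for the service's DNS address):

      # Ordinary resolver: the provider's own CDN addresses
      dig +short www.hulu.com @8.8.8.8

      # Unblocking resolver: the handful of geo-checked names resolve to the
      # service's US-based relay instead; everything else is answered normally
      dig +short www.hulu.com @192.0.2.10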

    Read the article

  • Dell PowerEdge 860 won't boot but gets power

    - by fierflash
    I have a Dell PowerEdge 860 with Windows Server 2008 R2 running on it. Here is the scenario I'm currently in: I can boot it up and work as usual, but when I restart it, it just shuts down, doesn't reboot, and won't boot up again. Then it stays in this "mode": the On/Off button blinking green (standby mode?) and the system indicator LED blinking amber. If I press the On/Off button on the front panel it won't boot. You can hear the fan in the PSU running, but nothing else is trying to boot. If I unplug the power cable and then put it back in again it's the same: the extremely noisy fans won't start and it just sits there. If I let it "rest" for anywhere from 1 to 24 hours with the power cable unplugged and then plug it in again, it boots directly, without me having to push the On/Off button. What I have tried:

    - Running diagnostics (not the ones in the POST phase) - everything is fine
    - Booting without any cables except the power cable
    - Booting without memory installed
    - Booting without the RAID cable plugged in
    - Clearing the NVRAM
    - Reseating all the components and cables
    - Googling like a freak for any possible solution
    - Dell Support, but I don't have any warranty left, so they can't help me unless I pay some ridiculous amount of money, I suppose

    Does anyone have experience with the same issue? Do I need to replace something, or is there any other way to determine the problem? Any help will be greatly appreciated.

    Read the article

  • 2 Printers 1 Queue

    - by Shazburg
    My issue: when an order is processed, the same document needs to be printed on two printers.

    My proposed solution: create a single queue in CUPS with a backend script that spits the job out to the two real printer queues.

    My problem: documentation. Maybe I'm looking at every ring around the bullseye, but I can't find anything that lays out the rules for writing a CUPS backend script. In the end, I have several questions:

    - Is there already an option to do this in CUPS that I've missed?
    - The line I use to add my queue is "lpadmin -p MultiPass -E -v multipass -P Generic PostScript Printer", but the DeviceURI is rejected unless I specify a directory like "-v multipass:/tmp". Why is this?
    - For testing, my script does nothing but capture ARGV and write it out to a text file, one line per argument. Problem is, I'm getting nothing. The logs show the job as successful, but I'm pretty sure my meager attempt at a backend isn't even being run.

    I've tried to keep this question brief, so please ask for more info, as I'm sure I've left out the most important part in all this. Honestly, I'm just done chasing my own tail. Thank you for your time.
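    For reference, here is a rough sketch of the kind of backend this would take. The queue names, the URI scheme and the temp path are all made up for the example; the argument convention (job-id, user, title, copies, options, and an optional filename, with the document arriving on stdin otherwise) is the part CUPS actually defines:

      #!/bin/sh
      # /usr/lib/cups/backend/multipass -- forward one job to two real queues.

      # With no arguments, cupsd is asking the backend to advertise itself.
      if [ $# -eq 0 ]; then
          echo 'direct multipass:/ "Unknown" "Forward job to two queues"'
          exit 0
      fi

      # $1 job-id  $2 user  $3 title  $4 copies  $5 options  [$6 filename]
      TITLE=$3
      COPIES=$4

      # Spool the document to a temp file (stdin if no filename was passed).
      SPOOL=$(mktemp /tmp/multipass.XXXXXX) || exit 1
      if [ $# -ge 6 ]; then
          cat "$6" > "$SPOOL"
      else
          cat > "$SPOOL"
      fi

      # Hand the job to the two real queues (queue names are placeholders).
      lp -d office_front -n "$COPIES" -t "$TITLE" "$SPOOL"
      lp -d office_back  -n "$COPIES" -t "$TITLE" "$SPOOL"
      STATUS=$?

      rm -f "$SPOOL"
      exit $STATUS   # 0 tells CUPS the job completed

    Two things worth checking when a backend never seems to run: cupsd is fussy about backend ownership and permissions (root-owned and not group/world writable, or it may refuse to run it with privileges), and anything the script prints to stderr ends up in /var/log/cups/error_log, which is the easiest way to confirm it was invoked at all. That would also be consistent with the DeviceURI complaint: a URI generally needs something after the scheme, so multipass:/ or multipass:/tmp is accepted where a bare multipass is not.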

    Read the article

  • How to encourage Windows administrators to pick up scripting

    - by icelava
    When I worked as an administrator in my first job, I was frustrated that our administration processes with Windows servers were a series of point-and-clicks; we could never match the efficiency of the Unix servers, which had a set of shell scripts to automate a lot of the work. I soon read about WSH and ADSI and wasted no time learning just how much automation I was able to achieve with scripting. There was a huge problem though - almost none of my Windows colleagues were really interested in learning scripting. They seemed happy with the manual mouse-clicking chores and were never excited at the prospect of using scripts to do the work on their behalf. I struggled to convince them to pick up scripting skills despite the evident increases in efficiency. I left that job in pursuit of a full-time software development career thereafter. Almost a decade on, having worked in various environments and with different customers, I still encounter Windows administrators with this general "mood" of avoiding scripting as much as possible, despite the increasing accessibility Windows server technologies are opening up for scripting and automation. I am almost certain the majority of administrators are administrators precisely because they absolutely hate performing any kind of programming duties. What are some means to encourage and motivate administrators that scripting can really help them in the long run?

    Read the article

  • Upgrade SQLServer 2008 hardware

    - by John
    Forgive me if I'm not able to be totally clear here; it's not intentional - I'm a senior-level developer in a very small company, having to act like a manager at the moment. Anyway, the story is that we have two older Dell servers with SQL Server 2008 Standard in a "cluster". I put that in quotes because I'm still not 100% clear what that means. We have two brand-new blade servers and want to move the existing databases to the new hardware. OK, so here is the gotcha: we need to do this with little or no downtime. I'm being told that we can evict the passive node and then pull in one of the new servers. But I'm also being told that this is a dangerous step, because something could go wrong that would cause the cluster to fail, and then we would be left with nothing because the active server would not be able to come back up. Does anyone have any thoughts on how to handle this? I'm being told that the only way to ensure success is to have at least a day of downtime where we bring up a new cluster on the new hardware and then migrate the databases one by one.

    Read the article

  • How does Windows 7 taskbar "color hot-tracking" feature calculate the colour to use?

    - by theyetiman
    This has intrigued me for quite some time. Does anyone know the algorithm Windows 7 Aero uses to determine the colour to use as the hot-tracking hover highlight on taskbar buttons for currently-running apps? It is definitely based on the icon of the app, but I can't see a specific pattern of where it's getting the colour value from. It doesn't seem to be any of the following:

    - An average colour value from the entire icon, otherwise you would get brown all the time with multi-coloured icons like Chrome.
    - The colour used the most in the image, otherwise you'd get yellow for the SQL Server Management Studio icon (6th from left). Also, the Chrome icon used red, green and yellow in equal measure.
    - A colour located at certain pixel coordinates within the icon, because Chrome is red - indicating the top of the icon - and Notepad++ (2nd from right) is green - indicating the bottom of the icon.

    I asked this question on ux.stackoverflow.com and it got closed as off-topic, but someone answered with the following:

      As described by Raymond Chen in this MSDN blog article: Some people ask how it's done. It's really nothing special. The code just looks for the predominant color in the icon. (And, since visual designers are sticklers for this sort of thing, black, white, and shades of gray are not considered "colors" for the purpose of this calculation.)

    However I wasn't really satisfied with that answer because it doesn't explain how the "predominant" colour is calculated. Surely on the SQL Management Studio icon, the predominant colour, to my eyes at least, is yellow. Yet the highlight is green. I want to know, specifically, what the algorithm is.
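    This isn't the actual Windows code, but if you want to experiment with the "predominant colour, ignoring greys" idea on an extracted icon, ImageMagick makes it easy to get a frequency-sorted colour histogram (chrome.png is a placeholder for an icon you've exported yourself):

      # Quantise the icon to a small palette and list colours by pixel count,
      # most frequent first.
      convert chrome.png -alpha off -colors 16 -depth 8 -format %c histogram:info:- | sort -rn

    Per the Chen quote, you would then skip any row whose R, G and B components are (nearly) equal - those are the blacks, whites and greys the designers excluded - and take the first remaining entry as the "predominant" colour. Whatever weighting the real implementation adds on top of that is exactly the part that isn't documented.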

    Read the article

  • Inkscape: Copying an object, retaining transparency

    - by dpk
    I'm looking for a way to copy objects from one window to another without losing the surrounding transparency. I have two Inkscape windows. The setup is pretty simple. In the first window I draw a filled circle and a filled rectangle in it, with the circle set on top of the rectangle to show that the area around the circle is transparent (that is, you can see the rectangle "under" the circle, see screenshot 1, left). In the second window I just drew a filled rectangle (screenshot 1, right). When I copy the circle from window 1 to window 2 the transparency around the circle is lost (screenshot 2). I've verified that the backgrounds of the documents are 0% alpha/white. This is a rather contrived example but is readily reproducible. The real graphics I am working with have a bunch of objects all in a single group, but I have the same results. I feel like I'm missing something. The circle no longer behaves like a circle at its destination. Instead, it acts kind of like a bitmap. I'm definitely not using the bitmap copy feature.

    Read the article

  • Apache2 doesn't serve PHP-scripts correctly

    - by cmbrnt
    I've run into a problem with my Apache 2.2.16 configuration, running on Debian Squeeze. The problem is that it has stopped serving PHP5 scripts completely. When I try to access the sites with Google Chrome, it instead downloads a file called "download", which contains the contents of the script. This is of course not a good thing. It does serve plain HTML files perfectly... I've been at this for quite a while now, and after all the googling and troubleshooting, I thought it would be a good time to ask you guys. Here's what I've got:

    - The php5 and libapache2-mod-php5 packages are installed.
    - /etc/apache2/mods-available contains both php5.load and php5.conf, and these are symlinked from the mods-enabled directory.
    - The /etc/php5/ directory is left untouched since the installation.

    Here's the contents of /etc/apache2/mods-available/php5.load:

      LoadModule php5_module /usr/lib/apache2/modules/libphp5.so

    And /etc/apache2/mods-available/php5.conf:

      <IfModule mod_php5.c>
          <FilesMatch "\.ph(p3?|tml)$">
              SetHandler application/x-httpd-php
          </FilesMatch>
          <FilesMatch "\.phps$">
              SetHandler application/x-httpd-php-source
          </FilesMatch>
          <IfModule mod_userdir.c>
              <Directory /home/*/public_html>
                  php_admin_value engine Off
              </Directory>
          </IfModule>
      </IfModule>

    What am I missing? This is a server with modified virtual hosts and the like, so I might have changed some settings which cause this problem, but simply purging and reinstalling is not an option so far, since the configuration is quite extensive. Any help would be great. Thanks.
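    A few checks that might narrow this down. Note that the php5.conf quoted above deliberately switches the engine off under /home/*/public_html, so if any of the modified vhosts serve out of a public_html path, that block alone will produce exactly this download-the-source behaviour:

      # Is mod_php actually loaded in the running configuration?
      apache2ctl -t -D DUMP_MODULES | grep php5    # expect: php5_module (shared)

      # Re-enable the module and restart if it is missing
      a2enmod php5
      /etc/init.d/apache2 restart

      # Look for anything in the active vhosts that disables the engine
      # or strips the handler for .php files
      grep -Ri  'engine[[:space:]]*off' /etc/apache2/sites-enabled/
      grep -RiE 'RemoveHandler|RemoveType|AddType.*php' /etc/apache2/sites-enabled/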

    Read the article

  • How can laptop keyboard keys be removed and replaced?

    - by Lord Torgamus
    I'm trying to fix a laptop keyboard that has issues with keys on its left side. Just by feel, it's clear that something sticky got under there. There could be something crunchy too, but that might just be the sound of the key's spring releasing itself from the sticky stuff. I don't know the cause because it's not my computer and the owner isn't sure, but I'm guessing a soda spill for now. The computer is an HP dv2500. I've removed the keyboard and blown under it, but that hasn't helped. I didn't use compressed air because I just don't have any available, but I suspect it wouldn't help with something sticky anyway. So, I'd like to pop the keys off and clean with damp cotton swabs or similar. Is there a proper way to remove the keys? I've found some instructions via Google for non-laptop keyboards, but they don't seem like they'd work for me. Alternate solutions to the problem are also welcome, but I've been curious about how to remove the keys for some time for other reasons.

    Read the article

  • Alfa AWUSO36H 1W dysfunctional driver

    - by BrainStorm
    I recently purchased an Alfa AWUSO36H 1W wireless USB adapter for my notebook, in order to improve signal strength and quality. I'm currently using Linux Mint 11, and it uses the RTL8187 driver for this adapter. I'm also using a 4 dBi antenna, though I have others. The problem is that this adapter does exactly the opposite of what it should: my internal Broadcom BCM4313 adapter actually works way better than the Alfa. Browsing is slow, some network applications don't even work, and pings against Google.com run smoothly on the internal adapter, while on the Alfa I get about 25% packet loss or more! I'm less than 50 feet from my AP; the internal adapter gets 44/70 link quality, and the Alfa gets around 60/70 (iwconfig output). Also, the system always sets the Alfa's power to 20 dBm (100 mW), so I have to do sudo iw set reg B0 to make it 30 dBm (1000 mW), but that makes no significant change. I've installed the compat-wireless drivers, with no change either. And worst of all, in Windows 7 it works much more smoothly for browsing, though I couldn't test it properly there. I hope it's a driver problem; even if it's a pain for a beginner to find and compile Linux drivers, I prefer that to a hardware problem where I would need to buy another adapter, since I have no money left (except for the cantenna pieces).
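    A few things that might be worth checking before blaming the hardware - the interface name wlan1 is a guess, so substitute whatever iwconfig reports for the Alfa:

      # Which driver bound the stick, and what did the kernel say about it?
      lsusb | grep -i realtek
      dmesg | grep -iE 'rtl8187|ieee80211'

      # Current regulatory domain and the tx power the card actually ended up with
      iw reg get
      iwconfig wlan1 | grep -i tx-power

      # RTL8187 links are often more sensitive to rate selection and power
      # management than to raw tx power; pinning these down sometimes helps
      sudo iwconfig wlan1 rate 11M
      sudo iwconfig wlan1 power off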

    Read the article

  • Should I keep my ex-employer's data?

    - by Jurily
    Following my brief reign as System Monkey, I am now faced with a dilemma: I did successfully create a backup and a test VM, both on my laptop, as no computer at work had enough free disk space. I didn't delete the backup yet, as it's still the only one of its kind in the company's history. The original is running on a hard drive in continuous use since 2006. There is now only one person left at the company, who knows what a backup is, and they're unlikely to hire someone else, for reasons very closely related to my departure. Last time I tried to talk to them about the importance of backups, they thought I was threatening them. Should I keep it?

    Pros:

    - I get to save people from their own stupidity (the unofficial sysadmin motto, as far as I know)
    - I get to say "I told you so" when they come begging for help, and feel good about it
    - I get to say nice things about myself on my next job interview
    - Nice clean conscience
    - Bonus rep with the appropriate deities

    Cons:

    - Legal problems: even if I do help them out with it, they might just sue me for keeping it anyway, although given the circumstances I think I have a good case
    - Legal problems: given the nature of the job and their security, if something leaks, I'm a likely target for retaliation
    - Legal problems: whatever else I didn't think about
    - I need more space for porn.
    - Legal problems.

    What would you do?

    Read the article

  • VNC Address Book - Window Invisible? Working as intended?

    - by user1445967
    I have VNC Server and VNC Viewer on the same PC with Windows 7 Ultimate x64. Is VNC Address Book supposed to have a main application window? I can't tell if this is bugged or working as intended. When I start VNC Address book, it creates an item in the task list and also an icon in the notification area. If I click the task list item once, nothing happens. If I click again, it disappears from the task list but remains in the notification area**. If I right-click the icon on the notification area, there is an option 'open address book', which if chosen, creates the item in the task list again, with the exact same behavior. If I left or right click on the address book icon, I have ways to connect to servers in the address book. However I have no way to add/remove/edit servers in the address book. **Note that this behavior is identical to that of an application with a main window where there is an option to 'minimize to system tray / notification area'. Only here, no main window is visible, ever. If fixing this problem is beyond the scope of this site, that's fine, but at this point I have no real way to tell if the program is malfunctioning on my pc or if it is far more minimalist than I expected. There appears to be no forum on realvnc.com where I can ask about this, perhaps this is to prevent me from getting any help unless I pay to register?

    Read the article

  • tmpreaper, --protect and a non-root user

    - by nsg
    Hi, I'm a little confused. I have a download directory from which I want to remove all files older than 30 days with tmpreaper. Just one problem: the directory in question is a separate partition with a lost+found directory, which of course I need to keep, so I added --protect 'lost+found'. The problem is that tmpreaper outputs:

      error: chdir() to directory `lost+found' (inode 11) failed: Permission denied
      (PID 30604) Back from recursing down `lost+found'.
      Entry matching `--protect' pattern skipped. `lost+found'

    I have tried other patterns like lost* and so on... I'm running tmpreaper as a non-root user because there is no reason for superuser privileges: I own all the files (except lost+found). Am I forced to run tmpreaper as root? Or are my shell skills not as good as I thought? I guess the problem is this:

      tmpreaper will chdir(2) into each of the directories you've specified for
      cleanup, and check for files matching the <shell_pattern> there. It then
      builds a list of them, and uses that to protect them from removal.

    Any thoughts and/or advice?

    Edit: The command I'm trying to run is something like:

      $ /usr/sbin/tmpreaper -t --protect 'lost+found' 30d /mydir 1> /dev/null
      error: chdir() to directory `lost+found' (inode 11) failed: Permission denied

    Edit 2: I read the source code for tmpreaper-1.6.13 and found this:

      if (safe_chdir (dirname))
          exit(1);

    and

      if (chdir (dirname)) {
          message (LOG_ERROR,
                   "chdir() to directory `%s' (inode %lu) failed: %s\n",
                   dirname, (u_long) sb1.st_ino, strerror (errno));
          return 1;
      }

    So it seems tmpreaper needs to be able to chdir into all directories, ignored or not. I see a few options left:

    - Run tmpreaper as root
    - Move the download directory
    - Find an alternative tool (tmpwatch?)

    I will give it some more research before I make a choice.
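    If staying non-root is the main goal, a plain find run from cron can do the same cleanup without ever descending into lost+found. A sketch - keep -print until the listing looks right, then switch it to -delete:

      # Prune lost+found entirely, then select regular files older than 30 days.
      find /mydir -mindepth 1 \
           \( -name lost+found -prune \) -o \
           \( -type f -mtime +30 -print \)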

    Read the article

  • How do you notice that the batteries of your wireless mouse are dying out?

    - by hkBattousai
    I have a Logitech M705 wireless mouse. This is my first time using a wireless mouse, so I don't have much experience with the hardware features and behavior. It is rated to run for 3 years on the same batteries. I think this "3 year" rating is calculated for very low usage and activity, like 2 hours a day. I'm using it for about 12 hours a day, so I expect it to run out of batteries in a much shorter time in my case. I have been using it for about half a year. Recently (for the last two weeks), it has started to show some peculiar behavior when clicking and dragging objects:

    - When I click something, it sometimes double-clicks it.
    - When I drag something from one place to another (or select some text), it sometimes drops the object halfway (when selecting text, the text which had been selected up to that point becomes unselected and it starts selecting the rest of the text from that moment), but it stays in the "left-button-pressed" state. It is as if the pressed button switches to the "unpressed" state for a moment, then returns to the "pressed" state.

    When one of these faults occurs, it occurs several times in a row. There is no problem with pointer movement, scrolling or right-clicking. Since the batteries last a very long time on this device, I don't expect it to stop working in an instant; I expect it to show these kinds of symptoms over a period of time. My question is: is this how batteries run out for a wireless mouse? Or is this another kind of hardware/software problem?

    Read the article

  • Arch Linux: How to handle patches which only you will use?

    - by user12932
    I'm using freerdp together with xmonad and it has been giving me a lot of trouble. The super key (or "Windows key") is my mod key in xmonad, and it has been interfering with my freerdp usage rather annoyingly. Whenever I switched workspaces (or did anything else in xmonad involving the super key), Windows (controlled by the freerdp instance in focus) registered a keypress as well. This event, combined with the loss of focus, got the super key stuck in Windows indefinitely: pressing the keys d and r would first show my desktop, then open the Run dialog (as if I were pressing the Windows key constantly). I've tried several versions of freerdp, but all exhibited this annoying behavior. So I resorted to patching freerdp myself to just ignore the left super key on my keyboard. I love free software for a lot of reasons (especially the ability to alter things like this myself); however, I still find it annoying to patch and rebuild freerdp on every version (and dependency) change. How do you deal with situations like this? Is there even a "right way" to resolve this issue?
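    The usual Arch answer is to carry the patch in your own PKGBUILD and rebuild with makepkg, so the change survives as a proper package instead of a one-off manual build. A sketch of the relevant fragment - the patch file name is made up, and older PKGBUILDs apply patches at the top of build() rather than in prepare():

      # Grab the stock recipe (e.g. `abs extra/freerdp`), drop your patch next
      # to the PKGBUILD, then append it to the existing arrays:
      source+=('ignore-left-super.patch')
      md5sums+=('SKIP')            # or regenerate all sums with `makepkg -g`

      prepare() {
        cd "$srcdir/$pkgname-$pkgver"
        # local change: keep the left super key out of the RDP session
        patch -p1 < "$srcdir/ignore-left-super.patch"
      }

      # Build and install the patched package with `makepkg -si`, then either
      # add 'IgnorePkg = freerdp' to the [options] section of /etc/pacman.conf
      # or simply re-run makepkg after each upstream bump.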

    Read the article

  • need some help figuring out clamav & monit monitoring error...unixsocket...

    - by Ronedog
    I need a bit of help figuring something out. First off, I'm not very well versed with FreeBSD servers, etc., but with some direction hopefully I can get this fixed. I'm using FreeBSD and installed Monit so I could monitor some of the processes that run tomcat, apache, mysql, sendmail and clamav. So far, I'm only successful in getting apache & mysql monitored. I'm getting this error for clamav in /var/log/monit.log:

      'clamavd' failed, cannot open a connection to UNIX[/usr/local/etc/rc.d/clamav-clamd]

    My config for clamav in /etc/monitrc is:

      ####################################################################
      # CLAMAV Virus Checks
      ####################################################################
      check process clamavd with pidfile /var/run/clamav/clamd.pid
        group virus
        start program = "/usr/local/etc/rc.d/clamav-clamd start"
        stop program  = "/usr/local/etc/rc.d/clamav-clamd stop"
        if failed unixsocket /usr/local/etc/rc.d/clamav-clamd then restart
        if 5 restarts within 5 cycles then timeout

    Honestly, I really don't know much about what's going on here. My host, who helped me get the box set up, basically installed clamav but doesn't offer this kind of detail in supporting me, so I'm left to figure this stuff out on my own; I own the box, but they provide the ISP service. Is there anyone who can help me troubleshoot this? Thanks for your help in advance.
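    One thing that stands out: the path given to unixsocket above is the rc script, not a socket. A couple of commands might confirm where clamd's real socket lives on this box (the paths are the usual FreeBSD port defaults, so treat them as guesses):

      # clamd.conf says where the socket is created
      grep -iE '^(LocalSocket|TCPSocket|TCPAddr)' /usr/local/etc/clamd.conf

      # Is anything actually listening there while clamd runs?
      ls -l /var/run/clamav/
      sockstat -u | grep -i clam

    Whatever path LocalSocket points at is what the monit unixsocket test would need to reference.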

    Read the article

  • Two hosts on same subnet can't see each other

    - by Joey Hewitt
    I've got two routers with two separate public IP addresses on the same subnet, but I can't get them to talk to each other. Both are connected to the internet (ISP-provided gateway) via Ethernet ports provided by the landlord, but I don't have access to or knowledge of how those are physically connected or the protocols used to get back to the ISP. I can ping either from the outside, but they can't ping each other. Traceroutes in and out look the same, and they receive the same gateway over DHCP. I can ping other IPs on the subnet, so I assume this is not any sort of intentional isolation for security/privacy. Since I'm in a setup where my landlord provides internet and we don't have contact with the ISP, I can't really ask the ISP for help (doubt the landlord would know much either.) The situation is similar to the diagram at this question, but instead of the two servers, there's another router coming off the (presumed) switch, and I don't have access to the switch. I've tried giving them static routes to each other with the ISP internet gateway as the gateway, but that's not working. One is a Linksys WRT54GL running DD-WRT, the other is a Netgear WGR614v7, although I could get something more capable if necessary. I'd like to keep them each connected directly to the ISP on their WAN ports, but I can have an ethernet cable between them if necessary - I'm wondering if there's a way without that, and if there isn't, I'd appreciate advice on how to get that working. Sorry this is so nitpicky; there are reasons for all the constraints, but they don't apply to the real question, so I left them out. ;) Thank you!
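    From the DD-WRT box's shell (telnet/SSH), a couple of checks can help distinguish "the landlord's switch isolates the ports" from "the other router is just dropping unsolicited WAN traffic". The peer address below is a placeholder, the interface name varies (vlan2 is common for the WAN on a WRT54GL), and the applets depend on the busybox build:

      # Does the other router ever answer ARP? If not, the ports are likely
      # isolated at layer 2 and no static route will help.
      arping -I vlan2 -c 3 203.0.113.7

      # One hop means genuinely on-link; a detour via the ISP gateway means routed.
      traceroute -n 203.0.113.7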

    Read the article

  • Office 365 domain federation conversion failed

    - by Matt Bear
    We're doing things backwards: we have an established o365 domain with 400+ users, and are just now deploying local AD and ADFS for SSO. Last night, after configuring my servers, I ran the PowerShell command convert-MSOLdomaintofederated to convert the xxx.com vanity domain to federated. It errored out with an unspecified error (Microsoft ADFS support said the error has to do with the default password settings being changed). When I run convert-MSOLdomaintostandard, it comes back saying the domain is already standard. Also, the o365 portal shows the domain as standard; however, it is trying to process login attempts as if it were a federated domain. I've spent 5 hours total on the phone with Microsoft, and it has been escalated to their engineering department for resolution, sometime within the next few days... I need it yesterday. From what we can gather, the conversion process started, errored out, changed some of the internal configuration to federated, but left the description as standard (if that makes sense). So it's in a weird limbo where it's in both modes but neither at the same time. Currently, the only way to fix it is to remove the vanity domain and re-add it. I need a way to dissociate the user accounts from the xxx.com domain to allow its removal. Removal of the users themselves is not an option.

    Read the article

  • Looking for suggestions for hosting Windows 2000 Server in the cloud / VPS / etc?

    - by JohnyD
    I have a Windows 2000 Server, currently virtualized in Hyper-V, that I would like to get running off-site as a backup (cloud, VPS, etc). You can't virtualize in EC2 and I'm fairly certain there are no Server 2000 AMI's floating about (correct me if I'm wrong!). If anyone has a recommendation on how I can get a virtualized Windows 2000 Server running in a secure, remote environment I would be grateful. As far as locations go I'd be interested in both North America as well as Australia and Europe. In a nutshell, we're ploughing our way out of a legacy codebase and this server is the last that remains of the legacy apps. However, it is still very much used by our clients. Everything is backed up each night (data, images, etc) to tape which is then taken offsite. However, in the event of a fire I would love to have a backup legacy server to point DNS records to. So while I am rebuilding from the ashes our services would already be available. It would save a lot of time and make my managers all the more happy (and that's what it's all about, riighhtt? :D) Thank you all for your suggestions. Please let me know if I've left out any important information. Additional info: - the legacy codebase does not function properly in Server 2003

    Read the article

  • How to get data from a borked Windows Home Server

    - by harhoo
    Yesterday we had a power surge, followed by a power outage. This left my WHS borked: powering on just gives a flashing blue light (the LED on the power supply also flashes green) - no fan or boot activity, nothing. I urgently needed some files off there in the short term (and the 500 GB of photos, music, personal video, etc. in the long term), so I took the hard drive out and put it in my computer. The files and folders showed up, but I couldn't access them - clicking on an image gave an invalid image error in Picasa, I couldn't play MP3s, etc. I changed the ownership and permissions of the files; still nothing. I booted in with a LiveCD and got the same result: files appear, but won't open. Is there anything else I can do? I'm now wondering if it was just the power cable that's broken, but if so, why can't I access my files from the hard drive? If it is the power cable, and I replace that and put the hard drive back, will I have done any harm messing around with ownership and file permissions?

    Read the article

  • Time Machine is getting stuck at "Preparing to Back Up" and my Trash isn't emptying

    - by zarose
    I have encountered two separate problems, but I am putting them in the same question in case they are related.

    First, my Trash would not empty. It seems to be getting stuck on certain files, because I will reset my Macbook and some of the files will be deleted, and then if I remove a file or two at random, more can be deleted. Some of these files had strange characters in their names. I tried changing the names to single characters, but this did not help.

    Next, I attempted to backup my Macbook using Time Machine. I plugged in the HDD I've been using for this, but every time I try to start the backup, Time Machine gets stuck at "Preparing to Back Up". I definitely need to know how to fix the Time Machine problem, but I am curious how to solve the trash problem as well, and whether or not these problems are related.

    EDIT: Console.app logged the following this morning before I left on a trip. I did not bring the HDD with me.

      6/5/12 7:41:28.312 AM com.apple.backupd: Starting standard backup
      6/5/12 7:41:46.877 AM com.apple.backupd: Error -35 while resolving alias to backup target
      6/5/12 7:41:58.368 AM com.apple.backupd: Backup failed with error: 19
      6/5/12 7:59:08.999 AM com.apple.backupd: Starting standard backup
      6/5/12 7:59:10.187 AM com.apple.backupd: Backing up to: /Volumes/Seagate 3TB Mac/Backups.backupdb
      6/5/12 7:59:13.308 AM com.apple.backupd: Event store UUIDs don't match for volume: Macintosh HD
      6/5/12 7:59:13.331 AM com.apple.backupd: Event store UUIDs don't match for volume: Blank
      6/5/12 7:59:13.683 AM com.apple.backupd: Deep event scan at path:/ reason:must scan subdirs|new event db|
      6/5/12 8:23:31.807 AM com.apple.backupd: Backup canceled.
      6/5/12 8:23:33.373 AM com.apple.backupd: Stopping backup to allow backup destination disk to be unmounted or ejected.
      6/5/12 9:51:21.572 PM com.apple.backupd: Starting standard backup
      6/5/12 9:51:22.515 PM com.apple.backupd: Error -35 while resolving alias to backup target
      6/5/12 9:51:32.741 PM com.apple.backupd: Backup failed with error: 19

    Read the article
