Search Results

Search found 14443 results on 578 pages for 'desktop fun'.


  • Technology mash: is this possible?

    - by Jon Story
    I'm in the process of setting up my own DNS+hosting on a couple of VPSes and my home machines, mostly for academic/learning purposes, but also for convenient access to my files, hosting my personal websites, private git repositories, etc. I've got a main web server with DNS, and a slave DNS server. I've also got a couple of machines at home doing file hosting, video streaming and all that fun stuff. I intend to use my VPSes to provide myself with a dynamic DNS system, so that I can point mydomain.com at my DNS servers, with home.mydomain.com going into my home network via a Raspberry Pi. However, I don't have access to the network infrastructure at home (rented accommodation with managed internet), so I can't forward ports on the router to my own machines. As such, I'm wondering if it's possible to route all the traffic via an SSH/HTTP tunnel through one of the VPSes. My plan is to have the Raspberry Pi provide a VPN into my home network. The Raspberry Pi uses SSH to connect to the VPS, and the VPS forwards any traffic for home.mydomain.com through the tunnel to the Raspberry Pi. Is this even possible, and how do I go about it? I don't mind getting my hands dirty with coding and low-level tools; I'm just not sure where to start or what the best way to go about it is.
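
    A minimal sketch of the reverse tunnel being described, assuming the VPS answers at vps.mydomain.com and the Pi serves HTTP on port 80 (all names and ports here are placeholders):

        # Run on the Raspberry Pi: expose the Pi's port 80 as port 8080 on the VPS
        ssh -N -R 8080:localhost:80 tunnel@vps.mydomain.com

        # On the VPS, either proxy home.mydomain.com to localhost:8080 with the
        # web server already running there, or set "GatewayPorts yes" in
        # /etc/ssh/sshd_config to expose the forwarded port directly.

    Wrapping the ssh command in autossh (or a supervisor that restarts it) keeps the tunnel up across connection drops; that part is a matter of taste.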

    Read the article

  • Having trouble keeping a 1 GB RAM CentOS server running

    - by Josh
    This is my first time configuring a VPS server and I'm having a few issues. We're running WordPress on a 1 GB CentOS server configured per online research. No custom queries or anything crazy, but we're closing in on 8K posts. At arbitrary intervals, the server just goes down. From the client side, it just says "Loading..." and will spin more or less indefinitely. On the server side, the shell locks up completely. We have to do a hard reboot from the control panel and then everything is fine. Watching top, I see it hovering between 35-55% memory usage generally, with occasional spikes up to around 80%. When I saw it go down, there were about 30-40 Apache processes showing, which pushed memory over the edge. error_log tells me that MaxClients was reached right before each reboot instance. I've tried tinkering with that, but to no avail. I think we'll probably need to bump the server up to the next RAM level, but with ~120K pageviews per month, that seems like overkill, since the site ran fairly well on a shared server before. Any ideas? httpd.conf and my.cnf values to add? I'll update this with the current ones if that helps. Thanks in advance! This has been a fun and important learning experience but, overall, quite frustrating! Edit: quick top snapshot:

        top - 15:18:15 up 2 days, 13:04,  1 user,  load average: 0.56, 0.44, 0.38
        Tasks:  85 total,   2 running,  83 sleeping,   0 stopped,   0 zombie
        Cpu(s):  6.7%us,  3.5%sy,  0.0%ni, 89.6%id,  0.0%wa,  0.0%hi,  0.1%si,  0.0%st
        Mem:   2051088k total,   736708k used,  1314380k free,   199576k buffers
        Swap:  4194300k total,        0k used,  4194300k free,   287688k cached
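
    When MaxClients keeps tipping a small box over, the usual move is to cap Apache's prefork pool so that the worst case (MaxClients times per-process memory) still fits in RAM. A hedged starting point for a small WordPress box; these numbers are guesses to tune against the actual per-process footprint, not prescriptions:

        <IfModule prefork.c>
            StartServers          4
            MinSpareServers       4
            MaxSpareServers       8
            MaxClients           15
            MaxRequestsPerChild 500
        </IfModule>

    At a rough 50 MB per mod_php process, 15 clients is about 750 MB, leaving headroom for MySQL; check the real per-process figure with top or ps and adjust.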

    Read the article

  • Cheap Solution for Routing a Toll Free Number to a Standard POTS Number

    - by VxJasonxV
    I do some technical work for an internet radio show/podcast, and need to fix something that has been broken for a while. The hosts have a Skype-In number to take listener calls, and for convenience's sake, I bought and paid for a toll-free number for a period of time. I used to use Asterlink for routing calls, but they folded and sent my number to OneBox, which is ridiculously expensive by comparison. I'm looking for a cheap solution for this one simple task: forward toll-free calls to a Skype-In number. The definition of cheap is as cheap or cheaper than Asterlink was. I paid something like $2 a month plus the termination/call rate, which was a fraction of a cent for termination, and only whole cents after some serious time on the call. A $20 preload lasted me months at a time. I don't want to be upsold to, I want a simple web-based management screen (CDR/stats are fun!), and obviously, it needs to be reliable. What vendors out there are you a fan of that solve this need?

    Read the article

  • Netgear CG3000D new Modem/Router - Random High Ping

    - by justin.chmura
    Cox recently came out and looked at my internet connection and decided that the modem I had was causing high latency. The speed was fine, but the ping would spike to around 100 and over when gaming or putting any load beyond browsing on the line. After they replaced it, I seem to get better latency overall, but when it spikes, I get upwards of 300 ping with something like 500 jitter. I figured I would hit the Server Fault universe before sending another email to Cox. I opted not to do the Cox setup, as it was an extra $20 and I thought it would just set up the wireless (which I can handle). Is there a setting or something that I missed that needs to be set up? The firmware for the CG3000D is awful and not fun to use. I did change some hidden settings on the RgServices.asp page (I'll attach a screenshot). I've also heard that router/modem combos are awful and that I should go back and just ask for a stand-alone modem. Any input is helpful. All screenshots: http://imgur.com/a/JX6qu#0

    Read the article

  • Performance experiences for running Windows 7 on a Thin-Client?

    - by Peter Bernier
    Has anyone else tried installing Windows 7 on thin-client hardware? I'd be very interested to hear about other people's experiences and what sort of hardware tweaks they had to do to get it to work. (Yes, I realize this is completely unsupported... half the fun of playing with machines and beta/RC versions is trying out unsupported scenarios. :) ) I managed to get Windows 7 installed on a modified Wyse 9450 thin client, and while the performance isn't great, it is usable, particularly as an RDP workstation. Before installing 7, I added another 256 MB of RAM (512 MB total), a 60 GB laptop hard drive, and a PCI video card to the 9450 (the video card in order to increase the supported screen resolution). I basically did this to see whether or not it was possible to get 7 installed on such minimal hardware, and to see what the performance would be. For a 550 MHz processor, I was reasonably impressed. I've been using the machine for RDP for the last couple of days and it actually seems slightly snappier than the default Windows XP Embedded install (although this is more likely the result of the extra hardware). I'll be running some more tests later on, as I'm curious to see in particular whether the streaming video performance will improve. I'd love to hear about anyone's experiences getting 7 to work on extremely low-powered hardware, particularly any sort of tweaks you've discovered to increase performance.

    Read the article

  • Vim: How to Install Airline?

    - by MunkyCheez
    I'm just getting started with Vim. It's a fun experience, but I've found it to be kind of overwhelming. I'm trying to get this plugin installed, vim-airline, but I'm having a lot of trouble. The Installation section on the GitHub page simply states: "copy all of the files into your ~/.vim directory". Presumably, this means download the .zip, extract it, and copy all of those files into ~/.vim/. I did this, but Vim just starts up like normal, and running :help airline just gives: "Sorry, no help for airline". I assume that this means it isn't getting installed. Also, the status bar remains the same. I'm new to Vim and would really like to get this working. Thanks in advance! EDIT: I also tried putting the files into /usr/share/vim/vim73/. No dice. EDIT 2: I ran :helptags ~/.vim/doc and now the help page displays when I type :help airline, but I'm still not getting the plugin itself (the status bar). Vim looks the same, but it can now display the help page.
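
    Since the help now loads, the files are probably in the right place; a common catch is that Vim hides the statusline entirely when only one window is open, and the statusline is exactly where airline draws. A hedged sketch of the usual fix, assuming a stock setup:

        " in ~/.vimrc: always show the statusline, even with a single window
        set laststatus=2

    For future plugins, a plugin manager (pathogen, Vundle, and the like) handles the copying and the :helptags step automatically, which avoids repeating the manual dance.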

    Read the article

  • What to do about old MacBook

    - by John
    I have a white MacBook running Mac OS X 10.5.8. I accidentally dropped it, and about an inch of the rightmost side of the screen is blacked out. I can't see the clock or the Trash can because of this. I checked how much it would cost to repair, and the cost of repairing is a flat rate of $300. I thought that was kind of expensive, like a third of the cost of the MacBook itself. Accidentally dropping a notebook is something that will happen sooner or later, so I think getting a Lenovo is the best option. I ended up getting a 15-inch MacBook Pro, but do you think it's worth the money to fix my old MacBook? I even saw some used MacBooks available on eBay for about $400. Anyway, I don't know what to do with it. I do like its screen and keyboard better than the MacBook Pro's, because the screen is less glarey and the keyboard is more crisp and fun to type on, and it's more wieldy when I'm using it while lying down.

    Read the article

  • How does Google's geolocation service work?

    - by heaosax
    I don't use Google Maps much, but I was using it today and I clicked the "Show my location" button for the first time. Firefox asked for permission, I clicked "Share my location", and Google Maps showed my location pretty accurately. But how does this system really work? I mean, how can Google know where I live? I connect to the internet with a VPN, so my public IP is not from my country but from Sweden. I also use Linux, and I change the MAC address of my wireless device, but Google still shows my location. I know I can disable this feature by setting geo.enabled to false in Firefox's about:config, but I am curious how Google can know where I live even when I don't have a real MAC address and my IP is not from my real country. Basically, I'd like to know if this feature works only because of code that exists in Chrome and Firefox (which spies on my system)? I am worried about anyone knowing where I live. I mean... where is my privacy? Part of the fun of the internet is remaining anonymous.
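
    For what it's worth, the usual mechanism involves neither your own MAC address nor your IP: once permission is granted, the browser scans for nearby Wi-Fi access points and sends their BSSIDs and signal strengths to a location service, which looks them up in a database of surveyed access points. Changing your NIC's MAC or tunnelling your IP does not change which neighbours' access points your card can hear. To keep the switch off permanently rather than per-session, a line like this in the Firefox profile's user.js sets the same pref as about:config:

        // ~/.mozilla/firefox/<profile>/user.js
        user_pref("geo.enabled", false);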

    Read the article

  • Puppet: is it ok to "force" certname when you expect to shuffle nodes around?

    - by Luke404
    We all know (good example on SF) that Puppet hostname detection can be... fun. At our company (and I guess we're not alone in this) we usually pre-configure servers at our offices and test them before bringing the gear to a remote datacenter to rack them. Of course the reverse DNS will change when doing that, even if we don't change the actual hostname of the system. We're slowly drafting our Puppet setup and I'd like to be sure those moves won't create problems. My idea is to explicitly configure the desired full FQDN of the system as certname in puppet.conf at server provision time, before the very first Puppet run (see the sketch after this list). My process would look something like this:

    1. basic OS installation
    2. basic network configuration, enough to reach the internet and resolve DNS
    3. install Puppet and set up certname
    4. start Puppet and let it manage the whole configuration
    5. test, fix problems in config (via Puppet), re-test, and so on...
    6. manually stop Puppet
    7. set up the new network configuration for the datacenter network
    8. move the machine to the DC
    9. turn it on; Puppet should automatically start and keep on doing its job

    The process is supported by detecting the environment in Puppet's manifests (e.g. based on subnet, like they do at Wikimedia) and modifying configuration as needed (e.g. resolv.conf contents appropriate for each network). Each node's certname will never change for the whole system life cycle. Is there any problem with this approach? Could it be improved?
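
    A minimal sketch of step 3's puppet.conf, with placeholder names (the exact section varies by Puppet version, but certname in [main] applies to all run modes):

        # /etc/puppet/puppet.conf, written by the provisioning step
        [main]
            certname = web01.example.com
            server   = puppet.example.com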

    Read the article

  • 3-4 old computers = general purpose cluster?

    - by TheLQ
    I have 3 old computers lying around right now: a P2 at 800 MHz(?), an Intel Mobile at 1.6 GHz, and an AMD Athlon XP 2000+ at 1.66 GHz, plus (might not use this) a P4 at 2.7 GHz, all with 512 MB of RAM, and am considering clustering them together for fun/knowledge. They would be running an undecided version of Linux, preferably Ubuntu-based. The issue is what I want to use it for: general computing and occasional video encoding. By general computing I mean day-to-day tasks. However, I'm not sure if every program started by a single X session is going to exist on the same machine, defeating the purpose of such a system. Will programs be split up, or exist on one machine? Second, assuming this is running 100baseT Ethernet (not sure if the PCI slot itself could handle Gigabit), would the speed of having a program exist over the network be an issue? It seems that the constant fetching of various things in RAM would be quite slow. And before you say "buy another computer!", that's not the point of this question. I'm asking whether it would be usable, not necessarily practical. And yes I know, this is going to be extremely power-hungry.

    Read the article

  • Scaling a node.js application, nginx as a base server, but varnish or redis for caching?

    - by AntelopeSalad
    I'm not close to being well versed in using nginx or Varnish, but this is my setup at the moment. I have a node.js server running which serves either JSON, HTML templates, or socket.io events. Then I have nginx running in front of node, serving all static content (CSS, JS, etc.). At this point I would like to cache both static content and dynamic content in memory. It's my understanding that Varnish can cache static content quite well, and it wouldn't require touching my application code. I also think it's capable of caching dynamic content too, but there cannot be any cookie headers? I do use Redis at the moment for holding session data, and I planned to use it for other things in the future, like keeping track of non-crucial but fun stats. I just have no idea how I should handle caching everything on the site. I think it comes down to these options, but there might be more:

    1. Throw Varnish in front of nginx and let Varnish cache static pages, no app code changes (see the VCL sketch below). Redis would cache dynamic DB calls, which would require modifying my app code.
    2. Ignore Varnish completely and let Redis handle caching everything, then use one of the nginx-redis modules. I'm not sure if this would require a lot of app code changes (for the static files).

    I'm not having any luck finding benchmarks that compare nginx+varnish vs nginx+redis, and I'm too inexperienced to bench it myself (high chances of my configs being awful). I'm basically looking for the solution that would be the most efficient in terms of req/sec and scalable in the future (throw new hardware at the problem + maybe adjust some values in a config = new servers up and running semi-painlessly).
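
    A hedged sketch of option 1's Varnish side, to make the cookie question concrete: requests carrying cookies are normally passed straight to the backend, so static assets are made cacheable by stripping the cookie header (ports and extensions below are placeholders, VCL 3.x-era syntax):

        backend nginx { .host = "127.0.0.1"; .port = "8080"; }

        sub vcl_recv {
            # cookies make requests uncacheable by default, so drop them
            # for anything that is plainly a static asset
            if (req.url ~ "\.(css|js|png|jpg|gif|ico|woff)(\?.*)?$") {
                unset req.http.Cookie;
            }
        }

    Dynamic JSON stays uncached here and falls through to node, which is where the Redis layer from option 1 would come in.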

    Read the article

  • Is it possible to ack nagios alerts from the terminal on a remote workstation?

    - by cat pants
    I have Nagios alerts set up to come through Jabber with an HTTP link to ack. Is it possible to write a script I can run from a terminal on a remote workstation that takes the hostname as a parameter and acks the alert?

        ./ack hostname

    The benefit, while seemingly mundane, is threefold. First, take HTTP load off Nagios. Second, Nagios HTTP pages can take 10-20 seconds to load, so I want to save time there. Third, avoiding slower use of mouse + web interface + Firefox/other annoyingly slow browser. Ideally, I would like a script bound to a keyboard shortcut that simply acks the most recent alert. Finally, I want to take the inputs from a joystick, buttons and whatnot, and connect one to a big red button bound to the script so I can just ack the most recent Nagios alert by hitting the button lol. (It would be rad too if the button had a screen on the enclosure that showed the text of the alert getting acked lol.) Make fun of me all you want, but this is actually something that would be useful to me. If I can save five seconds per alert, and I get 200 alerts per day I need to ack, that's saving me 15 minutes a day. And isn't the whole point of the sysadmin to automate what can be automated? Thanks!
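
    One common way to do this is to bypass the CGIs entirely and write to Nagios's external command file. A sketch, assuming the stock command-file path and SSH access to the monitoring host (both are assumptions to adjust):

        #!/bin/sh
        # ./ack hostname -- acknowledge the current problem on a host
        HOST="$1"
        NOW=$(date +%s)
        CMD="[$NOW] ACKNOWLEDGE_HOST_PROBLEM;$HOST;1;1;1;$USER;acked from terminal"
        ssh nagios-server "echo '$CMD' >> /usr/local/nagios/var/rw/nagios.cmd"

    ACKNOWLEDGE_SVC_PROBLEM works the same way for service alerts, and "most recent alert" could be derived by tailing nagios.log on the same host.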

    Read the article

  • How do I fix a permissions problem with MS Distributed File System?

    - by charlesrandall
    I have a new Windows 7 computer that is supposed to have access to particular network resources on a Distributed File System. However, despite all permissions being set correctly, I have consistent trouble accessing them. For instance, I'm supposed to be able to reach \\company.org\main\subdir. All the permissions have been granted, only when I try to access it by name, it tells me I don't have permission to access \\company.org\main. This is where the fun starts. If I ping company.org, get the IP, and replace company.org with the IP, I can then access \\IP\main\subdir without any problems at all. However, we have a ton of scripts and build tools that access the network resource by name. My sysadmin has found that using MS's dfsutil.exe, we can fix it temporarily using this sequence of commands:

        C:\> dfsutil.exe /pktinfo
        C:\> dfsutil.exe /PktFlush
        C:\> dfsutil.exe /SpcFlush
        C:\> dfsutil.exe /PurgeMupCache
        C:\> dfsutil.exe /pktinfo

    After that, everything is great... until I reboot, or until some unspecified time later when suddenly I don't have access to \\company.org\main anymore. I'm hoping to find a more permanent solution than waiting for it to break and running a batch file.
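
    Until the root cause is found, the flush sequence can at least run itself; a sketch of the stopgap as a batch file wired to a scheduled task (paths and task options here are assumptions):

        @echo off
        rem flush-dfs.cmd -- clear the DFS referral and MUP caches (run elevated)
        dfsutil /PktFlush
        dfsutil /SpcFlush
        dfsutil /PurgeMupCache

        rem register once, so it runs at every boot:
        rem schtasks /create /tn FlushDFS /tr C:\flush-dfs.cmd /sc onstart /rl highest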

    Read the article

  • Dealing with external processes

    - by Jesse Aldridge
    I've been working on a GUI app that needs to manage external processes. Working with external processes leads to a lot of issues that can make a programmer's life difficult. I feel like maintenance on this app is taking an unacceptably long time. I've been trying to list the things that make working with external processes difficult so that I can come up with ways of mitigating the pain. This kind of turned into a rant, which I thought I'd post here in order to get some feedback and to provide some guidance to anybody thinking about sailing into these very murky waters. Here's what I've got so far:

    1. Output from the child can get mixed up with output from the parent. This can make both outputs misleading and hard to read. It can be hard to tell what came from where. It becomes harder to figure out what's going on when things are asynchronous. Here's a contrived example:

        import textwrap, os, time
        from subprocess import Popen

        test_path = 'test_file.py'
        with open(test_path, 'w') as file:
            file.write(textwrap.dedent('''
                import time
                for i in range(3):
                    print 'Hello %i' % i
                    time.sleep(1)'''))
        proc = Popen('python -B "%s"' % test_path)
        for i in range(3):
            print 'Hello %i' % i
            time.sleep(1)
        os.remove(test_path)

    I guess I could have the child process write its output to a file. But it can be annoying to have to open up a file every time I want to see the result of a print statement. If I have code for the child process I could add a label, something like print 'child: Hello %i', but it can be annoying to do that for every print. And it adds some noise to the output. And of course I can't do it if I don't have access to the code. I could manually manage the process output. But then you open up a huge can of worms with threads and polling and stuff like that. A simple solution is to treat processes like synchronous functions, that is, no further code executes until the process completes. In other words, make the process block. But that doesn't work if you're building a GUI app. Which brings me to the next problem...

    2. Blocking processes cause the GUI to become unresponsive.

        import textwrap, sys, os
        from subprocess import Popen
        from PyQt4.QtGui import *
        from PyQt4.QtCore import *

        test_path = 'test_file.py'
        with open(test_path, 'w') as file:
            file.write(textwrap.dedent('''
                import time
                for i in range(3):
                    print 'Hello %i' % i
                    time.sleep(1)'''))
        app = QApplication(sys.argv)
        button = QPushButton('Launch process')
        def launch_proc():
            # Can't move the window until the process completes
            proc = Popen('python -B "%s"' % test_path)
            proc.communicate()
        button.connect(button, SIGNAL('clicked()'), launch_proc)
        button.show()
        app.exec_()
        os.remove(test_path)

    Qt provides a process wrapper of its own called QProcess which can help with this. You can connect functions to signals to capture output relatively easily. This is what I'm currently using. But I'm finding that all these signals behave suspiciously like goto statements and can lead to spaghetti code. I think I want to get sort-of blocking behavior by having the 'finished' signal from QProcess call a function containing all the code that comes after the process call. I think that should work, but I'm still a bit fuzzy on the details...

    3. Stack traces get interrupted when you go from the child process back to the parent process. If a normal function screws up, you get a nice complete stack trace with filenames and line numbers. If a subprocess screws up, you'll be lucky if you get any output at all. You end up having to do a lot more detective work every time something goes wrong.

    4. Speaking of which, output has a way of disappearing when dealing with external processes. Like if you run something via the Windows 'cmd' command, the console will pop up, execute the code, and then disappear before you have a chance to see the output. You have to pass the /k flag to make it stick around. Similar issues seem to crop up all the time. I suppose both problems 3 and 4 have the same root cause: no exception handling. Exception handling is meant to be used with functions; it doesn't work with processes. Maybe there's some way to get something like exception handling for processes? I guess that's what stderr is for? But dealing with two different streams can be annoying in itself. Maybe I should look into this more...

    5. Processes can hang and stick around in the background without you realizing it. So you end up yelling at your computer cuz it's going so slow, until you finally bring up your task manager and see 30 instances of the same process hanging out in the background. Also, hanging background processes can interfere with other instances of the process in various fun ways, such as causing permissions errors by holding a handle to a file or something like that. It seems like an easy solution would be to have the parent process kill the child process on exit if the child didn't close itself. But if the parent process crashes, cleanup code might not get called and the child can be left hanging. Also, if the parent waits for the child to complete, and the child is in an infinite loop or something, you can end up with two hanging processes. This problem can tie in to problem 2 for extra fun, causing your GUI to stop responding entirely and forcing you to kill everything with the task manager.

    6. F***ing quotes. Parameters often need to be passed to processes. This is a headache in itself, especially if you're dealing with file paths. Say... 'C:/My Documents/whatever/'. If you don't have quotes, the string will often be split at the space and interpreted as two arguments. If you need nested quotes you can use ' and ". But if you need more than two layers of quotes, you have to do some nasty escaping, for example: "cmd /k 'python \'path 1\' \'path 2\''". A good solution to this problem is passing parameters as a list rather than as a single string; subprocess allows you to do this (see the sketch after this list).

    7. Can't easily return data from a subprocess. You can use stdout, of course. But what if you want to throw a print in there for debugging purposes? That's gonna screw up the parent if it's expecting output formatted a certain way. In functions you can print one string and return another, and everything works just fine.

    8. Obscure command-line flags and a crappy terminal-based help system. These are problems I often run into when using OS-level apps. Like the /k flag I mentioned, for holding a cmd window open: whose idea was that? Unix apps don't tend to be much friendlier in this regard. Hopefully you can use Google or StackOverflow to find the answer you need. But if not, you've got a lot of boring reading and frustrating trial and error to do.

    9. External factors. This one's kind of fuzzy. But when you leave the relatively sheltered harbor of your own scripts to deal with external processes, you find yourself having to deal with the "outside world" to a much greater extent. And that's a scary place. All sorts of things can go wrong. Just to give a random example: the cwd in which a process is run can modify its behavior.

    There are probably other issues, but those are the ones I've written down so far. Any other snags you'd like to add? Any suggestions for dealing with these problems?
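
    A small sketch of the list-argument fix from point 6: when subprocess gets a list instead of a shell string, each element arrives in the child as one argv entry, spaces and all, with no quote gymnastics (the paths are the made-up ones from the rant):

        import subprocess

        # each list element is passed through as a single argument
        subprocess.call(['python', 'C:/My Documents/whatever/script.py',
                         'path 1', 'path 2'])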

    Read the article

  • Proxmox VE cluster not working

    - by JJ56
    I have been using a Proxmox VE 2.0 cluster (2 servers) for months. Today, I had the "bright idea" to install a GUI on one of the servers. I used tasksel and selected the graphical desktop environment (GNOME). When I rebooted the server, neither of the servers in the cluster could see the other (each shows a red light by the other server in the sidebar, no statistics). The VMs on each server are working fine individually. pvecm status on the broken server shows: cman_tool: cannot open connection to cman, is it running? Running it on the other server outputs a lot of lines (here: http://pastebin.com/HpQfUHTU), but I assume the important bit is: Expected votes: 2, Total votes: 1. Trying pvecm delnode (othernode) on either server outputs: cluster not ready - no quorum? Any suggestions as to how I can fix this? Thanks in advance!
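
    A couple of hedged first steps for this situation, with service names as on PVE 2.x (it is also worth checking the apt log in case the GNOME install pulled in a conflicting network manager):

        # on the broken node: try to bring the cluster stack back up
        /etc/init.d/cman start
        /etc/init.d/pve-cluster restart
        pvecm status          # should now reach cman

        # if the healthy node stays stuck at "Total votes: 1", quorum can be
        # relaxed temporarily on it so the cluster filesystem becomes writable:
        pvecm expected 1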

    Read the article

  • Random Windows application crashes on Windows Server Hyper-V Core 2012

    - by Marlamin
    We're having some issues with our Hyper-V Core 2012 R2 installation on an HP DL360 G8. We have an identical server with Hyper-V Core 2012 (not R2) that does not have these issues. When logging off from the physical server or via remote desktop, we sometimes get this error:

        Configure-SMRemoting.exe - Application Error: The application was unable to start correctly (0xc0000142). Click OK to close the application.

    We've also once or twice seen a "memory could not be read" error mentioning LoginUI.exe (another Windows app in System32), but have been unable to get an exact description. It's rather worrying to get such errors on a fresh install of Hyper-V 2012 R2. Is this even anything to worry about? Things we've done:

    - Memtest86+: no memory errors
    - Checksummed the crashing file against the one in the verified-correct ISO: files match
    - Upgraded the server firmware to the latest for all present hardware: no visible change
    - Remade the RAID 5 array: no change
    - Reinstalled a few times: no change
    - Reinstalled without applying Windows updates afterwards: no change

    Read the article

  • Zimbra vs. Kerio Connect

    - by rahum
    We've been a Kerio partner for years and have deployed a lot of Kerio Connect. Now we're looking at beginning to host groupware for our clients and are wrestling over the right backend. Kerio Connect is fantastic, but we have a few gripes:

    - Kerio is often weeks behind on releases to keep up with major Microsoft and Apple updates that break functionality, or at least impede new Apple/Microsoft features in their desktop clients.
    - We're worried that Kerio has a historical habit of corrupting Apple's Open Directory; we know this happened years ago, and have suspected it happening earlier this year.
    - There are always one or two features that are buggy in Kerio, in every release. These usually aren't dealbreakers, just annoyances.
    - Kerio does not have any kind of HA feature set.

    How does Zimbra compare? What are your gripes? Thanks! noam

    Read the article

  • Prevent a partition on a USB drive auto-mounting in Linux

    - by nomount
    On Linux (GNOME desktop), how do you prevent one of the partitions on an external USB drive from auto-mounting when the drive is attached to the machine? I don't just want to prevent the Nautilus window from popping up -- I want that partition not to mount. Fiddling with /etc/fstab is not acceptable, as this is a removable drive that is attached to different machines. I seem to remember that you create a hidden file in the root of the file system, but I can't remember what it's called. Something like:

        touch /media/usbdisk/.no-mount

    How do you actually make this work?
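
    A hedged sketch of the mechanism GNOME actually consults (udisks): a per-machine udev rule keyed to the partition's filesystem UUID. This fights the "different machines" requirement, since it lives on each machine rather than on the drive, but it is the documented knob; the UUID below is a placeholder:

        # /etc/udev/rules.d/99-no-automount.rules
        # hide the partition from udisks so GNOME never auto-mounts it
        ENV{ID_FS_UUID}=="1234-ABCD", ENV{UDISKS_PRESENTATION_HIDE}="1"

    On newer udisks2-based systems the variable is UDISKS_IGNORE rather than UDISKS_PRESENTATION_HIDE.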

    Read the article

  • 401 - Unauthorized On Server 2008 R2 IIS 7.5

    - by mxmissile
    I have a web application deployed to a Server 2008 IIS 7.5 box. From remote it gives this error: "401 - Unauthorized: Access is denied due to invalid credentials." (Remote here means desktops on the same LAN.) I have tried several remote clients using different browsers (IE, FF, and Chrome), all with the same result. Hitting the application from the desktop of the server itself works flawlessly. The application is using Anonymous Authentication, and is written in ASP.NET 4.0 using the MVC framework. Sysinternals Procmon returns these two results for each request: FAST IO DISALLOWED and PATH NOT FOUND. I have two other MVC apps running fine on the same server. I have checked the security on the folders and they all match. The app runs fine on a Server 2008 IIS 7.0 box. Nothing related shows up in the event log on the server. Pulling my hair out here, any troubleshooting tips?

    Read the article

  • iPhone - Open excel from SSRS 2008

    - by milesmcgehee
    We're currently having a problem with our SSRS 2008 server sending Excel reports to users with iPhones. They get the email with no problems, but when the XLS file is opened on the phone, it returns an error: "Invalid format." Everyone else can open the report with no problems (email/BlackBerry). The odd thing is, I can drag the file from the message to my desktop, open it, save it, and then email it again, and it opens just fine on the phone. Does anyone know of a hotfix that can be applied to the SSRS server to create the XLS files correctly? Or something I can change to make this work? I know we can send all the attachments as PDF, but I'd like to keep them XLS if at all possible.

    Read the article

  • Can't install Windows 7 on Acer Aspire M1100

    - by r0ca
    When I install Windows 7, everything goes smoothly, but as soon as it's done and Windows needs to reboot for the last time before reaching the desktop, the computer gets stuck at "Verify DMI Pool Data............." and then, nothing. I changed the CMOS battery, and I tried many different setups in the BIOS, even loading the default settings... nothing worked. The HDD light is not flickering anymore; no HDD activity. CTRL-ALT-DEL doesn't work. It's just impossible to load Windows 7. I tried Windows XP and it works fine. I also tried the Acer (Future Shop) recovery CD, and I get a hexadecimal error message stating the install cannot continue. Is there a BIOS flash app somewhere, or a fix I can apply, to get Windows 7 Ultimate installed on my computer? Any takers?

    Read the article

  • Cisco VPN Client connects but unable to ping or RDP

    - by bryanroth
    I'm a developer and don't have much networking expertise, so bear with me. I'm using the Cisco VPN Client 5.0.02.0090 to connect to my work's VPN so that I can RDP into my work computer. Once connected, I can't ping anything on the local network, thus I am unable to access my work's network. This used to work about two weeks ago but abruptly stopped working today. However, I have the Cisco VPN Client installed on my laptop, and from there I am able to ping and RDP into my work computer. Both my desktop and laptop computers are connected to the same router at home. I have tried the following so far:

    - Rebooted my computer
    - Reinstalled the VPN client
    - Updated NIC drivers
    - Disabled the firewall
    - Opened up ports 500, 4500, and 10000

    Any help would be much appreciated. Thanks!

    Read the article

  • Is there a way to make a zenity dialog modal?

    - by math
    How can I make them modal? By modal I mean: the dialog should block the desktop so the user has only two options: either cancel the dialog or enter text into it. (I want this basically because new windows might pop up and steal focus, and additionally because other programs need to be able to access configuration files inside that container.) Background: I want to ask for a passphrase after login for an encfs container, so either a pass is entered, or the dialog is cancelled. Note: This is not a duplicate of "modal dialog popup alarm", as I am interested specifically in a solution for Zenity dialogs.
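
    Zenity can't lock the whole desktop by itself, but a common workaround is to loop until a passphrase is supplied or the user explicitly gives up; a sketch under that assumption (container paths are placeholders):

        #!/bin/sh
        # ask until we get a passphrase or the user confirms cancellation
        while :; do
            PASS=$(zenity --entry --hide-text --title="encfs" \
                          --text="Passphrase for the container:") && break
            zenity --question --text="No passphrase entered. Try again?" || exit 1
        done
        # encfs -S (--stdinpass) reads the passphrase from stdin
        printf '%s\n' "$PASS" | encfs -S "$HOME/.crypt" "$HOME/crypt"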

    Read the article

  • How to change RDS licensing mode from 'per user/device' to 'Remote control for administrators' on Windows 2008 R2

    - by Prashant Mandhare
    We have installed Windows 2008 R2 Enterprise on a Dell server. This server is placed remotely in a data center, and only an administrator is going to access it for maintenance purposes. No multi-user or client remote access is needed. Now, during the Remote Desktop Services role installation, our network admin accidentally selected the 'per user/device' licensing mode, because of which the 120-day free trial period is now ticking. Since only an administrator is going to access this server remotely, we need the 'Remote control for administrators' licensing mode (like Windows 2003) on it. How can we change the licensing mode from 'per user/device' to 'Remote control for administrators' on a 2008 server? Also, will it be possible to make this change remotely using an RDP session itself, or do I need to change it from the physical console (in case remote access gets disabled during the switch)?
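
    For what it's worth, on 2008 R2 the 2003-style administration mode isn't a licensing-mode toggle: it's what you get when the RD Session Host role is absent, so one hedged sketch of the fix is simply removing that role (this assumes the built-in two-session admin RDP is sufficient, and RDP will drop during the reboot):

        # elevated PowerShell on the server
        Import-Module ServerManager
        Remove-WindowsFeature RDS-RD-Server -Restart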

    Read the article

  • How to migrate from Natara DayNotez for Pocket PC / Windows Mobile

    - by piggymouse
    I've been using DayNotez as my notes manager since the old Palm PDA days. When I moved to Windows Mobile, I installed DayNotez there and migrated from the Palm version. Now I wish to move away from DayNotez altogether (I currently consider Evernote a decent cross-platform tool). The problem is, DayNotez doesn't let me export the notes (unless I want to transfer them one by one, which is a pain). Natara offers an export tool for Windows, but it only works with Palm HotSync (as it reads from the backed-up PDB file). DayNotez Desktop for Windows stores its local DB under the "My Documents\Natara\DayNotez\" directory in a file named "[device name] DayNotez.dnz". A quick look inside the file turns up the string "Standard Jet DB" near the beginning, but I couldn't open it as a regular JET/MDB file. Any help would be greatly appreciated.
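
    Given that "Standard Jet DB" signature, one thing worth trying is pointing mdbtools (or Access itself, after renaming the copy to .mdb) at the file; if the Jet header is wrapped or offset inside the .dnz this will fail, so it's only a sketch, and the table name is a guess:

        cp "[device name] DayNotez.dnz" daynotez.mdb
        mdb-tables daynotez.mdb                      # list tables, if it parses
        mdb-export daynotez.mdb Notes > notes.csv    # "Notes" is a guessed table name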

    Read the article
