Search Results

Search found 6869 results on 275 pages for 'tek systems'.

  • Extend Linux Desktop to another X Windows Display

    - by unknown (google)
    Hello, I am a long-time Linux user of Xinerama and other technologies for extending a desktop to multiple monitors. However, when I travel with my laptop I miss the multi-monitor support I enjoy at home. Recently I acquired a second laptop for a low price. Both laptops are running Fedora (versions 10 and 11 respectively). I use GNOME as my primary desktop environment. I know about Synergy; I use it all the time to control the screen of other Windows / Linux systems I use. What I would like to know is: can I sit my primary and secondary laptops together and achieve a Xinerama-like extended desktop environment? Ideally I would like to start a GNOME session on my primary laptop, then start an X desktop on my secondary laptop and extend my primary laptop's desktop onto it. I would like to be able to move windows from the primary desktop to the secondary laptop's desktop. Would I need to use Synergy together with some other bit of X technology to do this, or is there X technology that will do all of this for me? I am familiar with X's ability to display applications remotely. I am also familiar with NoMachine's NX.

  • email attachments [closed]

    - by Alan Doolan
    My company currently uses software on a local machine that takes an email from the email server, extracts the attachment, renames it and then adds it to a folder on a webserver using FTP. This works well, but they are now asking if it can be done 'in the cloud', or what they really mean, not locally. Is there anything that would do this on the server itself? I should clarify a bit. The attachments are various reports that are being sent to different email addresses (mostly Google corporate and free accounts). We need the reports to be in a folder on a webserver so that internal pages can take the information in the reports (CSV) and use it on the webpages or add it to a separate database. The key part is that the files need to be in the particular folders. Though it does work to have a computer running software that takes the files, renames them to the required name and uploads them to the folder, it relies too heavily on one computer working all the time, and that is not something we can depend on at this point. I'll be honest: I'm a web developer and not strong with server systems past my particular standard requirements, so this is beyond me. Though yes, I am aware that my boss is not 100% sure what 'cloud' means but likes the word.
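    For what it's worth, the moving parts here are simple enough that the whole job fits in a short script which could run on the webserver itself (e.g. from cron every few minutes), removing the dependency on one office PC. Below is a minimal sketch in Python, assuming the mailboxes are reachable over IMAP and the webserver accepts FTP uploads; the hostnames, credentials, target folder and renaming rule are all placeholders, not anything from the question.

        # Minimal sketch: poll a mailbox over IMAP, pull the CSV attachments,
        # rename them and upload them to a folder on the webserver over FTP.
        # Hostnames, credentials, folder and the renaming rule are placeholders.
        import email
        import imaplib
        import io
        from ftplib import FTP

        IMAP_HOST = "imap.gmail.com"      # assumption: Google-hosted mailbox
        FTP_HOST = "www.example.com"      # assumption: the webserver
        TARGET_DIR = "/reports"           # folder the internal pages read from

        def fetch_and_upload():
            imap = imaplib.IMAP4_SSL(IMAP_HOST)
            imap.login("reports@example.com", "app-password")
            imap.select("INBOX")
            _, data = imap.search(None, "UNSEEN")        # only new messages
            for num in data[0].split():
                _, msg_data = imap.fetch(num, "(RFC822)")
                msg = email.message_from_bytes(msg_data[0][1])
                for part in msg.walk():
                    filename = part.get_filename()
                    if not filename or not filename.lower().endswith(".csv"):
                        continue
                    payload = part.get_payload(decode=True)
                    new_name = "daily-report.csv"        # placeholder renaming rule
                    ftp = FTP(FTP_HOST)
                    ftp.login("ftpuser", "ftppass")
                    ftp.cwd(TARGET_DIR)
                    ftp.storbinary("STOR " + new_name, io.BytesIO(payload))
                    ftp.quit()
            imap.logout()

        if __name__ == "__main__":
            fetch_and_upload()

    If the webserver can run this itself, the "not local" requirement is largely met: the script is the only moving piece and it lives next to the folder it writes into.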

  • INACCESSIBLE_BOOT_DEVICE after installing Linux on same drive

    - by kdgregory
    History: my PC was configured with two drives: an 80 GB on IDE 0 primary that was running Win2K, and a 320 GB on IDE 0 secondary that was running Linux (Ubuntu). I decided to pull the 80 GB drive out of the system, so I dd'd the entire 80 GB drive (/dev/sda) onto the 320 (/dev/sdb) -- this included the MBR and partition table. Then I pulled the drive, plugged the 320 into IDE 0 primary, and rebooted. The Windows partition worked at this point. Then I installed Ubuntu into the remaining space on the 320. It works. However, when I try to boot into Windows, I get a BSOD with the following message:

        *** STOP: 0x0000007B (0x89055030,0xC000014F,0x00000000,0x00000000) INACCESSIBLE_BOOT_DEVICE

    Before the BSOD I see the Win2K splash screen, and it claims to be "starting windows" for a couple of seconds -- so it appears that the first-stage boot loader is working as expected. Ditto when I try booting in Safe Mode. After reading the Microsoft KB article, I booted into the recovery console and tried running chkdsk /r. It refused to run, claiming that the drive was corrupted (sorry, didn't write down the exact error message). However, I can mount the drive from Linux and access all files. And for what it's worth, when I scan the drive using the Linux "Disk Utility" (this is Ubuntu, the menus don't show real program names), it claims the drive is clean. The KB article mentioned that boot.ini could be the problem, so here it is:

        timeout=10
        default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
        [operating systems]
        multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows 2000 Professional" /fastdetect

    Any pointers on what to do next?

  • Pull network or power? (for containing a rooted server)

    - by Aleksandr Levchuk
    When a server gets rooted (e.g. a situation like this), one of the first things you may decide to do is containment. Some security specialists advise not to enter remediation immediately and to keep the server online until forensics are completed. That advice is usually aimed at APTs; it's different if you have the occasional script-kiddie breach. However, you may decide to remediate (fix things) early, and one of the steps in remediation is containment of the server. Quoting from Robert Moir's answer: "disconnect the victim from its muggers". A server can be contained by pulling the network cable or the power cable. Which method is better, taking into consideration the need for:

    - protecting victims from further damage
    - executing successful forensics
    - (possibly) protecting valuable data on the server

    Edit: five assumptions. Assuming:

    - You detected early: 24 hours.
    - You want to recover early: 3 days of one systems admin on the job (forensics and recovery).
    - The server is not a virtual machine or a container able to take a snapshot capturing the contents of the server's memory.
    - You decide not to attempt prosecution.
    - You suspect that the attacker may be using some form of software (possibly sophisticated) and this software is still running on the server.

  • How can I determine the IP addresses allocated by DHCP on a router that I'm connected to?

    - by user234831
    This "router" is not a typical situation. I'm using my phone as a hotspot and can only configure a select number of DHCP options. I can manage the limit on how many devices/clients can use my phone as a hotspot; I have to select from a radio-button list with the options 2, 3, 4, 5, or 8. I can specify the DHCP starting IP address; in this case, it begins at 192.168.6.106. When I'm connected via Wi-Fi to my phone, an ipconfig /all command shows me that the default gateway is 192.168.6.1 and my IPv4 address is 192.168.1.148. I have the luxury of connecting another device to the phone, and that device was assigned 192.168.1.121. I've tried connecting to 192.168.6.1, hoping for some sort of router setup page that I'm used to seeing, but there is no such thing, or maybe it's just a matter of incompatible operating systems. In summary, the "router" (phone) has an IP address of 192.168.6.1 and a DHCP server that begins at 192.168.6.106 and allows up to 8 connections. Normally, I would assume a range of 192.168.6.106 - 192.168.6.113, but connected clients are showing otherwise. How can I figure out which IP addresses are set aside by DHCP for clients?
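    Since the phone exposes no admin page, one blunt way to see which addresses are actually in use is to sweep the nearby ranges from a connected client and then read the ARP cache; anything that answered had to have been handed an address. A rough sketch in Python, assuming a Unix-style ping; the subnets swept and the ping flags are assumptions to adjust (e.g. "-n 1 -w 500" on Windows).

        # Rough sketch: ping every address in the candidate subnets, then dump
        # the ARP cache to list the hosts that replied. Assumes a Unix-style
        # ping; both the subnets and the flags are assumptions to adjust.
        import subprocess

        SUBNETS = ["192.168.6.", "192.168.1."]   # both ranges seen in ipconfig

        for subnet in SUBNETS:
            for host in range(1, 255):
                ip = subnet + str(host)
                # one probe, one-second timeout, output discarded
                subprocess.run(["ping", "-c", "1", "-W", "1", ip],
                               stdout=subprocess.DEVNULL,
                               stderr=subprocess.DEVNULL)

        # every host that replied now has an entry in the ARP cache
        print(subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout)

    It is slow run serially (a few minutes per /24), but it answers the practical question of which addresses connected clients really received, regardless of what the starting-address setting suggests.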

  • SQL Server Replication Backup

    - by user18039
    Hi, we have a new system that runs on SQL Server 2008 R2 64-bit. There is a primary online transaction processing (OLTP) database that accepts a high volume of updates from several thousand point-of-sale systems at stores around the country. In order to protect this vital function, I have decided to introduce a dedicated reporting database server, from which multiple users will run some pretty complex reports. I realise that there were a number of choices, but I decided to use transactional replication as the mechanism for copying the data from the OLTP database to the new reporting database -- one-way replication. The solution has worked well in test. I'm now being asked what changes need to be made to the backup policy to cover the architectural changes. I have read pages such as MSDN: Strategies for Backing Up and Restoring Snapshot and Transactional Replication, but I think these are overkill for my solution. In fact, my current thinking is that we simply need to continue making backups of the OLTP data and logs. If the reporting db or any of the system replication (e.g. distribution) databases fail, then it's no big deal -- we can clear everything down and re-create the replication. I realise that taking a complete snapshot of the OLTP would be time consuming (approx 5 hours), but I'd be more relaxed about this than trying to restore backups of the various data and log files in the correct sequence. My view is that the complex strategies set out in the MSDN article would only be the way to go for a more complex replication solution than I have, e.g. if there were multiple subscribers with two-way replication. Would you agree? I'd be grateful for any advice. Many thanks, Rob.

  • Windows XP cannot access admin share

    - by barlop
    I have three systems, A, B and Compx, all on XP, but computers A and B have an issue with Compx. Compx has network shares I can access: I can do \\compx and get some of them. But I cannot access the admin share c$ -- \\compx\c$ gives a login prompt, and I can't get any user/pass to work. I looked at permissions but don't see an issue. Nevertheless, I will describe what I see in the permissions. In the Security tab of C: I have Administrators, CREATOR OWNER, Everyone, bob, SYSTEM and Users (6 entries). "CREATOR OWNER" has nothing ticked, and I can't seem to change that: if I tick it so they all get ticked and click Apply, after about 2.5 minutes it reports the operation complete and they are all unticked again. Though this isn't the root of the problem, since I get the same in the share I can access. In Advanced, I see those same 6 entries, all with "Full control" and all applying to "This folder, subfolders and files", except CREATOR OWNER, which is "Subfolders and files only". I looked at the properties of the share I can see; it looks the same, except that in Security > Advanced, double-clicking any of them shows the boxes all ticked but greyed out. That's not the problem though, since I can access that share. So, I don't know what the problem is.

  • How to track things that SHOULD happen, but might not have

    - by Kamiel Wanrooij
    I am running into a couple of issues with some applications we've deployed and maintain. I have the feeling we have approached this with some anti-patterns up to now, but I would like to see how to make this more flexible and stable. In one situation, we have a server at a client which pushes data to us to parse every night (yes, Windows Task Scheduler). This is highly unstable, however, so about once a month this doesn't happen because of reasons outside our control. This heavily impacts our business, since we run with stale data in that situation. In another scenario we have a lot of background job processes that should be running. We already keep them up using bluepill ( http://www.github.com/arya/bluepill ), but obviously restarts happen, both automatically and manually, and people forget things or systems mess up. What I would like to track is events that should occur or things that should exist (the existence of a process, the execution of a program, or the creation/age of a file), and be alerted when they don't happen or aren't there. We develop most things in Ruby on Rails, use New Relic, Bluepill and Munin, and run on Ubuntu. I've been toying around with counting ps aux | grep processname | wc -l in Munin scripts, or capturing the age of a file and raising alerts over 24-26 hours, stuff like that. Is there better tooling to track things that should happen, and raise alerts if they don't? P.S. I know some things are suboptimal, like manually having to define bluepill for applications and then forgetting to do so. The same goes for the push-based approach of the first application: a dedicated daemon on the client side that we control, and whose connection to us we can track, might be a much better solution.
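    As a stopgap before (or alongside) dedicated tooling, the two checks described above (is a process running, is a file fresh) fit in a tiny script that cron or a Munin plugin wrapper can run, treating any output or a non-zero exit as an alert. A minimal sketch in Python; the process names, file path and the 26-hour threshold are examples only, not anything from the question.

        # Minimal "should have happened" check: verify that expected processes
        # exist and that an expected file is fresh, and report anything missing
        # or stale. Names, path and threshold are examples only.
        import os
        import subprocess
        import time

        EXPECTED_PROCESSES = ["delayed_job", "nightly_import"]   # example names
        EXPECTED_FILE = "/data/incoming/latest_dump.csv"         # example path
        MAX_AGE_HOURS = 26

        def process_running(name):
            # pgrep -f exits 0 if at least one matching process exists
            return subprocess.run(["pgrep", "-f", name],
                                  stdout=subprocess.DEVNULL).returncode == 0

        def file_fresh(path, max_age_hours):
            return os.path.exists(path) and \
                (time.time() - os.path.getmtime(path)) < max_age_hours * 3600

        problems = ["process missing: " + name
                    for name in EXPECTED_PROCESSES if not process_running(name)]
        if not file_fresh(EXPECTED_FILE, MAX_AGE_HOURS):
            problems.append("file missing or stale: " + EXPECTED_FILE)

        # cron mail, a Munin plugin wrapper or any alerting hook can key off
        # the output and the exit status
        if problems:
            print("\n".join(problems))
            raise SystemExit(1)

    The underlying idea is a dead man's switch: alert on the absence of an expected event rather than on the presence of an error.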

  • Replicated filesystem and EC2 MySQL

    - by El Yobo
    I'm currently investigating migrating our infrastructure over to run on Amazon's EC2 and am trying to figure out the best way to set up a MySQL service. I'm leaning towards running our own MySQL instances rather than going with Amazon's RDS, but am still considering the best approach for performance and cost on the instance itself. In order to have persistent data, the MySQL data needs to be on an EBS volume (with some form of striped RAID, e.g. RAID0 or RAID10) to improve persistence. However, EBS IO is limited by the network interface (gigabit, so a theoretical maximum of 128 MB/s), while the ephemeral volumes have no such problem. I did see a suggestion for running two MySQL servers on an instance, with a master running on the ephemeral disk (which we would also RAID) and a slave storing changes to an EBS volume, but this has some additional overhead and complexity (two servers). What I was imagining is using some form of replicated file system such that I could have:

    - a filesystem on top of a RAID0 of ephemeral volumes to maximise performance
    - all changes from the above immediately replicated to another RAID1 volume backed by multiple EBS volumes to ensure no data loss

    The advantages of this would be:

    - best possible IO performance for the DB server, with no network delay in IO
    - decreased IO on the EBS volumes (as all read IO will be done on the ephemeral volumes), so decreased cost
    - good data security, as it's backed onto redundant EBS volumes

    However, I haven't seen an appropriate system to replicate all changes from one volume to the other; is there a filesystem, or any other approach, which will do this? The distributed file systems, e.g. GlusterFS, DRBD etc., seem to focus on replicating disks between servers; can they be set up to do what I'm interested in here? I also haven't seen anything about others taking this approach. Do I have a solution in need of a problem here (i.e. is performance good enough, so this whole idea is redundant)? Is there some flaw in the plan?

  • How to Monitor Network in Medium-Sized Company?

    - by Kyle Lowry
    I work at a medium-sized company (100+ employees). An issue that has been cropping up is network performance, internet access in particular. We have about 70 or more computers, a mix of Mac OS X and Windows XP & 7 machines. We have several servers (Exchange server, PC file servers, MS SQL, BlackBerry, FTP, Mac server, etc.). There are four main switches, a SonicWall firewall, and probably a couple of routers in the server room, with a dozen or so more scattered around the building. The network structure has grown organically over a number of years, and, as far as I know, there really isn't a monitoring solution in place. When we experience network issues (slow connections, dropped packets, and so on), our general solution is to power cycle some hardware or go around to each employee and ask them if they are uploading/downloading any large files. This is really inefficient and time consuming, and it does not allow us to monitor the network and tackle potential problems proactively. I would like to find a solution that would allow me to monitor network usage company-wide in real time, ideally with detail going down to the individual computer. Given the hodgepodge of equipment and operating systems, what would be the best way to set up some kind of monitoring solution? Hardware, software, restructuring our network architecture?

  • Ubuntu inside VirtualBox is slow

    - by Kapsh
    I am running an Ubuntu instance on VirtualBox inside XP. Here are the details:

    - Host: Windows XP Pro
    - Guest: Ubuntu 8.10
    - Total RAM: 3 GB
    - RAM for VM: 1 GB
    - Total video memory: 128 MB
    - Video memory for VM: 40 MB
    - Hard drive: 200 GB
    - Hard drive for VM: 30 GB
    - Processor: 2.80 GHz Core Duo

    The problem is that whenever I am inside the virtual machine, things seem much slower in general. For example, Firefox and Eclipse take longer to load, dragging windows shows a lag, etc. I have tried running Ubuntu before (not inside a VM) and it seemed fantastically fast, so I am disappointed to have to deal with this situation. But I need access to the XP partition without having to reboot, hence the attempt. I am surprised by the perceived slowness, since the whole world seems to be doing virtualization and I cannot imagine everyone knowingly works on slow systems. My question is: is there something I should be doing to boost performance? Am I doing something wrong? This is my home machine and I am not sure if this is the right forum to ask. Thanks.

  • Fedora 19 no longer bootable

    - by Parisa
    I had Fedora dual-booted with Windows on my laptop for a while, but with a Windows refresh GRUB was gone and my system booted directly into Windows. I booted Fedora using my system's boot options and, following this tutorial: https://fedoraproject.org/wiki/GRUB_2 , I reinstalled GRUB 2, but then my system booted into an empty grub prompt (just "grub"). So I found the drive containing vmlinuz and the initramfs (completely sure about their location and versions) and tried to boot it manually, but after the boot command it said:

        no suitable video mode found, booting in blind mode

    and nothing happened. Such a tragedy... I have already tried the live disk's rescue system. Funny, but the troubleshooting options don't appear on my laptop while they do on my desktop PC; I can't even get to a boot prompt on my Lenovo IdeaPad Z400 laptop. I also tried EasyBCD so that maybe I could boot it from Windows, but it comes up with this error: missing AutoNeoGrub().mbr. Now the grub prompt has been removed as well (don't know why), and it's really hard for me to reinstall my dearly customized Fedora. If anyone knows a way to help me boot it again, or reinstall it keeping my files and installations, I really need it. Thanks. PS: I have already tried Boot-Repair Disk, but it asks me to enable the repo containing grub-efi on my Fedora so it can reinstall GRUB 2 and fix the boot for me (how could I?).

  • Outbound ports to allow through firewall - core requirements

    - by dunxd
    This question was asked before, but in a rather general way; I'm asking more specifically based on my current requirements. We have a number of remote offices made up of a bunch of PCs and an ASA 5505 which is used as firewall and VPN termination point. In the offices we share the internet connection with one or more other organisations over whom we have very little control, aside from the config on the ASAs. For a bunch of reasons I'd like to lock down these ASA 5505s to only allow outbound traffic to ports used by applications we know we need. I'm putting together a standard config to roll out to all the ASAs, and if we need to open up ports for the other orgs we can do it on request, but I want to leave open the most commonly required ports so we can get up and running without waiting on other folks' technical staff to get back to us. I plan to allow the following TCP ports to support email and web access, which I know everyone will need:

    - POP3 (110 and 995)
    - HTTP (80 and 443)
    - IMAP4 (143 and 993)
    - SMTP (25 and 465)

    The question really is: what other ports do I need to leave open to allow for "normal" working? I've seen UDP port 53 for DNS as one. Are there any others that would be worth opening up? Just to note, I'll also be setting up monitoring systems to keep an eye on the ports we do allow; any of the above could be misused, of course. We'll also back all this up with signed agreements. But I'm aiming for a technical solution where I don't have to start out with the full requirements of everyone we share connections with. See also: outbound ports that are always open
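    Once a config like this is rolled out, it is handy to verify from inside an office that exactly the intended outbound ports get through. A small sketch in Python; the test hostnames are placeholders, and you would point each port at a server you actually expect to reach on it.

        # Small sketch: attempt outbound TCP connections on the allowed ports
        # and report which ones get through the firewall. The hostnames are
        # placeholders; substitute servers you actually expect to reach.
        import socket

        CHECKS = [
            ("pop.example.com", 110), ("pop.example.com", 995),
            ("www.example.com", 80), ("www.example.com", 443),
            ("imap.example.com", 143), ("imap.example.com", 993),
            ("smtp.example.com", 25), ("smtp.example.com", 465),
            ("8.8.8.8", 53),   # DNS over TCP as a rough reachability check
        ]

        for host, port in CHECKS:
            try:
                with socket.create_connection((host, port), timeout=5):
                    print("open    {0}:{1}".format(host, port))
            except OSError as exc:
                print("blocked {0}:{1} ({2})".format(host, port, exc))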

  • New AD user request form and workflow

    - by user66390
    I'm wondering if anyone is providing a solid solution for creating new network user account request forms, and attaching workflows to them to automate account creation. I'm currently investigating a number of options, but am surprised that such a ubiquitous task hasn't been solved a dozen times over and thoroughly documented, or at least isn't integrated into current off-the-shelf change management and ticketing systems. Ideally, I'd like our current ticketing system, ServiceDesk+, to present a standard 'New User' form to department heads, which they can fill in with the required new user details. This triggers a workflow that submits the request as a ticket that can be reviewed and actioned. Actioning the ticket triggers a workflow that creates a user in AD with the details provided, and notifies the department head upon completion. All told, a pretty standard requirement that I'm sure most organizations have. What are other people doing to accomplish this? Edit: I should add that I'm looking more for "supported" methods. As it is, I've submitted a number of scripted solutions, none of which has met with manager approval.
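    On the "creates a user in AD" step specifically: if nothing off-the-shelf gets approved, that part at least is scriptable from whatever workflow engine fires the ticket action. A rough sketch using the Python ldap3 library is below; the DC hostname, OU, credentials and attribute values are placeholders, and setting the initial password / enabling the account needs an LDAPS bind (or a tool like PowerShell's New-ADUser) and is left out here.

        # Rough sketch: create the user object in AD once a "New User" ticket
        # has been approved. Hostname, OU, credentials and attributes are
        # placeholders; password setting / enabling the account needs LDAPS
        # and is intentionally omitted.
        from ldap3 import ALL, Connection, Server

        server = Server("dc01.example.com", get_info=ALL)
        conn = Connection(server, user="EXAMPLE\\svc_provisioning",
                          password="placeholder", auto_bind=True)

        dn = "CN=Jane Doe,OU=Staff,DC=example,DC=com"
        attributes = {
            "givenName": "Jane",
            "sn": "Doe",
            "displayName": "Jane Doe",
            "sAMAccountName": "jdoe",
            "userPrincipalName": "jdoe@example.com",
        }
        conn.add(dn,
                 object_class=["top", "person", "organizationalPerson", "user"],
                 attributes=attributes)
        print(conn.result)   # expect description == 'success'
        conn.unbind()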

  • Java in a browser on Ubuntu

    - by Peter
    So I am running an old version of Ubuntu, and all of a sudden my ability to run Java in a browser has stopped working; it no longer executes properly. Someone else set this up for me, but I believe it's running the IcedTea plugin 6 and JRE 6. What would be the best way to troubleshoot this? I suppose I could try installing IcedTea 7 and JRE 7 (Sun or OpenJDK) and see if that works. Also, in general, is installing IcedTea, installing a Java runtime environment, and then symlinking something into the plugins directory of your browser the best way to get Java working on most Linux systems? Unfortunately, there are many guides all over the internet offering totally different solutions, but this is probably the most common approach I've seen (not sure how well it works). My main use is downloading and executing .jnlp files (browsers should open them automatically), although I've seen people using "javaws" to run the file outside of a browser. Is this something I could consider doing? I can't seem to find javaws packaged for my version of Ubuntu.

  • Are there tools available for trimming PDF margins?

    - by Charles Duffy
    I have an ebook in PDF format that I'm trying to read on a Kindle. Unfortunately, the page headers and footers have some content (the page number and copyright info, respectively) preventing the device from scaling the actual text to match its usable viewing area, thus leaving the actual content too small to read. Various tools are available which will trim off whitespace, but the Kindle already does this; my goal, by contrast, is to remove printed matter outside of a defined bounding box, and the only tool I've found for the purpose is moderately expensive commercial software. I could probably generate a mask in Inkscape, split out the individual pages using pdftk, apply the mask to each page individually (outputting to PostScript), and recombine the numerous PostScript files into a single PDF. However, those decode/re-encode steps would be pretty unfortunate in terms of document size; something able to operate with a bit more finesse would be ideal. I have all major operating systems handy (Windows, several modern Linux distros, a Mac, etc.) so solutions don't need to be constrained by platform. Suggestions? (I've reported the issue to the author, who mentioned it to his editor, who hasn't done anything about it over the course of more than a month, making the zero-work approach evidently nonproductive.)
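    For the record, the "defined bounding box" approach is only a few lines with a PDF library, and it does not re-encode the page contents, so the file size barely changes. A sketch using the Python pypdf library is below; the margins are placeholders you would measure from one page, and note that this shrinks the page boxes (hiding the header and footer) rather than deleting the text objects underneath, so it depends on the reader honouring those boxes.

        # Sketch: clip every page to a fixed bounding box by shrinking the
        # CropBox and MediaBox; the content streams are untouched, so there is
        # no decode/re-encode. Margins are placeholders in PDF points (1/72").
        from pypdf import PdfReader, PdfWriter

        LEFT, BOTTOM, RIGHT, TOP = 36, 50, 36, 60   # placeholder trim margins

        reader = PdfReader("ebook.pdf")
        writer = PdfWriter()

        for page in reader.pages:
            llx, lly = page.mediabox.lower_left
            urx, ury = page.mediabox.upper_right
            box = (float(llx) + LEFT, float(lly) + BOTTOM,
                   float(urx) - RIGHT, float(ury) - TOP)
            page.cropbox.lower_left = box[:2]
            page.cropbox.upper_right = box[2:]
            page.mediabox.lower_left = box[:2]    # some renderers ignore CropBox
            page.mediabox.upper_right = box[2:]
            writer.add_page(page)

        with open("ebook-trimmed.pdf", "wb") as out:
            writer.write(out)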

  • Passive mode FTP file download hangs from specific machine

    - by chiptuned
    I have a server, an AWS instance, that just cannot download files from one specific FTP server. I can connect to the FTP server fine and run some commands, but when I request a file it just hangs. Here is the debug output of the basic Linux ftp client after login:

        ---> SYST
        215 UNIX Type: Apache FtpServer
        Remote system type is UNIX.
        ftp> get outgoing/catalog.gz catalog.gz
        local: catalog.gz remote: outgoing/catalog.gz
        ---> PASV
        227 Entering Passive Mode (64,156,167,125,135,191)
        ---> RETR outgoing/catalog.gz
        150 File status okay; about to open data connection.

    That's it. Then it just sits there and nothing transfers. I have verified that a data connection is made, but the client gets no data:

        $ ss -nt dst 64.156.167.125
        State   Recv-Q   Send-Q   Local Address:Port       Peer Address:Port
        ESTAB   0        0        10.185.147.150:41190     64.156.167.125:21
        ESTAB   0        0        10.185.147.150:48871     64.156.167.125:48557

    The FTP server is not in my control, and downloads from other FTP servers in passive mode have worked. Active mode does not work as the system is behind a firewall. Every FTP client I've tried has the same problem. The download works from other systems, even from other AWS instances I have with the same security group (though not necessarily the same distro or config). I understand it may be some issue on the server side, but I want to know what it is about my particular machine that makes the transfer hang, when it works on every other machine I can get my hands on. Please let me know what the culprit on the client side could be, or ideas on what else to look at.
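    To take the interactive client out of the picture when comparing the working and failing machines, the same passive-mode RETR can be reproduced in a few lines with a hard timeout, which at least shows whether the data socket ever delivers a single byte or stalls immediately. A sketch in Python; the credentials are placeholders, while the host and path are taken from the session above.

        # Reproduce the hanging passive-mode download with a timeout so the
        # failure mode (zero bytes vs. a stalled transfer) becomes visible.
        # Credentials are placeholders; host and path are from the session above.
        import ftplib

        received = 0

        def on_chunk(chunk):
            global received
            received += len(chunk)

        ftp = ftplib.FTP()
        ftp.set_debuglevel(2)                 # echo the control-channel dialogue
        ftp.connect("64.156.167.125", 21, timeout=30)
        ftp.login("user", "password")         # placeholder credentials
        ftp.set_pasv(True)                    # passive mode, as in the ftp client
        try:
            ftp.retrbinary("RETR outgoing/catalog.gz", on_chunk)
            print("transfer finished,", received, "bytes")
        except ftplib.all_errors as exc:
            print("failed after", received, "bytes:", exc)
        finally:
            ftp.close()

    Running this on a working instance and on the failing one, and comparing where each stops (no data at all, a partial transfer, or a timeout), can help narrow down whether the data path is dropping large packets (e.g. an MTU issue) or something is filtering the high ports.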

  • Scripting around the lack of user:password@domain URL functionality in JScript/IE

    - by Idiomatic
    I currently have a JScript that runs a PHP script on a server for me, dead simple. But... I want to be at least somewhat secure, so I set up a login. Now, if I use the regular user:password@domain system it won't work (IE decided it was a security issue). And if I let IE just remember the password, then it pops up a security message confirming my login every time (which kills the point of the button). So I need a way to make the security message go away. Options I can see:

    - Lower security settings. To be honest I'm fine with that, but nothing seems to make the prompt go away (there might be some registry setting to change).
    - Find a fix for JScript that will let me use a password in the URL. There used to be a registry edit that worked for older systems which allowed IE to use URL passwords (not working on my 64-bit Windows 7 setup), though I doubt that would have helped JScript anyway (since it outright crashes).
    - Use an app other than IE. In which case I'm not sure how to go about it; I want it to be responsive and invisible, so IE was a good choice. It is near instant.
    - Use XMLHttpRequest instead of IE directly? It may even be faster, but I've no idea if it would help or just hit the same error.
    - Use a completely different approach. Maybe some app that can script website browsing.

    For reference, the current script:

        var args = {};
        var objIEA = new ActiveXObject("InternetExplorer.Application");
        if( WScript.Arguments.Item(0) == "pause" ){
            objIEA.navigate("http://domain/index.html?pause");
        }
        if( WScript.Arguments.Item(0) == "next" ){
            objIEA.navigate("http://domain/index.html?next");
        }
        objIEA.visible = false;
        while(objIEA.readyState != 4) {}
        objIEA.quit();

  • How best to handle end user notification in the event of system failure incl. email?

    - by BrianLy
    I've been asked to research ways of handling end user notifications when systems such as email are experiencing problems. Perhaps an example will make this a little clearer. We have a number of sites in different countries. Recently email was impacted at one of the sites, but it could have been a complete network outage. Information was provided by phone to local IT managers at the site but onward communication was slower than some would have liked. It seems like almost everyone at the site has a personal mobile phone which could receive text messages, and perhaps access a remote website with postings on the situation. However managing and supporting a system to text people on these relatively infrequent occasions would be very costly to do internally. What are other people doing to handle situations like this? Some things I've thought of include: Database of phone numbers to text. Seems costly and not very easy to maintain for an already stretched IT group. Is there an external service that would let you do this policies? Send voicemail message to all phones on site. Maintain an external website. This would not work in all situations (network failure), and there is a limit on the amount of info that can be posted externally. A site outage could be sensitive information in some situations. How could the site be password protected? Maybe OpenId/Facebook connect would work. Use a site like Yammer.com which is publicly accessible but only by people with a company email address. Anyone using this for IT outage notifications? To me it looks like there is no clear answer, and that there are solutions for some subsets of users. To be comprehensive a number of solutions would need to be combined. Any additional thoughts or recommendations? What worked or didn't work for your organization?

  • What causes a switch port to receive data not destined for it?

    - by user1693454
    We are having an intermittent fault which is affecting one of our control systems on one of our HP ProCurve switches. For some reason, this PLC (10 Mbit port, 192.168.6.56), which is attached directly to the HP switch, intermittently starts receiving data which is not destined for it. The data is being sent from a Thecus NAS with the latest firmware (192.168.6.218) to a physical IBM server running Win2003R2 and SAP (192.168.6.225). The problem is not tied to this one destination server; it has hit other physical servers in the past too, but the source is always the Thecus NAS. I am using a monitor port to wireshark what is going in/out of the PLC. Normally there would be about 1 MB in/out per 2 or 3 minutes, only a server asking the state of the coils. When the problem occurs, there is a flood of data being put onto the PLC line; in this captured instance, about 67 MB in less than a minute. Because of this, there is no way that the PLC can be queried, as the port is effectively DoSed, in turn killing part of our factory. I know that having production on the same VLAN as IT is not a good idea; I agree, but it cannot be changed at the moment (that will have to wait 3 months), and the problem has only started happening in the last 3 months. Here is a screen cap of one of the packets being sent from the Thecus NAS, captured from the PLC port on the HP switch, and there are over 700 of these in this one 1024 KB capture file. If anyone has any idea of what could be going on, some help would be greatly appreciated. If you need to know anything more, let me know! Cheers!

  • Trouble with Remote Desktop pulling through printers: drive redirection works, and the ports get created, but not the printers

    - by Windex
    I've run out of things to look into. All the support documents have been gone through and still provide no resolution. I've checked the service permissions (sc sdshow spooler); they all match up with other systems and with what is shown in the support documents. I'm nearly positive the issue can't be permissions anyway, as the software requires all users to be administrators, so all users are local administrators. (I haven't looked into why yet, but it's on the list; I was just recently brought into this team and we've put procedures in place for quick recovery.) We've applied hotfixes relating to RDS and printing, though I'm not sure which ones they were. I've combed through Group Policy and nowhere is printer redirection disabled. It's set up with all default values regarding the use and redirection of printers, and a quick install of W2k8 R2 shows that it works by default. This dev install was joined to the same domain, placed in the same OU, shows the same policies applied, etc. The server generates all the correct redirected ports, but no printers are created. It will also redirect drives without issue, which would seem to rule out the user-mode service that handles redirects being broken. No related events are logged, and there are no events from the TerminalServices-Printer source. There were local printers set up; I didn't think it would matter, but as I was running out of ideas I tried deleting them all, with no change. The TS was configured for the software it will be running before we checked out printer redirection, so the other team, responsible for setting up new servers, wants to find a fix instead of reloading a new server. I'm not sure where or what else to look. Any ideas?

  • Putting and configuring grub on an external drive

    - by HisDudeness
    I want to put a bunch of minor emergency operating systems (such as GParted Live, DSL, Puppy Linux and so on) on a partitioned USB pen drive, with a dedicated GRUB boot loader on it, which I want to start when I select the drive in the BIOS to boot. The problem is that when writing the GRUB boot entries I must specify where the kernel and initial ramdisk files are located, but the USB drive can end up with a different device name depending on when I plugged it in and whether other external drives were mounted. So, how can I write entries which automatically refer to the drive GRUB is installed on, without having to specify absolute paths that might change (I mean, like (hd1,msdos1) or /dev/sdb1)? And, while we are at it, can I have GRUB working on a device without an operating system on it to which it can refer? I mean, I want to run the command sudo grub-install /dev/sdb from the LMDE system I'm on right now, but I won't have LMDE on my pen drive. Is that a problem? And if I install GRUB on another device, will I keep the GRUB I have right now on my HDD?

  • Cancelling Windows 7 shutdown disables power button

    - by Jens
    Normally, pressing the power button once initiates a shut-down in Windows 7. If any programs are still running that will not quit (e.g. waiting for a dialogue response), Windows overlays the screen with a dialogue allowing the user to cancel the shut-down. I've just noticed that on two different systems here, using this cancel option disables the shut-down via power button. The power button can still be used to kill the system by holding it for a few seconds, and using the Start menu button to shut the PC down still works as well. Steps to reproduce:

    - Open Notepad and type a few characters. Do not save.
    - Press the computer's power button.
    - Wait until the dark screen appears, then press cancel.
    - Press the power button again. Notice how nothing happens.

    What is the reason for this behaviour, and can it be disabled so that the PC always tries to shut down when the power button is pressed?

  • upstart scripts: run a task after networking goes up

    - by The Journeyman geek
    I'm working on moving my current server setup to newer hardware, and migrating from Ubuntu Karmic Koala to Lucid Lynx. Currently I'm using gw6c (compiled from the gogo6 website, as opposed to the version from the repositories) to get IPv6 access for my systems. On the Karmic Koala system, I used a simple init.d script to get the IPv6 client started:

        #! /bin/sh
        /usr/local/gw6c/bin/gw6c -f /usr/local/gw6c/bin/gw6c.conf

    I figured that since this runs at any runlevel, it should translate to:

        respawn
        console none
        start on startup
        stop on shutdown
        script
            exec /usr/local/gw6c/bin/gw6c -f /usr/local/gw6c/bin/gw6c.conf
            emit free6_ipv6_started
        end script

    This works fine when started from initctl, but it apparently fails to start when the machine boots; its status is stop/waiting. It works fine (and respawns) when started otherwise. Any ideas on where I'm going wrong, and what would be the appropriate 'start on' argument? EDIT: the exact error is 'init: gw6c main process (xxx) ended with status 8', followed by the process respawning, with xxx being a PID, I suspect. I'm also half suspecting this is because gw6c starts before networking does, and I need my eth0 up before gw6c is.

  • Install Peppermint OS three on Asus EeePC

    - by Kithoth
    I just got a new Asus EeePC R051CX. Out of the box, the installed OS is Ubuntu 12.04 LTS, but I am trying to install Peppermint OS Three (as the single boot OS). The problem: once on the live CD (well, live USB stick...), I'm in trouble in both of the following situations:

    - Try Peppermint OS Live: the first thing I get is a message reading "The system is running in low-graphics mode. Your screen, graphics card, and input device settings could not be detected correctly. You will need to configure these yourself." I can only press "return" to accept, then I get a list of 4 options answering the question "What would you like to do?". But I simply can't do anything at this point, except switch to console mode or reboot (the keyboard/mouse controls don't allow me to do anything else).
    - Install Peppermint OS: something I really don't understand... it launches the Ubuntu recovery media (which was already installed when I received the device)! Also, at the bottom it says "ERROR: This recovery media only functions on Ubuntu systems." All I can do is quit (that is, reboot).

    One last important thing that comes to mind: this stick worked just fine on the other computers I've tried it on. I really hope someone can show me the light; a friend of mine told me how cool this OS is for EeePCs. I don't want to give up! Thanks.

    Edit: I finally managed to install Peppermint, though not by understanding why I couldn't do it the logical way. Instead, I reinstalled Ubuntu myself (erasing the factory install). Then I could simply boot from my live USB and perform a fresh install of Peppermint. So I still don't know how and why the problem above occurred.
