Search Results

Search found 2606 results on 105 pages for 'combination'.


  • virtual web folder served by PHP script

    - by Martin
    I am trying to configure my Apache to display (virtual) pages like mywebpage.com/something1, mywebpage.com/something2, and mywebpage.com/folder/something3. I would like these "somethingX" and "folder" folders to be only virtual, not physical directories. For a start it would be great to send all requests to mywebpage to one PHP script which would somehow receive the original path information (there is some SERVER array, as far as I know) and call the necessary PHP functions (so far I use addresses like mywebpage.com/index.php?page=blabla&otherparameters=values...). Is that possible? I am struggling with different combinations; currently I have the following file in /etc/apache2/conf.d/something.conf (not working, of course). What is the correct way to proceed? Thanks.

        <Location /myweb>
          SetHandler my-handler
          Action my-handler /srv/www/htdocs/myweb/product.php virtual
        </Location>

    My pages are in /srv/www/htdocs/myweb. I tried with Location, with Directory, with Action and SetHandler, with AddHandler... ;-) Some configurations were ignored, some caused "object not found" with nothing relevant in the error log.
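    A common approach to this kind of setup (an illustrative sketch, not taken from the post) is a mod_rewrite front controller: every request that does not match an existing file is routed to one PHP script, which then reads the original path from $_SERVER. The file name and the assumption that mod_rewrite is enabled are both hypothetical here:

        # /etc/apache2/conf.d/myweb.conf (sketch; assumes mod_rewrite is enabled)
        <Directory /srv/www/htdocs/myweb>
            RewriteEngine On
            # Pass every request that is not an existing file through index.php
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^ index.php [QSA,L]
        </Directory>

    index.php can then inspect $_SERVER['REQUEST_URI'] (or PATH_INFO) to decide which virtual page to render, while the query string is preserved by the QSA flag.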

    Read the article

  • High availability virtual machines

    - by Jeremy
    I've been reading a lot about high availability virtualization, either via Hyper-V or VMware. In that context, high availability essentially means that the VM is hosted by a cluster of physical servers (nodes), so if one of the physical servers goes down, the VM can still be served by other physical servers. So far so good: the physical cluster and the VM itself are highly available. However, the service actually being provided, let's say SQL Server, MSDTC, or any other service, is provided by the VM image and the virtualized operating system. So I imagine that there is still a point of failure at the virtual layer that isn't accounted for. Something could happen within the virtual machine itself that the physical cluster cannot account for, correct? In that instance the physical failover cluster (Hyper-V) or VMware host cannot fail over, because the issue is not with one of the servers in the physical cluster - failing over a physical node would not do any good. Does this necessitate building a virtual failover cluster on top of the physical one, or is this not necessary? Alternatively, I suppose you could skip the physical clustering and just cluster at the virtual layer (child-based failover clustering), because that should still survive a physical failure. See the image below showing parent-based (left), child-based (right) and a combination (center). Is parent-based as far as you need to go, or is child-based more appropriate?

    Read the article

  • Formatting pwd/ls for use with scp

    - by eumiro
    I have two terminal windows with bash. One is local on the client computer; the other has an SSH session on the server. On the server, I am in a directory looking at a file I would like to copy to my client using scp from the client. On the server I see:

        user@server:/path$ ls filename
        filename

    I can now type scp in the client shell, select and copy the user@server:/path from the server shell and paste it into the client shell, then type a slash, copy and paste the filename, and append a dot to get:

        user@client:~$ scp user@server:/path/filename .

    to scp a file from the server to the client. Now I am searching for a command on the server that would work like this:

        user@server:/path$ special_ls filename
        user@server:/path/filename

    which would give me the complete scp-ready string to copy & paste into the client shell. Something in the form echo $USER@$HOSTNAME:${pwd}/$filename, working with relative/absolute paths. Is there any such command/switch combination or do I have to hack it myself? Thank you very much.
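    A small helper along the lines the poster sketches - a hypothetical shell function (not from the post, name invented) that prints an scp-ready path for each argument, resolving relative names against the current directory; it assumes GNU realpath is available on the server:

        # Hypothetical helper; add to ~/.bashrc on the server
        scppath() {
            local f
            for f in "$@"; do
                # realpath turns relative names into absolute paths
                echo "$USER@$(hostname -f):$(realpath -- "$f")"
            done
        }

    Running "scppath filename" on the server would print something like user@server:/path/filename, ready to paste after "scp " on the client.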

    Read the article

  • 7-Zip many files from different folders?

    - by mafutrct
    I would like to add a large number of files with different names from different folders to a single 7-Zip archive using 7za.exe. This should be simple, but it turned out to be a major pain. I created a file that contains the paths (7za a out.7z @list.txt), but once there are too many (~100) files, it fails. Apparently the content of the argument file is pushed onto the command line buffer [Edit: this was likely misinformation on my part; either way it was not the reason], which is far too small (the number of files to add is more than one million). Splitting the process up by adding the files one by one is not feasible due to the way 7za works: when adding the next file, it creates a copy of the archive, adds the file to the copy and finally replaces the original. This is terribly slow once the archive gets to a couple hundred MB in size. So far I am using a combination of the two approaches by adding a dozen files each time in a loop, but it is an unreliable hack and still very slow. Is there a better way to do it? I tried to use 7-Zip wrapper DLLs (I'm a C# programmer), but none of them worked reliably and it was repeatedly suggested that I just use 7za instead.
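    One workaround in the spirit of the loop the poster already uses - a hypothetical PowerShell sketch (not from the post) that splits the list file into larger chunks and feeds each chunk to 7za via a temporary list file; the chunk size and file names are assumptions:

        # Sketch: add files from list.txt to out.7z in chunks of 500
        $all = Get-Content list.txt
        for ($i = 0; $i -lt $all.Count; $i += 500) {
            $end = [Math]::Min($i + 499, $all.Count - 1)
            $all[$i..$end] | Set-Content chunk.txt
            & 7za a out.7z "@chunk.txt"   # 7za still rewrites the growing archive on each call
        }

    This only reduces, rather than removes, the cost of 7za rewriting the archive on every invocation, but it avoids the list-file size limit while keeping the number of passes small.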

    Read the article

  • Centralized backup method recommendation for SMEs with various OSes

    - by Akinator
    Hi! I was wondering what, in your opinion, is the "best" method for having "everything" backed up in the following situation. We are an SME with 10 computers in total. Three of those computers are Macs; the rest are Windows (1 Vista, 4 Win7 and 2 XPs). I'm very open to what the method should be, but you should also consider the following: very limited resources, and quite "small" bandwidth (4 MBs for all (download), 0.4 MBs (upload, yep, that's it) - though this might get a little bit better). One of the main things to back up would be the mail. Considerations: all Windows computers use Outlook, mainly 2003, and there is one Mac that uses Outlook too (for Mac of course - not 2011 yet). We also have to back up the files: not a huge amount, very few very big files, very organized (by machine). What I would like is to hear your opinions as to which would be the best method (or combination of methods - preferably one, of course). We are not sure what we need and I'm open to suggestions, though an online (cloud-based) application would be great; remember that the bandwidth is unbearable. Last thing to consider is that we would like to do weekly updates (unless the method is very easy, of course). Thanks in advance!! I tried to be as specific as possible, but if anything is needed I'll gladly update - please ask for any clarification needed! Please avoid any answers like "upgrade all to Windows 7 and throw away your Macs" :) ours may not be an ideal situation, but it is what it is, and right now it would be impossible for us to change it for a number of reasons.

    Read the article

  • Is there a way to do a sector level copy/clone from one hard drive to another?

    - by irrational John
    Without going into distracting details, I'm attempting to duplicate the contents of the 500GB drive in my MacBook to another 500GB drive. But this is turning out to be an unexpected hassle because the drive contains both the OS X partition and an NTFS partition with Win 7 via Apple's Boot Camp. With the exception of Clonezilla, the tools I have looked at so far all have some limitation. The Mac tools don't want to deal with the NTFS partition. The Windows tools are totally clueless about the HFS+ partition and/or the hybrid MBR/GPT Boot Camp partitioning. Clonezilla looked like it would do what I want, but apparently I can't figure out how to use it. After doing what I thought was a sector-to-sector copy I found that only the NTFS partition had been migrated. The others were apparently empty. (And frankly, I'm not positive Clonezilla migrated the partition table correctly either.) Note: it takes over 2 hours using SATA to read/write all sectors with these drives, so I'm not up for using trial & error to home in on the right combination of Clonezilla options to use. I'm beginning to think that maybe the answer is to boot Linux (probably Ubuntu) and then use some ancient BSD command. The trouble is I don't know what command (or parameters) to use in order to do a sector-level copy from one drive to another. As far as I know the drives have the same number of sectors, so this should be trivial. Sigh.
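    For the sector-level copy itself, the classic tool on a Linux live system is dd; a hedged sketch follows (the device names are placeholders - verify them with lsblk or fdisk -l before running, since swapping if= and of= destroys the source):

        # Boot a live Linux, make sure neither disk is mounted, then:
        sudo dd if=/dev/sdX of=/dev/sdY bs=4M conv=noerror,sync
        # /dev/sdX = source (the MacBook drive), /dev/sdY = destination

    Because dd copies every sector, including the partition table, the hybrid MBR/GPT layout and all partitions come across unchanged, provided the destination drive has at least as many sectors as the source.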

    Read the article

  • Can Dovecot IMAP automatically create Maildir folders for new (virtual) users?

    - by user233441
    Hello everyone. I am learning to set up a Dovecot home IMAP server using a virtual Ubuntu 12.04 machine. My intention is eventually to have a home server that uses POP3 to take email from several addresses and remove them from my ISP's servers, while making them accessible through a home IMAP server (this is similar to the setup described at https://help.ubuntu.com/community/POP3Aggregator, which explains how to set up the system with Dovecot version 1 and is thus outdated). I intend to use the ISP's server directly when sending messages, and to BCC all sent messages to myself. I've completed the basic setup of the test server: getmail uses POP3 to fetch messages from two test email accounts and successfully delivers them to the respective Maildir-style new folders on the virtual machine. Dovecot then successfully sees these messages. I have two questions:

    1) I had to set up new, cur, and tmp folders for both of the test accounts manually to get this setup to work. Is there a way to get Dovecot to create these Maildir folders automatically when I create a new virtual user account (e.g., when I add a user and password combination to my Dovecot password file), or is it expected that I write a bash script to automate that task?

    2) I would welcome any comments you have on how this approach could be improved as I learn to set it up. My motivations with this approach are 1) to enable archiving/storing emails from several hosting providers that impose a cap on server storage, and 2) to give me somewhat greater control over email storage without requiring that I set up and administrate a mail server from scratch (which I'm not yet prepared to do) (this follows the recommendations at https://ssd.eff.org/tech/email). Thank you!
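    If the script route is chosen, a minimal sketch of a helper that adds a virtual user and creates the Maildir skeleton in one step is shown below; the passwd-file location, mail root, vmail owner and password scheme are all assumptions to adjust to the actual setup:

        #!/bin/bash
        # Usage: ./adduser.sh <username> <password>   (sketch; paths are assumptions)
        set -e
        NEWUSER="$1"; PASS="$2"
        PASSWD_FILE=/etc/dovecot/users          # dovecot passwd-file
        MAIL_ROOT=/var/mail/vhosts

        # Append the user with a hashed password (doveadm ships with Dovecot 2.x)
        echo "${NEWUSER}:$(doveadm pw -s SHA512-CRYPT -p "$PASS")" >> "$PASSWD_FILE"

        # Create the Maildir skeleton that getmail/Dovecot expect
        mkdir -p "$MAIL_ROOT/$NEWUSER/Maildir"/{new,cur,tmp}
        chown -R vmail:vmail "$MAIL_ROOT/$NEWUSER"   # assumes a vmail system user

    Depending on the Dovecot version, mail_location and permissions, the Maildir may also be created automatically on first login or delivery, which would make the mkdir step redundant.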

    Read the article

  • Linux: Encryption of a physical LVM volume doesn't imply encryption of its logical subvolumes?

    - by java.is.for.desktop
    Hello, everyone! I installed openSUSE one year ago on my notebook. I created all partitions except /boot inside an LVM partition. I enabled encryption for it during setup. The system asked me for a password on each boot afterwards. Everything seemed fine... But one day I wanted to cancel the boot process and did it with SysRq REISUB. While entering this combination, the system suddenly continued to boot without any password being entered. I had no /home and no swap, but / was mounted! I checked multiple times: it was inside an "encrypted" physical LVM volume. Later I found out that openSUSE can't encrypt / at all. There is an option to enable encryption for each logical volume, and indeed it fails for /. Later I tried Fedora. The partitioning options were misleading in the same way. I could enable "encryption" of a physical volume and of each logical subvolume, except that Fedora actually allowed me to encrypt /. Question: What's the point of setting up "encryption" for a physical LVM volume, when it doesn't imply (real) encryption of its logical subvolumes? Did I get something wrong in this whole concept?
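    For contrast, the layout that does encrypt everything is LVM on top of LUKS, where the physical volume itself is a dm-crypt device; a hedged sketch (the device name and volume sizes are assumptions, not from the post):

        # Encrypt the whole partition first, then build LVM inside it
        cryptsetup luksFormat /dev/sda2
        cryptsetup open /dev/sda2 cryptlvm
        pvcreate /dev/mapper/cryptlvm
        vgcreate vg0 /dev/mapper/cryptlvm
        lvcreate -L 20G -n root vg0      # every LV created here lives inside the encrypted PV
        lvcreate -L 4G  -n swap vg0

    With this stacking, every logical volume inherits the encryption, because all of its extents sit inside the single LUKS container; encrypting individual logical volumes on top of a plain physical volume is the reverse stacking and leaves any unencrypted LV readable.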

    Read the article

  • As an admin, what tools do you use to log what you do to your boxes?

    - by Jerry
    I am more of a Linux applications developer than an admin. Over time, I've built servers and maintained them, sometimes to offer services, mostly just to develop the applications I work on. Way back when, I would create a file in my account to keep notes on what I did on each machine, so that I could replicate that when I migrated to other machines. Nowadays, I set up a private Trac installation, install its blog plugin, and then use that to make notes of everything I install and most commands that I run, as well as their output. This provides me with a combined wiki and blog that I find very useful as a "captain's log". I do this mostly so that when I migrate to a new clean machine, I have a much easier time bringing it up. And yet, I am always amazed when I see others just install this, delete that, run this, set up this config, ... without seeming to use any way to actually note what they are doing. What do YOU do, and what tools are available? I am especially interested in the transition between maintaining a few machines for a few people and maintaining several to dozens of machines providing a real service. What are the best practices, and where can I find good resources? Thanks!
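    One low-tech complement to a wiki, offered here only as an illustrative sketch, is logging shell sessions on the box itself, either with script(1) or with a PROMPT_COMMAND hook; the log locations are assumptions:

        # Record an entire admin session, including command output
        # (create the directory first: mkdir -p ~/adminlogs)
        script -a ~/adminlogs/$(hostname)-$(date +%F).log

        # Or append every command (without its output) to a per-host log via ~/.bashrc
        export PROMPT_COMMAND='echo "$(date +%F\ %T) $(history 1)" >> ~/adminlogs/commands.log'

    Neither replaces a structured wiki, but both capture exactly what was typed, which is useful when reconstructing a machine later.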

    Read the article

  • Is there a remote desktop or VNC app for the iPad that properly handles Bluetooth keyboard shortcuts?

    - by Steve Bison
    I've tried 4 or 5 remote desktop apps, the most notable being Jump Desktop and Splashtop Streamer. Most of these remote desktop apps have some sort of on-screen keyboard for typing with the iPad, including special keys like shift, ctrl, and alt. The special keys act like "sticky keys", meaning they stay depressed until another key is pressed, to make it easier to do key combinations. Even non-standard keyboard combinations like shift+enter work, in this sticky sense. When using a Bluetooth keyboard with the remote desktop apps, both Jump and Splashtop Streamer recognize the shift + letter combination for typing capital letters. However, generically pressing shift, ctrl, or alt does not depress the sticky on-screen shift buttons or do anything at all. Only a few combinations are recognized (again like shift+letter, ctrl+C). Most combinations do not work (shift+enter, alt+tab). Even having the keyboard shortcuts work like sticky keys (press shift then enter, not both at once) would be much better than the limited functionality they have now. Is there an app, jailbreak app, or workaround that lets me use a Bluetooth keyboard properly with remote desktop on the iPad?

    Read the article

  • Problems with Net::FTP slowing down

    - by c0bra
    I'm running into an issue with using Net::FTP (latest version 2.77) to transfer files to a remote host where a process is waiting to take the file and feed it into some other system. The remote process does this every 5 minutes but ignores files that were modified in the last 0.2 seconds (that's right, 1/5th of a second). The problem is that for some reason the transfer seems to halt or slow down a bit and no data is transferred for several seconds, and during this time the process picks up the incomplete file and removes it. The weird thing is that when using the ftp binary manually, the file seems to transfer fine. I've tried messing with all of the Net::FTP switches (Active/Passive, different BlockSizes) and nothing seems to help. What's also weird is that the file seems to transfer fairly quickly at first, and occasionally a bit further into the transfer. For example, 300-500k will go up immediately, but then it slows down to where the file size is only increasing by 2,896 bytes every several seconds. It doesn't seem to happen when I try sending the file to a different remote host, but since a regular manual ftp transfer works with this host, I don't know what to think. Some combination of Net::FTP and possibly a slow or wonky connection?
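    Independent of the slowdown itself, the "picked up while incomplete" symptom is often avoided by uploading under a temporary name and renaming once the transfer finishes; a hypothetical Net::FTP sketch (host, credentials and file names are placeholders, not from the post):

        use strict;
        use warnings;
        use Net::FTP;

        my $ftp = Net::FTP->new('remote.example.com', Passive => 1)
            or die "connect failed: $@";
        $ftp->login('user', 'password') or die "login failed: ", $ftp->message;
        $ftp->binary;                              # avoid ASCII-mode translation
        $ftp->put('data.csv', 'data.csv.part')     # upload under a temporary name
            or die "put failed: ", $ftp->message;
        $ftp->rename('data.csv.part', 'data.csv')  # hand off to the pickup process in one step
            or die "rename failed: ", $ftp->message;
        $ftp->quit;

    This assumes the remote pickup process ignores the .part name; it does not explain the stall, but it removes the race against the 5-minute sweep.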

    Read the article

  • Allowing Sharepoint to relay email through Exchange

    - by dunxd
    I have written a SharePoint 2007 web part that sends a field from a form to a specified email address. I have got the form working as I require, but at present it can only send to internal email addresses. SharePoint's email functions use SMTP to send to our Exchange 2003 server, but because our Exchange server is configured to prevent relaying, if the To: address is not at a local domain, it won't deliver the mail. I don't want to open up our Exchange server to be a completely open relay. What I want is to allow my SharePoint servers to send mail to addresses outside our domain. The following seem possible:

    - Allow all mail sent from one of the SharePoint servers to be relayed
    - Allow all mail from a web application pool account to be relayed (I am not sure that the application pool authenticates to the SMTP server, though)
    - A combination of the two

    Can anyone advise on the best way of doing this? Is setting up a dedicated SMTP server on the Exchange server (not a separate physical server) the right way of going about this? EDIT: Note this is for Exchange 2003. There is a post on setting this up in Exchange 2007 which appears to have recognised the frequent requirement to do what I need. It doesn't give much detail on 2003, though. Can anyone expand?

    Read the article

  • How to modify a message so it will be 100% recognizable as spam by the Exchange junk e-mail filter

    - by user71061
    Hi! I have a sendmail server sitting in front of my Exchange server. This server filters spam with SpamAssassin (and does it incredibly well!), but it merely tags spam messages with appropriate header flags and by modifying the message subject. When such a message arrives in a user mailbox on the Exchange server, it is examined by the Exchange/Outlook junk e-mail filter, which puts most of the spam in the junk message folder. And that is my problem: most, but not all! To put all spam in the junk e-mail folder, the user has to define a rule, saying for example: "If the header contains the text 'X-Spam-Flag: YES' then move it to the 'Junk e-mail messages' folder". Fine, but it has to be done for every user (and for some users, this task is too "complicated" to do themselves :-). So I want to know how I could modify the message header in such a way that the Exchange junk e-mail filter will recognize the message as spam 100% of the time, freeing users from the task of defining their own rules. One solution could be defining such a rule via AD and group policy, but I want to avoid this due to many possible caveats: there are so many combinations of different operating systems and different Outlook versions, and to be honest, I doubt it is even possible.

    Read the article

  • Is there a way to use VirtualBox without using its resource registry?

    - by Catskul
    Summary: VirtualBox seems to want everything to be "registered", which makes it much more annoying to work with on the command line. I'm attempting to create an automated script which will create, move, start, stop, and destroy virtual machines and virtual disks. Requiring registration complicates the task for the following reasons:

    - It leaves state information around that can cause unpredicted edge cases, causing scripts to fail.
    - It creates potential namespace collisions for multiple processes creating VMs with the same name.
    - Moving/copying resources on the same machine is more complicated because references in the registry need to be updated.
    - Copying resources (disk + VM combination) to another machine requires reconfiguration once they reach their target machine, and requires the transfer of extra metadata to do the reconfiguration.
    - If something unexpectedly fails, and an unregister thus fails to happen, leftover configuration information can cause problems in subsequent runs.

    Use case: My specific use case is a continuous integration server which creates and destroys VMs and disk images, potentially with the same name, and which would require more logic to deal with the registry's statefulness.

    Imaginary example: It seems that I should just be able to do, for example (using some imaginary and/or incorrect commands):

        mkdir foobar
        customdiskimg_script ./foo/foo.vdi
        vboxmanage createvm --name "foo" --ostype Linux --basefolder ./foo/foo.xml
        vboxmanage storagectl ./foo/foo.xml --name foo --add ide
        vboxmanage storageattach --storagectl foo --medium ./foo/foo.vdi ./foo/foo.xml
        vboxmanage startvm ./foo/foo.xml

    TL;DR: Is there a way to use VirtualBox without "registering" hard disks and VMs?
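    As far as I can tell registration itself cannot be skipped, but a common workaround is to have every run register, use, and then unregister its own throwaway resources so nothing accumulates; a hedged sketch (the names, sizes and controller layout are assumptions, and createmedium is createhd on older VirtualBox versions):

        #!/bin/bash
        set -e
        NAME="ci-vm-$$"                      # unique per run to avoid name collisions
        DIR="$(pwd)/$NAME"

        VBoxManage createvm --name "$NAME" --ostype Linux --basefolder "$DIR" --register
        VBoxManage createmedium disk --filename "$DIR/$NAME.vdi" --size 8192
        VBoxManage storagectl "$NAME" --name IDE --add ide
        VBoxManage storageattach "$NAME" --storagectl IDE --port 0 --device 0 \
            --type hdd --medium "$DIR/$NAME.vdi"

        VBoxManage startvm "$NAME" --type headless
        # ... run the CI job, then tear everything down ...
        VBoxManage controlvm "$NAME" poweroff || true
        VBoxManage unregistervm "$NAME" --delete     # also removes the registered media files

    Using a per-run name keeps parallel builds from colliding in the registry, and the final unregistervm --delete leaves no stale entries behind even though registration still happens in the middle.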

    Read the article

  • Linux Firefox copy/paste issue

    - by Daniel
    I'm using Firefox 15.0.1 on Fedora 17 without running gnome or kde. The problem I'm having is that whenever I select text outside of Firefox, for instance in xterm, the middle mouse button doesn't copy it inside Firefox, for instance in a text area, but rather brings up a context menu. A related problem is that whenever I middle click inside Firefox, for instance in a text input, the middle mouse button brings up a menu when I'd like it to paste. Even if I select Paste in the menu not the selected text (from outside Firefox) gets pasted but the last selected text inside Firefox. In about:config I tried "middlemouse.paste true" and also "middlemouse.paste false" together with the add-on Auto Copy but no combination worked. A middle mouse click always brings up a context menu. But the Auto Copy did help with automatically copying selected text to the clipboard. With Auto Copy the only problem I still have is pasting by middle button. Follow up: somehow the problem solved itself. After removing Auto Copy firefox works as I expect it to (as any X application). I can't figure out why it was not doing it before, probably I was messing too much with about:config and not restarting frequently enough.

    Read the article

  • SQL Server 2005 Disk Configuration: Single RAID 1+0 or multiple RAID 1+0s?

    - by mfredrickson
    Assuming that the workload for the SQL Server is just a normal OLTP database, and that there are a total of 20 disks available, which configuration would make more sense?

    1. A single RAID 1+0 containing all 20 disks. This physical volume would contain both the data files and the transaction log files, but two logical drives would be created from this RAID: one for the data files and one for the log files.
    2. Two RAID 1+0s, each containing 10 disks. One physical volume would contain the data files, and the other would contain the log files.

    The reason for this question is a disagreement between me (SQL developer) and a co-worker (DBA). For every configuration that I've done, or seen others do, the data files and transaction log files were separated at the physical level and were placed on separate RAIDs. However, my co-worker's argument is that by placing all the disks into a single RAID 1+0, any I/O done by the server is potentially spread across all 20 disks, instead of just 10 disks as in my suggested configuration. Conceptually, his argument makes sense to me. Also, I've found some information from Microsoft that seems to support his position: http://technet.microsoft.com/en-us/library/cc966414.aspx In the section titled "3. RAID10 Configuration", showing a configuration in which all 20 disks are allocated to a single RAID 1+0, it states: "In this scenario, the I/O parallelism can be used to its fullest by all partitions. Therefore, distribution of I/O workload is among 20 physical spindles instead of four at the partition level." But... every other configuration I've seen suggests physically separating the data and log files onto separate RAIDs. Everything I've found here on Server Fault suggests the same. I understand that log files will be write-heavy, and that data files will be a combination of reads and writes, but does this require that the files be placed onto separate RAIDs instead of a single RAID?

    Read the article

  • Best format for hard drive for Windows and Mac?

    - by Neil
    I have a 500 GB USB external hard drive. I need four partitions on it, for the following purposes:

    - 160 GB for a bootable backup of my Mac
    - 160 GB for a bootable backup of my Windows
    - 11 GB for a bootable Snow Leopard install disk
    - the rest for file storage

    Now I need a partition table which will get recognised on both Windows and Mac, without needing extra software on Windows, which will let me keep bootable copies of both OSes, but let me access the file storage from both OSes. Currently, I have a GUID Partition Table, with Mac OS Extended (Journaled) partitions for the two backups, Mac OS Extended for the install disk, and NTFS for the file storage. While this gets recognised perfectly on my Mac, thanks to an NTFS for Mac driver from Paragon, when connected to Windows the drive is detected by the machine (listed in Safely Remove USB) but not recognised in Windows Explorer unless I install MacDrive, which is not feasible for me to install on public Windows machines I might want to access my storage area on. Can someone recommend the best combination of formats and software/drivers to get this done seamlessly?

    Read the article

  • Virtual Server HDD shrinks without apparent reason

    - by Christian
    We have a virtual hosted Linux server, and in the last few months, every now and then, the HDD shrinks from 400GB down to the exact byte count that is in use. All existing data can be downloaded and displayed without a problem, but we can't upload or edit any files because of the "full" hard drive. Here is a screenshot, where "size" should be 400GB: This has happened twice before, and again today. The previous times, when I reported the issue to the host, they said "that isn't possible, you must be doing it wrong", but soon after the call the problem vanished without us doing anything, so I suppose that they have some kind of problem they're not willing to admit. Even after the fact, they acted like nothing was wrong and wrote me a mail in which they explained that I can use "df -h" to view available disk space (well duh, how do you think I noticed this particular issue?). Questions about whether and what they had done were ignored. It has happened around the 25th to 28th of the month, so I suspect that they might have a cronjob running every 30 days or so which wreaks havoc with some VM configs. I just want to understand the problem, but the host's support hasn't been very helpful in that regard. I have tried Googling the issue, but any combination of search terms I can come up with just gives me tutorials on how to change HDD size in a virtual machine.

    a) What could be the cause of shrinking HDD size on an Ubuntu 12.04.3 LTS server? Could there be anything in our virtual machine, or is it more likely to be an issue with the VM host?
    b) Can I do anything about it without needing to contact the host's support?
    c) Is there any way I can prevent this from happening at all?

    Read the article

  • Notepad++ incorrect syntax highlighting?

    - by user360919
    So I want to build an XHTML 1.0 Strict based website. Using Notepad++ for syntax highlighting came as an idea to me. But when I tried to put in the XML declaration (as stated in the spec, proper XHTML pages should use an XML declaration and be served as application/xhtml+xml) I couldn't get the entire document highlighted properly. Here is the code I used for a basic page:

        <?xml version="1.0" encoding="UTF-8" standalone="no" ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-us" lang="en-us">
        <head>
            <meta http-equiv="Content-Type" content="application/xhtml+xml; charset=UTF-8" />
            <title>Page</title>
            <script type="application/javascript">
                alert("A perfectly valid xHTML page...");
            </script>
            <style type="text/css">
                #test { text-align: center; }
            </style>
        </head>
        <body>
            <h1 id="test">TEST</h1>
        </body>
        </html>

    Paste this into Notepad++ and you'll see that it won't highlight the code between <script type="application/javascript"> and </script> (it renders its background white) if the language is set to XML. If I set the language to HTML, then the script gets correctly highlighted but the XML declaration is not. What to do? How do I make a hybrid language - a combination of XML and HTML?

    Read the article

  • mod_rewrite and Apache questions

    - by John
    We have an interesting situation in relation to some help desk software that we are trying to set up. This is a web-based software application that allows customers and staff to log into it, access tickets, supply updates, etc. The challenge we are having deals with the two different domains that we use and the mod_rewrite rules to make it all work with our SSL certificate, which is only bound to one of the domains. I will list the use case scenarios below and the challenges that we are having.

    1. If you access http://support.domain1.com/support then it redirects fine to https://support.domain2.com/support
    2. If you access http://support.domain2.com/support then it redirects fine to https://support.domain2.com/support
    3. If you access https://support.domain1.com/support then it throws an error of "server cannot be found"
    4. If you access https://support.domain1.com/support/ after having visited https://support.domain2.com/support then you are presented with a "this connection is untrusted" error about the certificate only being valid for the domain2 domain instead of the domain1 domain name

    I have tried just about every mod_rewrite rule that I can think of to help make this work and I have not been able to locate the correct combination. I was curious if anyone had some ideas on how to make the redirects work correctly. In the end, we need all customers and staff to land at https://support.domain2.com/support regardless of the previous URL combinations they enter, like those listed above. Thanks in advance for your help with this.
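    Worth noting, as an illustrative sketch rather than the poster's actual config: cases 3 and 4 are a TLS limitation, not a rewrite problem - the certificate handshake for https://support.domain1.com happens before any RewriteRule runs, so those cases can only be fixed with a certificate that also covers domain1 (SAN or wildcard). The plain-HTTP side, however, can be collapsed with a simple VirtualHost redirect; the hostnames come from the post, everything else is an assumption:

        <VirtualHost *:80>
            ServerName support.domain1.com
            ServerAlias support.domain2.com
            # Send every plain-HTTP request to the canonical HTTPS host
            Redirect permanent / https://support.domain2.com/
        </VirtualHost>

        <VirtualHost *:443>
            ServerName support.domain2.com
            SSLEngine on
            # SSLCertificateFile / SSLCertificateKeyFile for domain2 go here
            # ... help desk application configuration ...
        </VirtualHost>

    Redirect from mod_alias preserves the path suffix, so /support lands on https://support.domain2.com/support without any mod_rewrite at all.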

    Read the article

  • Software disables itself when the PC is accessed via RDP

    - by blckgrffn
    We have a large, specialty printer that comes with vendor-specific software that enables its use beyond simply showing up as a printer in the Windows Control Panel. This software recognizes when we RDP into the machine and "disconnects" the PC from the printer within its proprietary control panel. All is well when an application like TeamViewer is used to access the machine. Ostensibly, the application is helping us be safe by "enforcing" that the machine used for the printer is a walk-up workstation, or so the support folks informed me. If TeamViewer, etc., fixes the issue, then what is the problem? We have many headless workstations in our warehouse attached to a variety of specialty machines, all used via RDP. We want/need to keep access to the machines the same, for the sanity of our production staff. The meat of the question: how, specifically, might a machine know that it is being accessed via RDP (terminal services management???), and how might this be defeated without altering an application or driver? Of note, the system being used is a Windows 7 Pro machine hooked to the printer via USB. Thanks! Nat. Edit: Is there any combination of /admin switches, etc., that might fix this? Simply using /admin did not.
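    One workaround sometimes used in this situation, offered strictly as a hedged sketch: redirect the current remote session back to the physical console with tscon, after which software that checks the session type no longer sees an RDP session. The caveat is that this ends the interactive RDP view (the session reattaches to the local monitor), so it is mainly useful immediately before kicking off the print job, or in combination with a non-RDP viewer like TeamViewer:

        @echo off
        rem Run from an elevated command prompt inside the RDP session (sketch)
        rem %SESSIONNAME% is e.g. "rdp-tcp#0"; /dest:console reattaches it to the local console
        %windir%\System32\tscon.exe %SESSIONNAME% /dest:console

    Whether the vendor software re-checks the session type after this hand-off is an open question; it would need testing on the machine in question.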

    Read the article

  • Can no longer duplicate display to external monitor on Windows 7

    - by rbeier
    We have a large TV at work - I connect my laptop to it to share my screen during meetings. Until today, my laptop display has been duplicating to the TV automatically when I connect the TV cable to the laptop. The display resolution would decrease automatically to be compatible with the TV. Today, however, it's stopped working. When I connect the cable to the TV, the display extends rather than duplicating. Using the Win+P key combination (or Fn+F7 on my Lenovo laptop), I can choose to duplicate the display - but when I do this, it ends up only displaying on the laptop. I can get it to display on the TV by hitting Win+P and choosing "projector only", but then I can't see what I'm doing on the laptop screen. I have a Lenovo W520 laptop running Windows 7, connected to the TV using a DisplayPort-to-HDMI converter cable. The TV's native resolution is 1280x720; the laptop's native resolution is 1600x900. I've tried booting with the TV cable already connected; I've tried manually lowering the display resolution on the laptop to 1280x720 before duplicating the display. Neither works. Does anyone have any other suggestions?

    Read the article

  • Open source app to manage and run commands on cloud servers? [closed]

    - by Mark Theunissen
    I'm creating a SaaS platform, and I need a component / library that can create, delete and store the connection details for cloud servers. It also needs to support executing shell commands on these servers and returning the response to the caller. I want a central database of servers and their configuration, plus the ability to reach out and manage the servers via SSH execution of bash scripts. I don't want something that needs agents on every server, like Chef. For example, this command is received by the hypothetical application:

        CREATE USER server = server12345 name = myuser

    It's translated into the following set of actions and executed by the app, which knows how to connect to server12345 and how to create a user on that server:

        $ ssh root@server12345
        $ adduser myuser

    And it returns the output from the shell:

        Added user myuser.

    I've done research on Google and can't quite find something that does this already. I've found:

    - fabric: This handles executing the shell commands very elegantly, and can take multiple server definitions, but it's supposed to be a deployment tool so it doesn't do everything required above - for example, it doesn't have a daemon mode where it listens for commands; it expects to be executed from the shell. It also can't provide the central database functionality.
    - libcloud: This library can handle the server admin (CRUD) part, but doesn't have a command interface daemon either, and doesn't let you execute commands on the servers.
    - Overmind: Overmind is a GUI and wrapper around libcloud, but doesn't support the command execution part.

    I guess I need something that is a combination of libcloud, fabric and django for an API - or something else that does the same thing, regardless of language. What am I missing here?
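    A hypothetical sketch of the glue the poster describes (not an existing product): a small service-side Python function using Fabric 1.x's run() for the SSH part, with the host assumed to come from whatever database ends up holding the connection details:

        # Sketch only: assumes Fabric 1.x (fabric.api); host lookup lives elsewhere
        from fabric.api import run, settings

        def create_user(host, username):
            """Run adduser on a remote host and return the captured command output."""
            with settings(host_string="root@" + host, warn_only=True):
                result = run("adduser --disabled-password --gecos '' %s" % username)
                # Fabric returns the captured stdout with .succeeded / .failed attributes
                return result

    A thin HTTP layer (Django, Flask, or similar) in front of functions like this would give the daemon-style command interface that fabric itself does not provide, while libcloud handles the create/delete side.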

    Read the article

  • Mysql: Working With 192 Trillion Records... (Yes, 192 Trillion)

    - by Sarah
    Here's the question... Considering 192 trillion records, what should my considerations be? My main concern is speed. Here's the table...

        CREATE TABLE `ref` (
          `id` INTEGER(13) AUTO_INCREMENT DEFAULT NOT NULL,
          `rel_id` INTEGER(13) NOT NULL,
          `p1` INTEGER(13) NOT NULL,
          `p2` INTEGER(13) DEFAULT NULL,
          `p3` INTEGER(13) DEFAULT NULL,
          `s` INTEGER(13) NOT NULL,
          `p4` INTEGER(13) DEFAULT NULL,
          `p5` INTEGER(13) DEFAULT NULL,
          `p6` INTEGER(13) DEFAULT NULL,
          PRIMARY KEY (`id`),
          KEY (`s`),
          KEY (`rel_id`),
          KEY (`p3`),
          KEY (`p4`)
        );

    Here are the queries...

        SELECT id, s FROM ref WHERE rel_id="$rel_id" AND p3="$p3" AND p4="$p4"
        SELECT rel_id, p1, p2, p3, p4, p5, p6 FROM ref WHERE id="$id"
        INSERT INTO ref (rel_id, p1, p2, p3, s, p4, p5, p6) VALUES ("$rel_id", "$p1", "$p2", "$p3", "$s", "$p4", "$p5", "$p6")

    Here are some notes...

    - The SELECTs will be done much more frequently than the INSERTs. However, occasionally I want to add a few hundred records at a time.
    - Load-wise, there will be nothing for hours, then maybe a few thousand queries all at once.
    - Don't think I can normalize any more (I need the p values in a combination).
    - The database as a whole is very relational.
    - This will be the largest table by far (the next largest is about 900k rows).

    UPDATE (08/11/2010): Interestingly, I've been given a second option... Instead of 192 trillion I could store 2.6*10^16 (15 zeros, meaning 26 quadrillion)... But in this second option I would only need to store one bigint(18) as the index in a table. That's it - just the one column. So I would just be checking for the existence of a value, occasionally adding records, never deleting them. That makes me think there must be a better solution than MySQL for simply storing numbers... Given this second option, should I take it or stick with the first? [edit] Just got news of some testing that's been done - 100 million rows with this setup returns the query in 0.0004 seconds [/edit]
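    For the second option, a minimal sketch (illustrative only - the table and example key are made up) of the single-column existence table and lookup the poster describes:

        -- Sketch of the "just check for existence" variant
        CREATE TABLE `ref_keys` (
          `k` BIGINT(18) UNSIGNED NOT NULL,
          PRIMARY KEY (`k`)
        ) ENGINE=InnoDB;

        -- Existence check resolves entirely from the primary key
        SELECT EXISTS(SELECT 1 FROM ref_keys WHERE k = 26000000000000000) AS present;

        -- Occasional inserts; IGNORE skips keys that are already there
        INSERT IGNORE INTO ref_keys (k) VALUES (26000000000000000);

    With InnoDB the table is the primary-key B-tree itself, so every lookup is a single index probe, which is consistent with the sub-millisecond test result mentioned in the edit.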

    Read the article
