Search Results

Search found 9008 results on 361 pages for 'undocumented feature'.

  • Adding port forwardings programmatically on a ControlMaster SSH session

    - by aef
    I just found out about the ControlMaster/ControlPath feature of OpenSSH, which allows you to run multiple terminals over a single SSH connection. As I often use SSH port forwarding to get encrypted and authenticated VNC sessions, I instantly noticed that you can't add port forwardings to a remote server to which you already have an established connection. This sucks.

    Some time later I found out that you can circumvent this limitation by typing ~C in a running SSH terminal session. This opens a command line which allows you to add or remove port forwardings.

    My question now is: how can I add port forwardings on an existing SSH session which is using the ControlMaster/ControlPath feature, without needing access to a terminal session inside that SSH session? I need this so that my script, which starts a secure tunneled VNC connection for me, can add and later remove its port forwardings.

    (I know I could use a terminal multiplexer such as GNU Screen or tmux; actually I'm doing this already. But I like the idea of using just one SSH session, for several reasons.)
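
    A minimal sketch of what should work with a reasonably recent OpenSSH (the -O forward and -O cancel multiplexing commands date from around versions 5.9/6.0); the port numbers and host name are placeholders, and -S can be dropped if ControlPath is already set in ~/.ssh/config:

        # Ask the running ControlMaster to add a local forwarding
        # (no terminal inside the session is required).
        ssh -O forward -L 5901:localhost:5901 -S ~/.ssh/cm_socket/me@vnchost:22 vnchost

        # Later, remove the same forwarding again.
        ssh -O cancel -L 5901:localhost:5901 -S ~/.ssh/cm_socket/me@vnchost:22 vnchost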

  • .NET 3.5 installation comes up with error 0x800F0906, then 0x800F081F, using DISM

    - by Austin Meadows
    I've recently tried installing .NET 3.5 for an application on Windows 8.1. I used the OS's popup dialog to download and install .NET 3.5 and always get error code 0x800F0906. Upon further research, I found I would have to insert my Windows 8 CD and install it with this command, where E:\ is where my CD is mounted:

        Dism /online /enable-feature /featurename:NetFx3 /All /Source:E:\sources\sxs /LimitAccess

    This and every variation of it (e.g., removing /LimitAccess) has not worked for me and has either given me the same error code (0x800F0906) or a different one, 0x800F081F. I've even copied the sxs folder to my hard drive, in case something was wrong with the CD drive, only to get the same results. In that case, I used this command line:

        Dism /online /enable-feature /featurename:NetFx3 /All /Source:C:\dotnet35 /LimitAccess

    I find this surreal because in both cases the files are indeed there, but the program thinks they're not. Here's the CBS.log file. Any ideas on how to fix this? Any help is very appreciated :)

    EDIT: I now have a proper dism.log file; I'm not sure what happened to the last one or why it did that. Here's the link to the new log file. It's interesting to note that it doesn't recognize some of the options in the command, such as "featurename" or "source".
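
    Not part of the question, but a quick diagnostic that sometimes helps here (assuming an elevated command prompt): check how DISM currently reports the feature before retrying the enable command.

        rem Show the current state of the NetFx3 feature; "Disabled with Payload Removed"
        rem is the state that forces DISM to go looking for /Source files.
        Dism /Online /Get-Features /Format:Table | findstr /i NetFx3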

  • How to backup iTunes on Windows to folder/share, i.e. without "Back Up to Disc"? No DVD writer available

    - by Chris W. Rea
    (Surprised I didn't already find an answer here to this!) I've got a computer on which I'd like to back up the iTunes library – music, movies, apps, everything. We're talking multiple gigabytes.

    Unfortunately, it seems that iTunes' own built-in "Back Up to Disc" feature (the only backup feature I can find in iTunes) only works with a CD or DVD writer/burner. The computer in question does not have a DVD burner. While it has a CD burner, backing up to CDs would require dozens of discs, plus more time than I'm willing to spend swapping them.

    So: what is the recommended way to back up an entire iTunes library on a Windows computer to a non-CD/DVD location, such as an external hard drive or a network shared folder?

    Then, once such a backup has been performed, what is the process for restoring the library – e.g. after the computer has been repaved with a new version of Windows – so that iTunes is resurrected whole and recognizes the devices it syncs with? Thank you!
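
    Not from the question, but a sketch of one common approach: on Windows the whole library (database plus media) normally lives in a single folder, so mirroring that folder covers both backup and restore. The paths assume a default library location and a hypothetical E: external drive.

        rem Back up: mirror the entire iTunes folder to the external drive.
        robocopy "%USERPROFILE%\Music\iTunes" "E:\Backup\iTunes" /MIR /R:1 /W:1

        rem Restore: copy it back, then hold Shift while starting iTunes to pick the
        rem restored "iTunes Library.itl" if it isn't found automatically.
        robocopy "E:\Backup\iTunes" "%USERPROFILE%\Music\iTunes" /MIR /R:1 /W:1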

  • Is there a quick way of undoing a folder change in Far Manager?

    - by Johannes Rössel
    I love Far Manager. However, it has a feature to quickly go to the root directory of a drive with Ctrl+\. I do sometimes need and use this feature, but more frequently I use Ctrl+? to quickly insert the file name under the cursor into the command line. As it happens, the ? key is located dangerously close to \, which is why I sometimes erroneously go to the root directory (which is then doubly unfortunate, since I originally wanted to work with a file in the directory I was in).

    Now I could probably just redefine Ctrl+\ to do nothing, although I still sometimes need it (it can be replicated with a quick cd\, though). But Windows Explorer, in the wake of the WWW, provided us with a handy directory history and two separate ways of navigating backwards: backwards through the history and backwards through the hierarchy.

    Is there something quick and easy to get back to the folder I was in? This is less of an issue in C:\Users\Me (still annoying) but more so in deeper hierarchies.

  • Mac OS X Lion (10.7) Drive Encryption

    - by Skoota
    My iMac has two drives (a 256 GB solid-state drive and a regular 2 TB hard drive). The Mac OS X Lion system is installed on the solid-state drive and, like many other users, I have moved my user profile folder onto the secondary 2 TB drive. However, as you may be aware, FileVault 2 on Mac OS X Lion (10.7) only encrypts the system drive. This leaves my data drive (containing my user profile folder, with all of my data) unencrypted.

    I am aware that workarounds for this issue exist (such as https://github.com/jridgewell/Unlock), but I am not happy with the results, since they involve decrypting the data drive on startup using a LaunchDaemon (before any users have logged into the computer), essentially meaning that any user who logs onto the computer will see the unencrypted drive. I would like a method which will only decrypt the data when an authorised user logs into the computer. As such, is there a way to do one of the following?

    1. Encrypt the entire data drive and only decrypt it when an authorised user logs into the computer. This would be equivalent to the behaviour of the Lion FileVault 2 feature, but on a secondary drive rather than the system drive.
    2. Encrypt only the user profile folder on the data drive, and only decrypt the folder when the user logs into the computer. This would be equivalent to the behaviour of FileVault 1 on previous versions of Mac OS X.

    I am happy to pay for a commercial third-party product that provides the required feature(s), but I have not yet been able to find one. Thanks in advance for any assistance.
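
    A sketch of one avenue (an assumption, not something the question has tried): Lion's diskutil can convert a non-boot volume to an encrypted Core Storage volume in place, and if an authorised user then stores the passphrase in their login keychain, the volume is only unlocked once that user logs in. Back up first; the disk1s2 identifier below is hypothetical.

        # Find the data volume's identifier, then convert it in place.
        diskutil corestorage list
        diskutil corestorage convert disk1s2 -passphrase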

  • Offline cache copies in Windows file sharing

    - by netvope
    I frequently access media files (music or video) on a remote Windows file share. My Internet connection is not very fast, and I find it a waste of bandwidth when I repeatedly access the same files. For example, I may listen to the same song 30 times in a month. So, I would like to cache files I've used.

    I know Windows has an "Always available offline" feature, but I don't think it suits my needs. I don't want to make the whole share "available offline", as the remote Windows file share is huge (in the terabytes). Making individual files "available offline" is tedious, as the files are scattered in many different directories. It would be much more convenient if the system could simply cache those I've used. I could also manually make a local copy each time I use a file... but this is even more troublesome than making each file "available offline".

    Also:

    - The files on the share seldom change.
    - Many of the files are rarely used.
    - Some of the files are frequently used.
    - I don't have a list of the most frequently used files.

    It would be best if I could tell Windows to cache the last accessed 10 GB, but apparently it doesn't have this feature. So I think the best way is to have an SMB/CIFS caching proxy. What do you think? I have a Linux box sitting around. Perhaps I should try to set up Samba 4?

  • What's next for all of these Microsoft "overlapping" and "enhanced" products?

    - by indyvoyage
    Recently I attended a road show organised by an MS Gold Partner company in the UK. The products discussed were: SharePoint Server (2010 and 2007), Exchange Server, Office Communication Server 2007, Exchange Hosted Services, Office Live Meeting, Office Communicator, System Center Configuration Manager and Operations Manager, VMware, Windows 7, etc.

    While Microsoft touts the enhancements in each product over the previous version, I felt that clients are not much interested in all these details. Take Office Communicator: they have certainly improved the product a lot, and at first sight everyone said "wow, great product", but nobody wishes to pay money for all these extra features. Some argued that they are bogged down by the increased number of menus. They don't need a softphone feature bundled with mobile calling. The same applies to all the other products, such as MS Office (what next, two ribbons?), the Windows OS, and many more. There must indeed be good features in all these products, but is it worth spending money and time updating the older systems? Sometimes these features will decrease productivity rather than increase it.

    So do you think that whatever enhancements MS makes to its products are mainly for selling purposes rather than real use (and, I suspect, also to keep developers busy learning the new tools and features)?

    I am sure some people here will argue that somebody needs this sort of feature. But I am not talking about NASA or MI5; I am talking about usual businesses and Joe Public. Any ideas welcome.

  • No power save on external hard drive - How to implement?

    - by blastawaythewall
    I recently bought a new 3.5" USB external hard drive which I thought had a power-save feature, but it turns out that it doesn't. So whether it's being used or not, it spins, which wouldn't be a problem if it weren't so loud and didn't get really, really hot. After doing some research, it seems that what controls this is the hard drive's enclosure, not the drive itself or the OS (although I suspect that's not entirely true). I attempted to use the hdparm utility, but it couldn't identify my externals.

    Just to be clear, by power save I mean a hard drive spinning down after a certain period of not being used (read from/written to). Also, I have other externals that do this, so it's not a problem with my computer.

    Here's my question: is there a way to implement a power-save-like feature on my hard drive through software, OS settings, or anything else?

    Some details:

    - Running: Windows XP Home SP2
    - HD model: Cavalry CAUM-B-OTB 2TB (although the website only lists 1TB max)
    - Inside: Hitachi Deskstar 7K2000 (HDS722020ALA330) (I would link, but new users can only post 1)

    Thanks in advance.
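
    For what it's worth, a sketch of the hdparm route in case a different enclosure (or a direct SATA connection) lets ATA commands through to the drive; the /dev/sdb device name is hypothetical, and -S values encode multiples of 5 seconds:

        # Set the drive's own standby (spin-down) timeout to 10 minutes (120 * 5 s).
        hdparm -S 120 /dev/sdb

        # Or spin the drive down immediately to test whether it honours standby at all.
        hdparm -y /dev/sdb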

  • Shrinking a large transaction log on a full drive

    - by Sam
    Someone fired off an UPDATE statement as part of some maintenance which did a cross-join update on two tables with 200,000 records each. That's 40 billion row combinations, which would explain part of how the log grew to 200 GB. I also did not have the log file capped, which is another problem I will be taking care of server-wide – we have almost 200 databases residing here.

    The 'solution' I used was to back up the database, back up the log with TRUNCATE_ONLY, and then back up the database again. I then shrank the log file and set a cap on the log. Seeing as there were other databases using the log drive, I was in a bit of a rush to clean it out. I might have been able to back the log file up to our backup drive, hoping that no other databases needed to grow their log files.

    Paul Randal, in http://technet.microsoft.com/en-us/magazine/2009.02.logging.aspx, says:

        Under no circumstances should you delete the transaction log, try to rebuild it using undocumented commands, or simply truncate it using the NO_LOG or TRUNCATE_ONLY options of BACKUP LOG (which have been removed in SQL Server 2008). These options will either cause transactional inconsistency (and more than likely corruption) or remove the possibility of being able to properly recover the database.

    Were there any other options I'm not aware of?
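
    Since TRUNCATE_ONLY is gone from SQL Server 2008 onwards, a hedged sketch of the usual emergency alternative: flip to the SIMPLE recovery model (which lets the log truncate on checkpoint), shrink, flip back, and immediately take a full backup to restart the log chain. The database and logical log file names below are hypothetical.

        rem Break glass only when the log drive is genuinely about to fill.
        sqlcmd -S myserver -Q "ALTER DATABASE BigDb SET RECOVERY SIMPLE"
        sqlcmd -S myserver -d BigDb -Q "DBCC SHRINKFILE (BigDb_log, 1024)"
        sqlcmd -S myserver -Q "ALTER DATABASE BigDb SET RECOVERY FULL"
        rem The full backup re-establishes a valid base for subsequent log backups.
        sqlcmd -S myserver -Q "BACKUP DATABASE BigDb TO DISK = 'E:\Backup\BigDb.bak'"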

  • SSH multi-hop connections with netcat mode proxy

    - by aef
    Since OpenSSH 5.4 there is a new feature called netcat mode, which binds STDIN and STDOUT of the local SSH client to a TCP port accessible through the remote SSH server. This mode is enabled by simply calling

        ssh -W [HOST]:[PORT]

    Theoretically this should be ideal for use in the ProxyCommand setting in per-host SSH configurations, which was previously often used with the nc (netcat) command. ProxyCommand allows you to configure a machine as a proxy between your local machine and the target SSH server, for example if the target SSH server is hidden behind a firewall.

    The problem now is that instead of working, it throws a cryptic error message in my face:

        Bad packet length 1397966893.
        Disconnecting: Packet corrupt

    Here is an excerpt from my ~/.ssh/config:

        Host *
          Protocol 2
          ControlMaster auto
          ControlPath ~/.ssh/cm_socket/%r@%h:%p
          ControlPersist 4h

        Host proxy-host proxy-host.my-domain.tld
          HostName proxy-host.my-domain.tld
          ForwardAgent yes

        Host target-server target-server.my-domain.tld
          HostName target-server.my-domain.tld
          ProxyCommand ssh -W %h:%p proxy-host
          ForwardAgent yes

    As you can see, I'm using the ControlMaster feature so I don't have to open more than one SSH connection per host. The client machine I tested this with is an Ubuntu 11.10 (x86_64) machine, and both proxy-host and target-server are Debian Wheezy Beta 3 (x86_64) machines. The error happens when I call ssh target-server. When I call it with the -v flag, here is what I get additionally:

        OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
        debug1: Reading configuration data /home/aef/.ssh/config
        debug1: Applying options for *
        debug1: Applying options for target-server.my-domain.tld
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: auto-mux: Trying existing master
        debug1: Control socket "/home/aef/.ssh/cm_socket/aef@target-server.my-domain.tld:22" does not exist
        debug1: Executing proxy command: exec ssh -W target-server.my-domain.tld:22 proxy-host.my-domain.tld
        debug1: identity file /home/aef/.ssh/id_rsa type -1
        debug1: identity file /home/aef/.ssh/id_rsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_dsa type -1
        debug1: identity file /home/aef/.ssh/id_dsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa-cert type -1
        debug1: permanently_drop_suid: 1000
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.0p1 Debian-3
        debug1: match: OpenSSH_6.0p1 Debian-3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        Bad packet length 1397966893.
        Disconnecting: Packet corrupt
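
    One assumption worth testing (not from the question): because Host * sets ControlMaster for every connection, the inner ssh spawned by ProxyCommand may also try to multiplex, and mixing the -W data stream with a control master could plausibly corrupt packets. Disabling connection sharing for the proxy hop only would isolate that:

        # Hypothetical variant of the ProxyCommand line; -S none turns off
        # connection sharing just for this inner ssh invocation.
        ProxyCommand ssh -S none -W %h:%p proxy-host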

  • Is the APC BR700G UPS (or similar) compatible with Active PFC power supplies?

    - by David Zaslavsky
    I'm looking at getting a UPS for my home computer. So far the APC BR700G looks very promising, except for one thing: one of the reviews on Newegg says that this UPS does not work with a power supply that has Active PFC.

        Pros: Unit looks great, built well, very heavy, was excited to use it.

        Cons: Didn't research enough - many newer power supplies like my corsair 750w (and yes dells and other mainstreamers sell them too) that I bought last year have a feature called active pfc (power factor corrected). The signal for this backup battery doesn't fully support that feature and can cause issues. You can find an article on APCs site if you search their user forums for PFC.

    And the power supply in my computer is, in fact, an Active PFC PSU. I've already found one answer on this site claiming that it's not an issue, that "most quality supplies these days have PFC and work just fine with a UPS." That disagrees with the review on Newegg. Can someone explain this discrepancy?

    Also, what is it exactly about a UPS that makes it incompatible with an Active PFC PSU (if anything)? Is there some way to tell based on the technical specifications, or do I just have to hunt for reviews online to avoid wasting my money?

    While any input would be appreciated, I would prefer to get an answer from someone with actual experience with similar UPSes and Active PFC power supplies, who can tell me whether it works or not.

  • Which version control should I use for my configuration files?

    - by rakete
    I want to store some of my configuration files (~/.emacs.d/, .Xdefaults, etc. – Linux $HOME stuff) in version control so I can easily sync them between my notebook and my workplace, see my past changes, and revert to them should the need arise.

    So far it seems that quite a few people use git for this, and I think I too want to use a distributed VCS (if only to get more used to them), but I can't say I am very experienced with all things DVCS. I did use darcs and git briefly, and so far I can say that I really like the way git handles branches; I think the possibility of having different branches within the same directory is especially useful for my use case. Darcs, on the other hand, has cherry-picking of patches, which is also quite a convenient feature when managing configuration files (at least I assume it is).

    So, what would you recommend to use? And what is the reasoning behind your recommendation? What other VCSs with nice features that I haven't mentioned would make a good VCS for storing configuration files, and why?
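
    Not from the question, but a sketch of one popular git-based arrangement for exactly this use case: a bare repository whose work tree is $HOME, so no symlinks or copying are needed (the repository path and the alias name are arbitrary choices):

        # One-time setup: a bare repo that uses $HOME as its work tree.
        git init --bare "$HOME/.dotfiles"
        alias config='git --git-dir=$HOME/.dotfiles --work-tree=$HOME'
        config config --local status.showUntrackedFiles no

        # Day-to-day: track configuration files explicitly.
        config add ~/.Xdefaults ~/.emacs.d/init.el
        config commit -m "Track X defaults and emacs config"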

  • zsh auto-complete event designator

    - by simont
    (See my previous question for additional context.) I'm migrating to zsh from bash, and using oh-my-zsh. When my zsh history looks something like the following:

        git status
        git add -A
        git commit

    I want to be able to re-run git add -A. To do that, I could use !?git add, which should:

        !?str[?]
        Refer to the most recent command containing str. The trailing '?' is necessary if this reference is to be followed by a modifier or followed by any text that is not to be considered part of str.

    The link for zsh event designators is here. Unfortunately, I can't do this: as I'm typing !?git add, when I hit the space, it auto-completes the command to the most recent command matching git (i.e., it auto-completes to git commit). I can't use the event designator properly because of this auto-completion on the space.

    I assume this is an oh-my-zsh feature, but I have no idea where to look – grepping for 'complet' in the oh-my-zsh source doesn't get me anywhere.

    My question: how do I turn off this feature? Or, if that's not known, where should I be looking – if I were going to implement this auto-complete-on-whitespace behaviour, where would be a logical place to do so in the oh-my-zsh framework?
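
    A guess worth checking (an assumption, not from the question): this sounds like zsh's magic-space widget, which performs history expansion when space is pressed and which oh-my-zsh's default key bindings have included. Two quick commands to confirm and undo it:

        # Show what the space key is bound to; "magic-space" would confirm the theory.
        bindkey ' '

        # Rebind space to plain insertion (put this in ~/.zshrc after oh-my-zsh loads).
        bindkey ' ' self-insert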

  • How to configure a Cisco 2960 switch for port-based address allocation on a single port only?

    - by Jack
    The Cisco 2960 allows you to configure so-called port-based address allocation. It makes the switch associate the IP address it gives out via DHCP with a port identifier, which is a random, switch-created identifier. In practice it means that any machine connected to such a configured port will always get the same IP address, regardless of what that machine's MAC address is.

    I want to have that feature configured on --some ports-- only. But no matter what commands I try, it seems that this can only be done for all ports or for none, even though the Cisco manual seems to indicate there is both a global and a per-port command to enable it. Here are the relevant commands from the Cisco manual:

        configure terminal
        ip dhcp use subscriber-id client-id
        ! Configures the DHCP server to globally use the subscriber ID
        ! as the client ID on all incoming DHCP messages.

        interface FastEthernet0/1
        ip dhcp server use subscriber-id client-id
        ! Optional: configures the DHCP server to use the subscriber ID as the
        ! client ID on all incoming DHCP messages on this interface.

    But it appears that if I configure only the interface, there is no effect at all, and if I configure both globally and per interface, the switch behaves as if all ports were configured to use the feature. Any ideas?

  • Multiheaded X.org with a single workspace-pool

    - by blauwblaatje
    I've got an idea for X.org plus $randomwindowmanager in combination with a multiheaded setup, but I haven't figured out how it should work. Also, I don't really know where to place the feature request. Now for the idea.

    I've been working with screen (wikipedia: GNU_Screen) for some years now. One thing I like about it is the multi-display mode (screen -x), whereby multiple terminals are all connected to the same session. The fun thing is that you can have two terminals showing the same content, and I can switch my on-screen layout without moving the terminals. I admit that in screen this is not extremely useful, but I think for a window manager it could be.

    Imagine this: you've got two monitors and four workdesks. On one workdesk I've got my IDE with code, on the second the output, on the third the documentation, and on the fourth my e-mail and IM clients. At one moment I want my IDE and output on my monitors, at another moment the code and documentation, and at yet another moment the IM to consult a colleague, plus the documentation or code. Finally, my colleague comes to help me at my desk. I'd like it if we could both watch the same workdesk without him sitting on my lap, so I turn one monitor so he can see it better. It would be great if we could both see the same thing that's on my monitor (excluding the mouse pointer).

    The thing with most WMs is that your workspaces on the two monitors are either separated or glued together. If they're separated, you can change workspaces on each monitor autonomously, but you can't exchange applications between monitors because they're different X clients (iirc). If they're glued together (Xinerama), you can exchange the applications, but when changing your workspace, the other monitors change too.

    So, what I'd like to know is this: is this already possible, or should I submit a feature request somewhere (and if so, where)?

  • How to slow down audio files?

    - by verve
    I need a program (with an easy learning curve) that lets me slow down mp3 (at the very least this format) music and audiobook files. The software needs to be able to slow down the audio at the chosen speed without altering the pitch and the accuracy of the words being pronounced – perhaps like the "SlowSound" feature of the language software Byki Deluxe.

    I'm learning a foreign language (German) and I find the speed at which the books are being read too fast. I need to hear the pronunciation of each word much more clearly to learn how to pronounce the words myself. Is there such a product out there?

    Now, I know you can slow down stuff in VLC, but it sounds really artificial. I need something that slows down audio files without altering the accuracy of the words being pronounced. It doesn't have to be freeware; ease of use and quality are more important to me.

    Win 7 64-bit. IE 8.

    Edit: Is there any paid software like Audacity? Only the beta works on Win 7. Also, I'd prefer to be able to slow down a file live, rather than having to create a new file to use the feature.

  • Encrypted folders and files stay encrypted when copied

    - by user66126
    Hi all, I just tried out the Windows 7 feature to encrypt a folder. I found that when I access the encrypted folder from another computer (the parent folder of the encrypted folder is shared), I can see the files there but I cannot open them (which is good). But when I copy a file to another folder outside the encrypted folder (regardless of whether it is on the same remote computer or on the computer from which I am accessing the files), I can then open the file without any problem. This might be how it works... but that's not what I need.

    My question is: can I encrypt a folder (and all files inside), access those files (create, edit) seamlessly while I am logged in normally to the computer, but have the files stay encrypted when they are copied to another directory outside the encrypted folder – regardless of whether they are copied to the same computer, another computer, or uploaded to a remote server? If this is not a feature that Windows natively supports, is there third-party software that does this? Thank you.
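
    A side note rather than an answer, sketching how the feature in question (EFS) looks from the command line; the path is hypothetical. EFS deliberately decrypts transparently for an authorised user, which is why copies made by that user to a non-EFS destination come out readable – the behaviour observed above.

        rem Encrypt a folder and everything below it with EFS.
        cipher /e /s:C:\Users\Me\Private

        rem List the encryption state of files in the current directory.
        cipher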

  • How to remove Media Center from Windows 8

    - by Arabella
    I bought two Windows 8 upgrades, one for my PC and one for my notebook. I added Windows Media Center to my notebook using the free offer in November (side note: the key was emailed to me within 5 minutes; I see many people have been complaining that it takes a few days). Today I decided to add WMC to my PC as well, so I went to the Microsoft website, same as last time, and I received the email within a few minutes. Once I added WMC, entered the key and the computer rebooted, my activation is now broken:

        This product key is already being used on another PC. Try a different key or buy a new one.

    After rereading the product key email, I realised that the WMC key was exactly the same as the one I had received in November for my notebook (I used the same email, i.e. my Microsoft account Outlook address, for both). I didn't think this would be a problem, as Microsoft's feature pack page states:

        ...is limited to five licenses per customer per promotion.

    So then I decided I'd just remove WMC from my PC and go back to Windows 8 Pro. I turned off the WMC feature and the PC restarted, but activation is still broken because my key has been replaced. I then tried to activate with my original Pro key. The error it gave was that this key cannot be used with this version of Windows, as it is now Windows 8 Pro with Media Center and not Windows 8 Pro any more.

    I've searched a bit and it seems the only way to remove it is to do a clean install. I tried the Windows 8 Downgrade Helper, which told me I was already running Win 8 Pro when I tried to downgrade, and that I was running Win 8 Pro with Media Center when I tried the other option.

    To sum up: how do I remove Windows Media Center from Windows 8 Pro without having to do a clean install?

  • What is the oldest hardware still in production use? How is it kept running?

    - by sleske
    In the spirit of the question "What is your oldest hardware that still works?", I'd like to ask: what is the oldest hardware you know of that is still in production use? And what challenges did you (or someone else) face in keeping it running (scarce documentation, no support, no spare parts available...)?

    Most organizations will retire or upgrade software and hardware after 5-10 years, but sometimes old software is kept running on old boxes because it "just works". I once worked at a client site that was running a critical piece of (in-house developed) business software on a single server running HP-UX. The server was old (ca. 12-13 years), but fortunately still running without problems; however, getting spares would have been very difficult, and since the software installation was undocumented, any significant system change or even new hardware might have caused significant downtime and data loss. We eventually managed to replace it, but this is not always possible.

    I have also read that many organizations still run decade-old mainframe hardware, particularly for highly customized systems controlling industrial machines or power plants.

    Which old hardware have you encountered? How did you manage these challenges?

    Related question: http://serverfault.com/questions/82467/should-old-servers-be-retired

  • Log and debug/decrypt a Windows application's HTTPS traffic

    - by cweiske
    I've got a proprietary Windows-only application that uses HTTPS to speak with an (also proprietary, undocumented) web service. To ultimately be able to use the web service's functionality on my Linux machines, I want to reverse-engineer the web service API by analyzing the requests sent by the application.

    Now the question: how can I decrypt and log the HTTPS traffic? I know of several solutions which don't apply in my case:

    - Fiddler is a man-in-the-middle HTTPS proxy, which I cannot use since the application doesn't support proxies. Also, I do not (yet) know if it works with self-signed server certificates, which I doubt.
    - Wireshark is able to decrypt SSL streams if you have the server's private certificate, which I don't have.
    - Any browser extension, since the application is not a browser.

    If I remember correctly, there have been some trojans that captured online banking information by hooking into or replacing the Windows crypto API. Since the machine is mine, low-level changes are possible. Maybe there is a non-trojan (white-hat) network logging application out there which does the same?

    There is a blackhat presentation with some details available to read. They refer to Microsoft Research's Detours for easy API hooking.
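
    Not mentioned in the question, but one more angle to sketch, assuming the application doesn't strictly validate certificates (its tolerance of self-signed server certificates hints that way): route the Windows machine through a Linux gateway running mitmproxy in transparent mode, so no proxy support is needed on the application's side. The flags are from recent mitmproxy releases.

        # On the Linux gateway: forward packets and redirect HTTPS into mitmproxy.
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8080

        # Terminate TLS transparently and log the decrypted flows.
        mitmproxy --mode transparent --showhost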

  • Google Chrome no longer treats "Web Apps" specially

    - by Adrian Petrescu
    I'm running Google Chrome (Dev channel), with the --enable-apps flag, on both OS X and Ubuntu. I have four or five web apps installed, and they appear in the "New Tab" page just fine.

    The problem is that before, when the feature first became available in the Dev channel, the actual tabs hosting the web apps received special treatment: they had a 3D Dock-like look, and (more importantly) the tab bar was hidden while using such a tab. Sometime in the last few weeks, however, that special treatment just disappeared with one of the daily updates. The web apps still show up in the New Tab page, they still work in the sense that they capture all URLs going to that web app, and they use the right icons; but they've basically become indistinguishable from regularly pinned tabs. The two special features mentioned above have disappeared, on both Ubuntu and OS X.

    My questions are simply:

    a) Does this happen to anyone else? When exactly did it begin?
    b) Why did Google regress the feature?
    c) Is there any flag I can enable to get it back?

  • Is it possible to use Google Docs Viewer to view files already in Google Docs?

    - by john2x
    The title is a little confusing, so I'll elaborate. As far as I can tell, the Google Docs Viewer tool accepts a link to a raw document file (e.g. .doc, .pdf, etc.) and renders its contents in the browser. For example, this URL to a PDF:

        http://research.google.com/archive/bigtable-osdi06.pdf

    when passed to the Viewer, returns this link:

        http://docs.google.com/viewer?url=http%3A%2F%2Fresearch.google.com%2Farchive%2Fbigtable-osdi06.pdf

    What I'm trying to achieve is to use the Viewer to view a document already hosted in Google Docs (i.e. no longer a raw document file). When passing a link to a Google Docs document to the Viewer, the result is not as expected: it renders the link's HTML source instead of the document's contents.

    The reason I want to do this is that I want to be able to use the "embed" feature of the Viewer to view Google Docs documents. Does Google Docs have a "link to embeddable view" feature?

    P.S. Here is a sample snippet with an embedded document. This is what I want, but pointing to an existing Google Docs document.

        <iframe src="http://docs.google.com/viewer?url=http%3A%2F%2Fresearch.google.com%2Farchive%2Fbigtable-osdi06.pdf&embedded=true"
                width="600" height="780" style="border: none;"></iframe>

  • How does Google's geolocation service work?

    - by heaosax
    I don't use Google Maps much, but I was using it today and clicked the "Show my location" button for the first time. Firefox asked for permission, I clicked "Share my location", and Google Maps showed my location pretty accurately.

    But how does this system really work? I mean, how can Google know where I live? I connect to the Internet through a VPN, so my public IP is not from my country but from Sweden; I also use Linux and I change the MAC address of my wireless device, yet Google still shows my location.

    I know I can disable this feature by setting geo.enabled to false in Firefox's about:config, but I am curious about how Google can know where I live even when I don't have my real MAC address and my IP is not from my real country. Basically, I'd like to know whether this feature works only because of code that exists in Chrome and Firefox (which spies on my system).

    I am worried about anyone knowing where I live – I mean, where is my privacy? Part of the fun of the Internet is remaining anonymous.

  • MS Word TOC that references # pages rather than page number

    - by buttonsrtoys
    We frequently need to write specifications in Word which require a TOC that refers to the total number of pages in each section, rather than the page number. E.g.:

        Section   No. Pages
        01010 Summary of Work..............5
        01025 Prices.......................2
        01400 Quality Control..............1
        01700 Contract Close Out...........2

    A wrinkle is that each section is a separate file. To date, we've been writing our TOC by hand, which has introduced every error imaginable. Is there an MS feature that populates a TOC with page totals? If not, I've done a little VB in Office, so I wouldn't be opposed to that route as need be, as long as the result is usable by our low-tech users.

    Related question: all the section files are in the same folder. It would be nice if the TOC loaded every file in the folder, rather than having to specify each one. Is this a feature of Word, or would this require VB? We tried a master document with links to subdocuments, but since the number of section files ebbs and flows with each project, the approach required too much maintenance for our Wordophobes.

  • How to export and import an user profile from one Quassel core to another?

    - by Zertrin
    I have been using Quassel as my IRC bouncer for quite a long time now. We (a group of administrators of a small network) have set up a shared Quassel core with many users on the same core. But now I would like to export everything related to my user account from the Quassel database on this core, in order to re-import it later into another Quassel core on my own server.

    Unfortunately, while a feature for adding users has been implemented in Quassel, nothing is so far provided for either exporting or deleting a user. (If a delete-user feature were available, I could have made a copy of the current database, deleted all the other users leaving only mine, and used the resulting database on my own server, while leaving the first one untouched on the shared server.) Despite extensive research on the Internet, I've found no working solution so far.

    I should mention that the core's backend database has been migrated from the default SQLite backend to a PostgreSQL backend as the database grew considerably (over 1.5 GB by now). However, I'd be glad to hear of any working solution (SQLite or PostgreSQL backend) describing a way to export the data related to a specific user profile and then re-import it into a new Quassel core database.
