Search Results

Search found 21454 results on 859 pages for 'via'.


  • How can I make a copy of a printer in Win7?

    - by hawbsl
    Has anyone been able to copy an existing printer in Win7? I know we could do this in XP (not as a direct copy/paste, but by installing the printer twice), but in Win7 it doesn't seem to be possible. Googling the answer is hopeless, because searching for "copy printer" or "duplicate printer" turns up a bunch of posts about printer-copiers, or about people complaining of duplicate printers getting created in the background (precisely what I'd like to be able to do).

    It'd be good to know how to do this in general, but if it depends on the printer type, in our case we are trying to make a copy of an HP LaserJet. So far:

    - Tried installing from the CD - but the CD is too old for Win7.
    - Tried installing via Add Printer; that seems to install the printer, but it's marked with an error.
    - Tried installing via the .exe installer from the HP site; that does install a working printer, but it won't let you install the same printer twice (it stalls on the "insert USB cable now" step and simply won't enable the greyed-out "Next" button).

    The reason this is required is so that we can print to the feeder and to the tray separately (one printer instance per paper source).

  • DVI output only working on Windows, not during booting or on Linux

    - by Mononofu
    So yesterday I booted my laptop and the external monitor it's connected to just stayed black. At first I thought the problem would go away once Ubuntu was loaded, but it didn't. I tried to reboot a few times, to no avail. Then I decided to give Windows 7 a try, and suddenly (at the login screen) my external monitor turned on and worked like normal.

    The monitor is connected via DVI, and this now only seems to work with Windows. I don't even get a signal in my BIOS! Mind you, everything was working fine before that, and I didn't change a single thing. I then tried to connect the monitor via VGA (from my DVI jack, which can output VGA using an adaptor), and it worked again. However, 1920x1200 over VGA looks like crap - black print on a white background is basically illegible.

    Do you have any ideas how to fix this peculiar problem? I only use Windows for gaming, so it's no real help that it still works normally. Please also excuse any spelling mistakes; I am practically typing this blindly.

    Edit: I only have one graphics card in my laptop, and I can't select anything related to it in my BIOS. In fact, I can do almost nothing there. My laptop is a Nexoc Osiris E703; the graphics card is a GeForce Go 7900 GTX. As I mentioned before, DVI output during booting and on Ubuntu worked fine for years before yesterday!

  • Unicorn and nginx: "something went wrong"

    - by achempion
    I deployed my app via Capistrano. The deploy finished, but when I start nginx and open my site in the browser I see 'We're sorry, but something went wrong.' That is bad. I use Unicorn; see my configs at https://gist.github.com/3904032. If I start the server with rails s -e production, it works! I think the error may be because I can't restart the server:

      root@li272-194:~# /etc/init.d/nginx restart
      Restarting nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
      configuration file /etc/nginx/nginx.conf test is successful
      [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
      [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
      [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
      [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
      [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
      [emerg]: still could not bind()
      nginx.

    Any ideas? The nginx log:

      2012/10/17 02:57:41 [error] 3271#0: *1 could not find named location "@myapp", client: 91.192.62.77, server: 178.79.153.194, request: "GET / HTTP/1.1", host: "178.79.153.194"
      2012/10/17 02:19:08 [crit] 2448#0: *8 connect() to unix:/srv/zarcon/shared/unicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 91.192.62.77, server: zarkon, request: "GET / HTTP/1.1", upstream: "http://unix:/srv/zarcon/shared/unicorn.sock:/", host: "178.79.153.194"
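
    A hedged first step (my suggestion, not from the thread): the repeated bind() failures mean something is still listening on port 80, so the restart never actually starts a new nginx. Something along these lines would identify and clear the stale listener; the PIDs shown will vary:

      # see what is already holding port 80 (output format varies by distro)
      netstat -tlnp | grep ':80 '

      # if it is an orphaned nginx master, ask it to quit, then start fresh
      nginx -s stop    # or: kill <PID from the listing above>
      /etc/init.d/nginx start

    Separately, the could not find named location "@myapp" error suggests the active server block lacks the location @myapp { ... } that a try_files or error_page directive points at, so the config nginx is actually loading may not be the one in the gist.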

  • Windows Server 2012 Can't Print

    - by Chris
    I know this may sound incredibly stupid, and there is probably an easy solution, but I can't seem to find it. Friends of mine recently upgraded the server for their small business from their old POS one: new hardware, and a change from Windows Server 2003 to Windows Server 2012. I've got everything they need transferred over and running except for printing.

    They need to be able to print from the server to printers in the vans their technicians use. In other words, they use a laptop to remote-desktop into the server and need to print invoices from the remote session to printers attached locally via USB. On the old server they just installed the identical driver, and that was it; they could print as needed. On this server, no matter what we do, we can't get it to print remotely, and in the process we also discovered that the server can't even print to the network printer. It sees the printer on its network, and it sees (through redirection) the printers in the vans, but when you hit print it claims it did and nothing happens. There isn't an issue with the printers themselves, as every other device we have can print to them without issues.

    Is there some setting that is inhibiting the server from printing? Is there something I need to install (a print server role?) to add the functionality? Thanks in advance for helping me out here.

  • Why is there no /usr/bin/ in Windows? Would it be dangerous to add the entire Program Files to the path?

    - by dotancohen
    I am a Linux user spending some time in Windows, and I'm trying to understand some of the Windows paradigms instead of fighting them. I notice that each program installed in the traditional manner (i.e. via orgasmic installers: Yes, Yes, Yes, Finish) adds its executables to C:/Program Files/foo/bar.exe and then adds a shortcut to the Desktop / Start Menu containing the entire path.

    However, there is no common directory with links to the software, i.e. no C:/bin/bar.exe linking to C:/Program Files/foo/bar.exe. Therefore, after installing an application, the only way to use it is via the clicky-clicky menus or by navigating to the executable in the filesystem. One cannot simply Win-R to open the run dialogue and then type bar or bar.exe, as is possible with notepad or mspaint. I realize that Windows 8 improves on this with the otherwise horrendous Start Screen, which does support typing the name of the app, but again this depends on the app having registered itself for such.

    Would I be doing any harm by adding C:/Program Files recursively to the Windows path? I do realize that there will be name collisions (i.e. uninstall.exe), but could there be other issues?

  • Active Directory: Determining DN or OU from log in credentials [closed]

    - by Christopher Broome
    I'm updating a PHP login process to leverage Active Directory on a Windows server. The login itself seems pretty straightforward via ldap_bind, but I also want to pull some profile information from the AD server (first name, last name, etc.), which seems to require a full distinguished name (DN).

    When on the Windows server I can grab this via 'dsquery user' at the command prompt, but is there a way to get the same value from just the user's login credentials in PHP? I want to avoid getting a list of hundreds of DNs when on-boarding clients and associating each with one of our users, so any means of determining this programmatically would be preferential. Otherwise, I'll know the domain and host for the request, so I can at least set the DC portions of the DN, but the organizational units (OU) seem to be pretty important for querying data. If I can find some of the root-level OU values associated with the user, I can do an ldap_search and crawl.

    I browsed through the existing questions and found some similar ones, but nothing that really addressed this, so my apologies if the obvious answer is out there. Thanks for the help.
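
    One hedged aside (mine, not the poster's): Active Directory accepts a bind with the userPrincipalName (user@domain), and you can then search by sAMAccountName from the bare DC-portion base, so no OU knowledge is needed up front. A command-line sketch with ldapsearch; the host, domain, account, and password are all placeholders:

      # bind as the user who just logged in, then look up that user's own entry
      ldapsearch -x -H ldap://dc1.example.local \
        -D 'jsmith@example.local' -w 'TheirPassword' \
        -b 'dc=example,dc=local' '(sAMAccountName=jsmith)' dn givenName sn

    The dn attribute in the result is the full DN, OUs included; PHP's ldap_search takes the same base and filter after the ldap_bind.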

  • Windows 7 TCP/IP network share guide - looking for one, to resolve a failure to mount a LaCie network drive that works on XP, Linux, and Mac

    - by Rob
    Can anyone recommend a really good, readable Windows 7 TCP/IP network share guide, book, or reference? I want this because I cannot mount my LaCie 2big ethernet network drive in Windows 7 (32-bit Home), but I can mount it in Windows XP Home 32-bit, Ubuntu Linux 10.04, and Apple Mac OS X. The drive is mounted via the accompanying LaCie Ethernet Agent in XP (which I believe uses the "Bonjour" protocol); on Mac and Linux it works without further software.

    Another Super User user has the same problem, but no answer: Trouble accessing network drives in Windows 7. I hope my take on the question shows a better willingness to investigate and do some digging - and therefore invites some suggestions to help with this.

    The drive is detected by Windows 7 (i.e. a "network drive found" speech bubble), but on trying to open an Explorer window, it remains blank with the Windows busy pointer. I'd prefer not to reinstall Windows 7 to see if that cures the problem; I'd rather understand what is or isn't happening, perhaps even compare differences with Windows XP. Suggestions, please, for such guides, or even for the original problem itself.

    Update/Edit: Rewrote the question more comprehensively here: http://superuser.com/questions/304209/looking-for-definitive-answer-to-accessing-a-network-share-via-windows-7-home-and

  • Ubuntu Launcher Items Don't Have Correct Environment Vars under NX

    - by ivarley
    I've got an environment variable issue I'm having trouble resolving. I'm running Ubuntu (Karmic, 9.10) and coming in via NX (NoMachine) on a Mac. I've added several environment variables in my .bashrc file, e.g.:

      export JAVA_HOME=$HOME/dev/tools/Linux/jdk/jdk1.6.0_16/

    Sitting at the machine, this environment variable is available on the command line, as well as for apps I launch from the Main Menu. Coming in over NX, however, the environment variable shows up correctly on the command line, but NOT when I launch things via the launcher.

    As an example, I created a simple shell script called testpath in my home folder:

      #!/bin/sh
      echo $PATH && sleep 5
      quit

    I gave it execute privileges:

      chmod +x testpath

    And then I created a launcher item in my Main Menu that simply runs:

      ./testpath

    When I'm sitting at the computer, this launcher runs and shows all the stuff I put into the $PATH variable in my .bashrc file (e.g. $JAVA_HOME, etc). But when I come in over NX, it shows a totally different value for the $PATH variable, despite the fact that if I launch a terminal window (still in NX) and check $PATH there, it shows up correctly.

    I assume this has to do with which files get loaded by the windowing system over NX, and that it's some other file. But I have no idea how to fix it. For the record, I also have a .profile file with the following in it:

      # if running bash
      if [ -n "$BASH_VERSION" ]; then
          # include .bashrc if it exists
          if [ -f "$HOME/.bashrc" ]; then
              . "$HOME/.bashrc"
          fi
      fi
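
    A hedged guess at a workaround (not from the question): menu launchers under NX spawn non-login, non-interactive shells, so ~/.bashrc is never sourced. Forcing the launcher's script to run as a login shell sidesteps that, as in this sketch:

      #!/bin/bash -l
      # -l makes bash act as a login shell, so ~/.profile (and through it,
      # ~/.bashrc) is sourced even when started from the NX session's menu
      echo "$PATH" && sleep 5

    Alternatively, keeping #!/bin/sh and adding . "$HOME/.profile" as the script's first line should achieve the same effect.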

  • gitweb refusing to blame

    - by Slipp D. Thompson
    I'm attempting to get gitweb (git 1.8.4.2, via git instaweb) in a project dir on my Debian server to offer blame views.

    In my /etc/gitweb.conf:

      … # default logo, favicon, etc. settings
      $feature{'blame'}{'default'} = [1];
      $feature{'pickaxe'}{'default'} = [1];
      $feature{'snapshot'}{'default'} = ['tgz', 'txz', 'zip'];
      $feature{'highlight'}{'default'} = [1];
      $feature{'pathinfo'}{'default'} = [1];

    In my global config file:

      [gitweb]
          blame = true
          snapshot = tgz, txz, zip
          patches = 256
          avatar = gravatar
      [instaweb]
          local = false
          httpd = apache2 -f
          port = 4321

    In my project's .git/config file:

      [gitweb]
          blame = true

    And yet, when I try to load a git blame view (by hand-modifying the URL to http://myserversip:4321/?p=.git;a=blame;f=Tests/InchCoordProxyTests.m;h=b4b2…;hb=53b4, since blame action links don't show up), I get "Blame view not allowed". A quick search for "Blame view not allowed" in the gitweb.cgi source reveals plainly that the gitweb_check_feature('blame') conditional is failing.

    What am I doing wrong? Or, is there a way to verbosely print out why gitweb is doing what it's doing (e.g. which config files were read, which settings were loaded from each file, etc.)?
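
    A hedged hunch rather than a confirmed fix: git instaweb generates its own gitweb configuration, so /etc/gitweb.conf may never be read, and per-repository [gitweb] settings are only honored when the feature is marked overridable. Appending something like this to the config instaweb actually loads could help; the path is an assumption and may vary by git version:

      # let per-repository .git/config [gitweb] sections switch blame on
      cat >> .git/gitweb/gitweb_config.perl <<'EOF'
      $feature{'blame'}{'default'} = [1];
      $feature{'blame'}{'override'} = 1;
      EOF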

  • IIS 6 Denies access to the default document

    - by Jim
    I've got Windows Server 2k3 with IIS6 hosting a couple of ASP.NET MVC 2 applications (.NET 4), all in the Default Web Site. Most of them simply use Integrated authentication, but a couple use Forms as well. All the applications work properly and are correctly accessible.

    The problem I'm trying to resolve is access to the default document. It is currently specified as index.htm. Both index.htm and the Default Web Site are configured to allow anonymous access (with none of the authenticated-access boxes checked). However, access is denied to the file: accessing via server.domain.tld/ and server.domain.tld/index.htm both yield 401 errors. However, server.domain.tld/default.htm (a file that does not exist) properly returns a 404. If I alter the file security on index.htm to allow Integrated authentication, then requesting /index.htm directly works properly for users with domain accounts, but anonymous users get a login prompt/401.

    How can I configure IIS to allow all users to view index.htm via server.domain.tld/?

  • How to troubleshoot Linksys E4200 Remote Management

    - by Jordan
    My Linksys E4200 is configured for Remote Management, but the router is not accepting the connections. Here's the configuration under Administration > Management > Remote Management Access:

    - Remote Management: Enabled
    - Access via: HTTP
    - Remote Upgrade: Disabled
    - Allowed Remote IP Address: Any IP Address
    - Remote Management Port: 8080

    The router is set up to use 192.168.10.41 as its static Internet IP address, and 192.168.35.1 as its LAN IP address. I can access the router just fine via its LAN IP address, but I can't make a connection using http://192.168.10.41:8080. I've tried variations of the settings above (enabled HTTPS, enabled Remote Upgrade, set an IP range of 192.168.10.1-254), but nothing has worked yet. Hoping someone can at least point me in the right direction. Thanks.

    Update: To clarify, I have a wired router that connects straight to the T1 modem. It's configured to use 192.168.10.1-254 as its internal LAN range. The E4200 wireless router in question is on that LAN, using 192.168.10.41 as its WAN IP address. The E4200's internal LAN range is 192.168.35.1-254. I'm not trying to access the E4200 from the Internet; I'm just trying to access it from its WAN IP address. Thanks.

  • ATI Radeon 5670 Won't Show Resolutions over 1400x900

    - by Phil Sandler
    Just got my new Dell computer with Windows 7 and an ATI Radeon 5670. I attached it to my current monitor, a Samsung 24" (2443BWT). Windows 7 does not allow me to display at resolutions greater than 1400x900. The setup runs through a VGA cable into the VGA port of the card. The card also has a DVI port, but I need to use the VGA port because my KVM supports only VGA. My old PC (Windows XP, GeForce 8600 video) can display at 1900x1200 on the same monitor (which is what I want) and even higher; it does this through a VGA cable also connected to the KVM (through the DVI port, but using an adapter). I have tried the same setup (DVI-to-VGA adapter) on the new PC, and nothing changed.

    I have tried:

    - Updating the drivers via Windows "Update Driver" (says they are current)
    - Installing the updated version of the drivers from ATI (made no difference)
    - Installing PowerStrip (all the options I would need for a custom resolution are greyed out)

    Installing the drivers/software from ATI caused the ATI Catalyst Control Center software to stop functioning, so I can no longer even start it. I have found some references to other people having this problem, and instructions on cleaning the software off and reinstalling it (as uninstalling normally doesn't solve it); I will try this tonight. In any case, I didn't see any options in CCC that would allow me to override the settings for max resolution. However, I didn't tinker with it too much before I tried updating the drivers, so I may have missed a setting.

    I contacted Samsung via online chat and they say it's a problem with the video card/driver (of course - what else would they say?). Any thoughts on what else I could try?

  • One workstation gets slow access to the server, but others are fast

    - by Mike Hanson
    I've just set up a machine with Windows Server 2008. It hosts various services, like IIS, POP3, SMTP, music for Squeezeboxes, and VNC. All was working well for the first week or so.

    One day I needed to create a mapped drive on the server, so it could access files on my workstation. Windows indicated that Network Discovery was needed, so I turned it on with the "Home / Office" option (rather than "Public"). This may be coincidence, but since that time I've been having trouble accessing various services from my main workstation (running Windows 7/64):

    - POP3 continued working correctly, but SMTP was delayed or failed entirely. (Telnet took 20 seconds to connect, but Outlook would never send messages.)
    - VNC failed entirely. I reinstalled it on the server, and now it works but feels sluggish.
    - The music web server was extremely delayed and usually failed. I tried reinstalling, and now it takes about 30 seconds to show the page name on the browser tab, and another 30 seconds to display any page contents.

    Other machines on the local network seem fine, as do machines connected via the Internet. I don't believe I changed anything on my own machine that would cause this. I considered the possibility that my anti-virus was involved, so I uninstalled AVG (commercial version), but that didn't help. I installed Norton 360 after that; it didn't complain of viruses on my machine, and the delays remained.

    Because only my machine is affected, I'm tempted to blame it, except that reinstalling software on the server improved the situation, so there is almost certainly something going on with the server too. The firewall has all the necessary ports open, and it works fine for the other workstations (including external machines connected via the Internet), which indicates that it should be OK. Any ideas?

  • No network connection for VMware ESXi guests

    - by JavaDev
    I'm new to VMware and am setting up an ESXi server as a trial, with the intention of possibly virtualizing some of our servers in the near future. I have set up ESXi on a Dell PowerEdge server and installed a CentOS 5.6 and an Ubuntu 11.04 guest OS on it. However, I cannot get networking on my guest OSes.

    The host is connected to a network with a DHCP server via a switch and is configured with a static IP. I have the default setup for networking on the host: both guests are connected to the default vmnic1 adapter via the virtual switch vSwitch0. One thing, though: the virtual adapter shows 'Observed IP ranges' of XXX.XXX.XXX.194-XXX.XXX.XXX.195 (I've blanked out the initial prefixes), i.e. just two addresses, even though the network the host is connected to has the usual 255.255.255.0 subnet mask.

    On the guest machines (using DHCP by default), I can see an eth0 interface, but with no connection or assigned IP address. A physical machine connected to the same network gets a DHCP lease as expected. How do I get networking working on my guest OSes? Apologies for the long-winded question.

  • Enterprise Redirection Services?

    - by Aaron Alton
    This is probably a case of "if I knew what it was called, I could google it in 5 minutes" - but I don't know what it's called. It's probably best to explain the requirement using an example.

    We have a number of services (VPN, OWA, etc.) which we host from one of our datacenters. We have a number of datacenters, and we technically already have the infrastructure in place to support these services at several of them. To provide access to these services, I would create an external DNS entry (e.g. VPN.MyCompany.com pointing to the gateway IP of one of my DCs), and clients would connect to it via the DNS entry.

    Since I have multiple datacenters that can support this service, I could theoretically offer a "highly available, geographically dispersed" solution if I could point this DNS entry at some sort of third party who offers highly available "redirection" services. If my primary site goes down, I could just make a change via some management console and configure the redirector to point to a different DC. Of course, it would be fairly straightforward to set this sort of thing up on one of our own servers, but that would kinda defeat the purpose of a highly available third party.

    Is anyone familiar with a service like this? I'm thinking something like DynDNS, but with enterprise availability guarantees.

  • Can't do anything with ext. HDD, ".Trashes" is probably the boogeyman (on a Mac!)

    - by Sander Schaeffer
    I bought an external hard drive today, a Samsung, but I'm not able to do anything with it. A few notes on that:

    - I can't put anything on the hard drive; it stays stuck on 'preparing copying files'.
    - I can delete any system file on the hard drive except the folder ".Trashes", which gives 'Unexpected error: -50'.
    - I've tried to empty my own trash can; no change.
    - I've set the file permissions on .Trashes to read/write for everyone; that doesn't change a thing.
    - I've tried formatting the whole drive with Disk Utility, but it quits at the start because the drive cannot be deactivated.

    I've also tried a few terminal commands:

      sudo rm -rf /Volumes/Untitled\ 1/.Trashes    # Directory not empty
      rm -rf /Volumes/Untitled\ 1/.Trashes         # no permissions

    and:

      cd /Volumes
      ls -al
      cd name_of_partition
      ls -al
      rm -rf .Trashes

    Again: a permission error. Also: I can't repair the drive permissions via Disk Utility's 'repair disk permissions' button, because it is blank. I really can't figure out how to delete .Trashes, format the drive, or get the damn thing working. Any suggestions?

    P.S. If this is the wrong Stack Exchange site, please redirect me!
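
    A hedged suggestion, not from the thread: if the goal is simply a clean working disk, the command-line diskutil can often force-unmount and erase a volume the Disk Utility GUI refuses to touch. The identifier /dev/disk2 below is a placeholder - check diskutil list first, since erasing the wrong device destroys its data:

      # find the external drive's identifier (e.g. disk2) - do not guess
      diskutil list

      # force everything on it to unmount, then erase it as a fresh HFS+ volume
      diskutil unmountDisk force /dev/disk2
      diskutil eraseDisk JHFS+ Samsung /dev/disk2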

  • How do I tell WebSphere 7 about a front-end load balancer so that redirects are handled correctly?

    - by TiGz
    On WebLogic 11g I can use the console to set the FrontendHost and FrontendPort on a server or on a cluster, so that redirects are handled correctly and end up resolving to the front-end load balancer instead of the local host. The MBean associated with this on WebLogic is, for example:

      MBean Name:          com.bea:Name=AdminServer,Type=WebServer,Server=AdminServer
      Attribute Name:      FrontendHost
      Type:                java.lang.String
      Readable / Writable: RW
      Description:         The name of the host to which all redirected URLs
                           will be sent. Sets the HTTP frontendHost. If specified,
                           WebLogic Server will use this value rather than the one
                           in the HOST header, ensuring that the webapp always has
                           the correct HOST information, even when the request
                           comes through a firewall or a proxy.

    How is the same thing achieved under WebSphere 7?

    Follow-up info: I have two use cases, actually. One is that I have a web app running under WebSphere on host A on port 9002 and an LB running on host B at port 80; when I visit the home page of the app via the LB at http://hostb/app, the app redirects my browser to http://hostb:9002/app and it 404's. I think this is WebSphere's fault, but I guess it could be the app's fault?

    The second is that the web app in question needs to send emails containing URLs the customer can click to get back into the web app - obviously these need to go via the LB. On WebLogic the app uses MBeans to derive the LB URL, and I was hoping to use a similar mechanism on WebSphere.

  • Query specific nameserver for a particular domain upon VPN connect

    - by MT
    Some background: I have a work laptop with Ubuntu 9.10 on it. I have a small network at home where I've been running some basic services (for myself/my family) for some 10 years. In my home network there is a nameserver (Fedora) running BIND 9 with two "views". One view is the "outside" view, and it provides name resolution (to the Internet at large) for email, a wiki, and a couple of blogs. The "inside" view provides name resolution (to the internal RFC 1918 addresses of these servers) as well as for all the inside hosts, network equipment, etc. I connect to my home network from outside (such as from work) with an OpenVPN client.

    What I'd like to be able to do is resolve names on my internal network across this VPN (so I get the RFC 1918 "inside" responses) without fully changing my resolver to the DNS server at my house. For example, if I connect to the VPN from work, I can change my resolver (by editing resolv.conf) to the DNS server at my house (across the VPN) and then successfully resolve all of the inside DNS names on my home network. The issue is that I'm then no longer able to resolve "inside" names provided by my work's DNS servers (because I'm using my home DNS server). Alternatively, I can connect to the VPN and access my home servers via IP addresses directly, but this is inconvenient and causes issues with Apache name-based hosting (among other things).

    In the end, the effect I'm trying to achieve is as follows: when I connect to the VPN, I automatically start sending DNS requests for *.myhomedomain.com to my home nameserver, while any other requests continue to go to the nameserver I was using before (the one I received on my company LAN via DHCP). When I disconnect the VPN, requests for *.myhomedomain.com go back to the local LAN DNS server (i.e. all requests go there again). I'm looking for suggestions as to how this can be accomplished.
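
    One hedged way to get split-horizon resolution on the laptop (my sketch, not the poster's): run a local dnsmasq that forwards only the home domain to the home nameserver and everything else to the work resolvers, toggled by the OpenVPN up/down scripts. Here 10.8.0.1 stands in for the home DNS server's VPN-side address:

      # sketch of an OpenVPN "up" script (the "down" script removes the file
      # and restarts dnsmasq again); /etc/resolv.conf must point at 127.0.0.1
      cat > /etc/dnsmasq.d/homevpn.conf <<'EOF'
      server=/myhomedomain.com/10.8.0.1
      EOF
      /etc/init.d/dnsmasq restart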

  • Setting up logging for a remote backup script

    - by Brian Dainis
    So I wrote up a short script that I am planning to run daily via a cron job, to package up my site files and send them to a remote location. I also plan to incorporate DB dumps, but I have not gotten that far yet. My issue today, however, is that I'm uncertain how to log the output of each command for errors, warnings, or other pertinent information the command may output. I would also like to install some type of fail-safe, so if something goes horribly wrong the script will stop dead in its tracks and notify me via email or something. OK, the email thing is not as critical, but it would be nice. Does anybody have any ideas for that?

    Here is what I have so far. By the way, both servers are CentOS 6.2 running standard LAMP.

      #!/bin/sh

      #################################
      ### Set Vars
      #################################
      THEDATE=`date +%m%d%y%H%M`

      #################################
      ### Create Archives
      #################################
      tar -cf /root/backups/files/server_BAK_${THEDATE}.tar -C / var/www/vhosts
      gzip /root/backups/files/server_BAK_${THEDATE}.tar

      #################################
      ### Send Data to Remote Server
      #################################
      scp /root/backups/files/server_BAK_${THEDATE}.tar.gz user@host:/home/bak1/ftp/backups/

      #################################
      ### Remove Data from this Server
      #################################
      rm -rf /root/backups/files/server_BAK_${THEDATE}.tar.gz
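
    A hedged sketch of the logging and fail-safe pieces (the address and log path are placeholders, and mail assumes mailx, which is usually present on CentOS): send all output to a timestamped log, abort on the first failing command, and mail the log when that happens:

      #!/bin/sh
      THEDATE=`date +%m%d%y%H%M`
      LOG=/root/backups/logs/backup_${THEDATE}.log

      # everything the script prints (stdout and stderr) goes to the log
      exec >"$LOG" 2>&1

      # stop dead on the first failing command...
      set -e
      # ...and mail the log before exiting when that happens
      trap 'mail -s "backup FAILED on $(hostname)" admin@example.com < "$LOG"' EXIT

      tar -cf /root/backups/files/server_BAK_${THEDATE}.tar -C / var/www/vhosts
      gzip /root/backups/files/server_BAK_${THEDATE}.tar
      scp /root/backups/files/server_BAK_${THEDATE}.tar.gz user@host:/home/bak1/ftp/backups/
      rm -f /root/backups/files/server_BAK_${THEDATE}.tar.gz

      # success: disarm the failure trap so no mail is sent
      trap - EXIT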

  • Nginx settings are screwing up my Drupal form submissions, how do I fix?

    - by bflora
    How do I tell nginx to "ignore" specific URLs or pages on my web site?

    I run a Drupal site where anonymous visitors get served via nginx while logged-in users get served via Apache. We do this to keep the load down and scale better. It works great, except that since we set up nginx, a good number of Drupal forms no longer work. For example, before installing nginx, if you created a new article, then clicked "edit" and edited the article, you could click "save" and your changes would be saved. After setting up nginx, when you make edits and then click "save", the page simply refreshes, but now with "nginx-index.php" inserted into the URL, and your changes are not actually saved to the database. So if you go to edit an article, you'll be on domain.com/node/##/edit or something like that; when you try to save your changes, you'll wind up at domain.com/nginx-index.php?q=node/##/edit, and your changes will not be saved.

    There is a way around this, but only for administrative users. If you go to a form where this problem is happening and comment or uncomment three lines in our settings.php file, the form will save properly. Those three lines are:

      // 'cache_form' => array(
      //   'engine' => 'db',
      // ),

    If they're commented, you uncomment them, then save the form. If they're uncommented, you comment them out and save the form. Obviously, this sucks.

    My friend who set up our server (and then left the country) told me that there are some nginx settings that can tell it to "ignore" certain URLs or pages, which could work here. How do I do this, and where do I do it?
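
    A hedged sketch of the usual nginx-side answer (the backend address and URL pattern are assumptions): add a location block that matches the form URLs and proxies them to Apache, so those requests never take the anonymous/static path:

      # inside the site's server { } block; Apache's port is a placeholder
      location ~ ^/node/.*/edit {
          proxy_pass http://127.0.0.1:8080;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
      }

    The same pattern can be repeated (or the regex widened) for any other path that must always reach Drupal under Apache.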

  • Over a gigabit connection, Teracopy does 31 MB/s, but Windows 8 does ~109 MB/s?

    - by Gaurang
    I got my brain-melting first taste of gigabit networking today, between my 2011 Mac Mini and Windows 8 Pro desktop, connected via Cat 5e to a Linksys WRT320N (sporting DD-WRT). After making sure that the line speed on both systems showed 1 Gbps, I proceeded to copy a 2.4 GB MP4 from the Mini to the Win 8 desktop (SMB sharing).

    Although satisfied with the 30-34 MB/s that Teracopy was showing (that was a proper step up for me from 10 MB/s), I was still curious about this massive difference between the advertised and real-world speed. Two hours of Google had me believing that there were other factors that resulted in less speed, SMB being one. So, just for the sake of doing it, I iPerf'd both systems, and guess what that showed - around 875 Mbps on both!

    I then stumbled upon this little piece of info, after which I turned off Teracopy and copied the same file through Windows 8's regular copier: 109 MB/s. Molten brains :)

    What exactly is causing this? And can I enable such speeds via Teracopy? I really dig the extra features that Teracopy has, and will surely miss them now :D

  • Reverse SSH tunnel: how can I send my port number to the server?

    - by Tom
    I have two machines, Client and Server. Client (who is behind a corporate firewall) opens a reverse SSH tunnel to Server, which has a publicly-accessible IP address, using this command:

      ssh -nNT -R0:localhost:2222 [email protected]

    In OpenSSH 5.3+, the 0 occurring just after the -R means "pick an available port" rather than explicitly calling for one. The reason I'm doing this is because I don't want to pick a port that's already in use. In truth, there are actually many Clients out there that need to set up similar tunnels.

    The problem at this point is that the server does not know which Client is which. If we want to connect back to one of these Clients (via localhost), then how do we know which port refers to which client? I'm aware that ssh reports the port number to the command line when used in the above manner. However, I'd also like to use autossh to keep the sessions alive. autossh runs its child process via fork/exec, presumably, so that the output of the actual ssh command is lost in the ether. Furthermore, I can't think of any other way to get the remote port from Client.

    Thus, I'm wondering if there is a way to determine this port on Server. One idea I have is to somehow use /etc/sshrc, which is supposedly a script that runs for every connection. However, I don't know how one would get the pertinent information here (perhaps the PID of the particular sshd process handling that connection?). I'd love some pointers. Thanks!
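
    One hedged sketch of the /etc/sshrc idea (untested; the log path is an assumption): the shell that runs sshrc is spawned by the per-connection sshd, so $PPID identifies that process, and ss can list the listening ports it owns. Two caveats: sshrc only runs when a session is requested, so the clients would have to drop -N, and the forward may be registered a moment after login, hence the sleep:

      #!/bin/sh
      # /etc/sshrc - log which reverse-forward ports belong to which client
      sleep 1   # give the tcpip-forward request a moment to be processed
      PORTS=$(ss -tlnp 2>/dev/null | grep "pid=$PPID," | awk '{print $4}')
      echo "$(date) user=$USER from=${SSH_CONNECTION%% *} ports=$PORTS" \
        >> /var/log/reverse-tunnels.log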

  • Debian: Unable to mount a second drive as a subdirectory inside of another partition.

    - by jkndrkn
    Hello. I have the following /etc/fstab:

      # /etc/fstab: static file system information.
      #
      # <file system> <mount point>      <type>       <options>                  <dump> <pass>
      proc            /proc              proc         defaults                   0      0
      /dev/md1        /                  ext3         defaults,errors=remount-ro 0      1
      /dev/md0        /boot              ext3         defaults                   0      2
      /dev/md5        /home              ext3         defaults                   0      2
      /dev/md3        /opt               ext3         defaults                   0      2
      /dev/md6        /tmp               ext3         defaults                   0      2
      /dev/md2        /usr               ext3         defaults                   0      2
      /dev/md4        /var               ext3         defaults                   0      2
      /dev/md7        none               swap         sw                         0      0
      /dev/sdc        /home/httpd        ext3         defaults                   0      2
      /dev/hda        /media/cdrom0      udf,iso9660  user,noauto                0      0
      /dev/sdc1       /mnt/usb/backup-1  auto         defaults                   0      0

    I am unable to get /dev/sdc to mount at /home/httpd on reboot. The /home/httpd directory exists. Mounting via mount -t ext3 /dev/sdc /home/httpd works just fine. Mounting via mount -a generates the following error message:

      mount: you must specify the filesystem type

    This is, incidentally, the same message that I see while booting. The error message goes away if I comment out the line in fstab starting with /dev/sdc.
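
    A hedged observation (mine, not the poster's): the fstab lists both the whole disk /dev/sdc (ext3, on /home/httpd) and a partition /dev/sdc1 with type auto. mount -a prints "you must specify the filesystem type" when it cannot detect a filesystem on a device, which is exactly what would happen for sdc1 if the filesystem actually lives on the bare disk. blkid would confirm:

      # show which filesystems the kernel can actually detect on each device
      blkid /dev/sdc /dev/sdc1

      # if only /dev/sdc reports TYPE="ext3", the /dev/sdc1 'auto' line is the
      # one breaking mount -a; remove it or point it at a real filesystem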

  • Problems setting up VLC server/client streaming

    - by Ayos
    I'm trying to set up a Linux machine as the server and a Windows XP machine as the client. Both machines are connected to the same local network via a Wi-Fi router. I set up the stream with the following properties:

    - HTTP stream
    - port 8080
    - play locally

    And not much else. There is no firewall on the Windows client (Windows Firewall is disabled). When I try to open the network stream from the client machine (using VLC or Windows Media Player), I get the following errors.

    Media Player error code: 0xC00D11B3 (encountered a network problem).

    VLC console:

      main warning: connection timed out
      access_mms error: cannot connect to 192.168.1.3:8080
      main debug: no access module matching "http" could be loaded
      main debug: TIMER module_need() : 12625.810 ms - Total 12625.810 ms / 1 intvls (Avg 12625.809 ms)
      main error: open of `http://192.168.1.3:8080' failed
      main debug: dead input
      main debug: repeating item
      main debug: starting playback of the new playlist item
      main debug: resyncing on http://192.168.1.3:8080
      main debug: http://192.168.1.3:8080 is at 0
      main debug: creating new input thread
      main debug: Creating an input for 'http://192.168.1.3:8080'
      main debug: using timeshift granularity of 50 MiB, in path 'C:\DOCUME~1\Accer\LOCALS~1\Temp'
      main debug: `http://192.168.1.3:8080' gives access `http' demux `' path `192.168.1.3:8080'
      main debug: creating demux: access='http' demux='' location='192.168.1.3:8080' file='\\192.168.1.3:8080'
      main debug: looking for access_demux module: 0 candidates
      main debug: no access_demux module matched "http"
      main debug: TIMER module_need() : 0.461 ms - Total 0.461 ms / 1 intvls (Avg 0.461 ms)
      main debug: creating access 'http' location='192.168.1.3:8080', path='\\192.168.1.3:8080'
      main debug: looking for access module: 2 candidates
      access_http debug: http: server='192.168.1.3' port=8080 file=''
      main debug: net: connecting to 192.168.1.3 port 8080
      qt4 debug: IM: Deleting the input
      main debug: TIMER input launching for 'http://192.168.1.3:8080' : 13397.979 ms - Total 13397.979 ms / 1 intvls (Avg 13397.978 ms)
      qt4 debug: IM: Setting an input

    Need help. Thanks in advance.
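
    For reference, a hedged sketch of starting the stream from the Linux side on the command line (the port is from the question; the file name is a placeholder). A client-side "connection timed out" usually means nothing is listening on 8080 at all, which this rules in or out:

      # serve video.mp4 over HTTP on port 8080 using VLC's stream output chain
      vlc -vvv video.mp4 --sout '#standard{access=http,mux=ts,dst=:8080}'

      # then, on the server itself, confirm something is listening on 8080
      netstat -tln | grep 8080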
