Search Results

Search found 17621 results on 705 pages for 'just my correct opinion'.

Page 498/705 | < Previous Page | 494 495 496 497 498 499 500 501 502 503 504 505  | Next Page >

  • Why is Apache htdigest authentication failing in IE10 on Windows 8?

    - by Kevin Fodness
    One of our developers reported that for the past week or two, the htdigest authentication we have set up on our test sites in Apache has not been working in IE10 on Windows 8. It's fine in IE10 on Windows 7, and it's fine in Chrome on Windows 8. The specific behavior is: navigate to a site with htdigest authentication enabled, the username and password form pops up, enter the correct username and password, and the username and password box pops up again.
    Potentially useful information:
    - All patches applied on the Windows 8 box
    - No additional software on the Windows 8 box other than Outlook 2013 and a browser test suite (Chrome, Firefox, Opera, Chrome Canary, Opera Next)
    - Win8 running in a virtual machine on Xen
    - The same behavior can be replicated on Win8/IE10 on Browserstack.com
    - Server running Ubuntu 10.10 with Apache 2.2.16
    This feels like a patch was applied to the Windows box that broke digest authentication for IE10 on Win8 (the box is configured for automatic updates), but without knowing a specific date I can't necessarily nail this down. Has anyone else experienced this problem?
    EDIT: This problem only happens in the "Metro" interface, not when running IE10 in desktop mode. As of a few weeks ago, it worked fine even in the "Metro" interface.
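
    For reference, a typical mod_auth_digest block on Apache 2.2 looks roughly like the sketch below; the paths and realm name here are made up for illustration, not taken from the poster's configuration:
        <Directory "/var/www/testsite">
            AuthType Digest
            AuthName "Test Site"
            AuthDigestDomain /
            AuthDigestProvider file
            AuthUserFile /etc/apache2/.htdigest   # created with: htdigest -c /etc/apache2/.htdigest "Test Site" username
            Require valid-user
        </Directory>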

    Read the article

  • WordPress permalinks not working, everything seems fine

    - by javipas
    I have a WordPress blog I've migrated from another CMS, and I've been having a lot of problems with my permalink structure: lots of articles give a 404, even though they are there, somewhere, published. The site is www.muycomputerpro.com (MCP for short), and, for example, an article that should be found is: http://muycomputerpro.com/Actualidad/Especiales/2009-las-grandes-crecen-en-la-bolsa
    If I do a search on the search tool at MCP, the result is there (see EnlacesMCP-1.jpg). But when I click on the link, our 404 error page appears (see EnlacesMCP-2.jpg). The weird thing is, the article is published, and the permalink is the right one, as you can see on this screenshot of the WordPress CMS: the permalink (below the title) is correct (http://www.muycomputerpro.com/Actualidad/Especiales/2009-las-grandes-crecen-en-la-bolsa/) but it does not work. In fact, if I try to use the short link (http://www.muycomputerpro.com/?p=5023) the article does not show either.
    I've accessed my WordPress DB and I've searched for the article to see if there is something wrong there, but from what I can tell all the fields are OK; here's a screenshot.
    I really don't know what is causing this. The permalink structure should work (I'm using the "Custom permalink" plugin to preserve the old URLs, which had an alphanumeric code at the end of the postname) and the permalink config on WordPress is "/%postname%/". I really need help :(

    Read the article

  • What options to use for Accurate bacula backup?

    - by Kiss Stefan
    It's actually two questions in one. The first is a bit more theoretical: when specifying Accurate options, how does bacula figure out whether a file needs to be backed up? Is it a simple AND? As in, if the options are Accurate = sm5, bacula will not back up the file if ((size = old size) AND (modtime = old modtime) AND (md5 = old md5))? Is that correct? Do any of the options take precedence? For example, would a file be skipped if its modification time is different but it has the same md5sum? Are there any implied options that you cannot ignore?
    Practical case (bacula 5.0.1): I have to back up an svn repo. In order to be able to make incremental backups as simple as possible, I am hotcopying it (client run before) to another location, which bacula then backs up (and then deletes with client run after). Now in the fileset I have Accurate = spnd5. This should tell bacula to take into consideration size, permission bits, number of links, decreases in size, and md5sum. However, an incremental still includes a full copy of the svn. What am I doing wrong? It seems that it takes creation time into account even though I have not specified it.
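
    For context, the Accurate letters are set per FileSet in the Options resource; a minimal sketch is below (the names and paths are hypothetical, and it assumes signature = MD5 is needed so the '5' check has a stored checksum to compare against):
        FileSet {
          Name = "svn-hotcopy-fs"
          Include {
            Options {
              signature = MD5       # store MD5 checksums in the catalog
              Accurate = spnd5      # size, permissions, number of links, size decreases, md5
            }
            File = /backup/svn-hotcopy
          }
        }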

    Read the article

  • Issue with multiple bridging for KVM hosts

    - by Henry-Nicolas Tourneur
    I'm using KVM and libvirt on my host (Debian lenny) + 2 bridges per guest (one for mgmt, one for public traffic). That setup isn't stable at all: sometimes I can ping a management IP, sometimes not. I don't know if my bridging parameters are correct, could you check? Or tell me if there is anything wrong... Please also note that the interface on the guest doesn't flap and that I get no logs on my host. Of course forwarding is enabled.
        iface eth3 inet manual

        auto bond0
        iface bond0 inet manual
            slaves eth1 eth2
            pre-up ip link set bond0 up
            down ip link set bond0 down

        auto br0
        iface br0 inet static
            address 10.160.0.7
            netmask 255.255.255.128
            bridge_ports eth3
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off

        auto br0:1
        iface br0:1 inet static
            address 10.160.0.9
            netmask 255.255.255.128

        auto br0:2
        iface br0:2 inet static
            address 10.160.0.10
            netmask 255.255.255.128

        auto br1
        iface br1 inet static
            address 217.4.40.242
            netmask 255.255.255.240
            gateway 217.4.40.241
            pre-up /etc/network/firewall start
            bridge_ports bond0
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off

        auto br1:1
        iface br1:1 inet static
            address 217.4.40.252
            netmask 255.255.255.240

        auto br1:2
        iface br1:2 inet static
            address 217.4.40.253
            netmask 255.255.255.240

    Read the article

  • (manually configured) kernel update leaves wireless in a mess

    - by Mala
    I recently upgraded my kernel from 2.6.31-gentoo-r6 to 2.6.32-gentoo-r7. In both cases, I configured everything manually. However, since the upgrade, my wireless card appears to be on the fritz. It will connect to networks just fine, and remain connected, but can only access the internet (and other hosts on the network) for about 3 seconds after connecting. Reconnecting to the network appears to fix the problem... for another 3 seconds or so. The problem is "solved" by booting into the older kernel. The relevant lspci entry is:
        02:00.0 Network controller: Intel Corporation PRO/Wireless 5300 AGN [Shiloh] Network Connection
    I'm pretty sure I have the correct drivers enabled in the kernel:
        Device Drivers --->
          Network device support --->
            Wireless LAN (IEEE 802.11) --->
              <*> Intel Wireless Wifi
              [*]   Enable LED support in iwlagn and iwl3945 drivers
              [*]   Enable Spectrum Measurement in iwlagn driver
              [*]   Enable full debugging output in iwlagn and iwl3945 drivers
              <*>   Intel Wireless WiFi Next Gen AGN (iwlagn)
              [*]     Intel Wireless WiFi 4965AGN
              [*]     Intel Wireless WiFi 5000AGN; Intel WiFi Link 1000, 6000, and 6050 Series
    I tried with the other Intel drivers enabled as well (iwl3945) and no difference. Is there something stupid I'm missing? Is there something I have to recompile after upgrading the kernel (a la nvidia)? Thanks, Mala

    Read the article

  • Connect Chrome to TOR

    - by Jack M
    I'm having difficulty connecting Chrome to TOR. I started trying yesterday. I started Vidalia and the TOR Browser and then followed the advice at http://lifehacker.com/5614732/create-a-tor-button-in-chrome-for-on+demand-anonymous-browsing - downloading Proxy Switchy and setting it up as stated. This resulted in Error 130 (net::ERR_PROXY_CONNECTION_FAILED) in Chrome when I tried to load a webpage. So I looked into Vidalia's settings and noticed that it appeared to be using port 9051, so I set that instead of 8118 as everyone on the internet seems to be suggesting. Then I got a new error: Error 111 (net::ERR_TUNNEL_CONNECTION_FAILED). Digging a bit, I found that Tor should be set as a SOCKS proxy, not an HTTP proxy, so I unticked "use same settings for all protocols" in Proxy Switchy and just set localhost:9051 for SOCKS. That got me Error 7 (net::ERR_TIMED_OUT). And that's when I came here for help.
    I typed up the above question, but then at the last minute decided to do a bit more reading and found someone here who suggested using some command line arguments via a Windows shortcut:
        "C:\snip\chrome.exe" --proxy-server=";socks=127.0.0.1:9051;sock4=127.0.0.1:9051;sock5=127.0.0.1:9051" --incognito check.torproject.org
    And that worked perfectly. Yesterday. Today it doesn't, so I'm having to post this question after all. check.torproject.org gives me a "no" with Chrome, but a "yes" with the default Tor Browser. I tried closing Chrome and restarting it (yes, with the correct shortcut) after Vidalia started, but still nothing. The port number hasn't changed or anything. What gives?
    EDIT: I realized I had a "non tor" instance of Chrome running and that possibly that was causing the command line args to be ignored when I started the new instance. I closed all instances of Chrome and ran my Chrome Tor shortcut, and it did get rid of the "not using Tor" message -- because I got another Time Out error instead. Vidalia's bandwidth graph didn't even blink.
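
    For what it's worth, the single-flag form of that shortcut is usually written as in the sketch below; this assumes Tor's SOCKS listener is on the default port 9050 (9051 is normally the control port), which may not match this particular Vidalia setup, and the chrome.exe path is a placeholder:
        "C:\path\to\chrome.exe" --proxy-server="socks5://127.0.0.1:9050" --incognito check.torproject.org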

    Read the article

  • FWBuilder DNS Object Run Time - when exactly does it resolve the DNS name?

    - by Jakobud
    In Firewall Builder, when you use the DNS Object and set it to run time, when exactly does the firewall (iptables in our case) actually resolve the DNS name? Is it whenever a call is made to that DNS name in the firewall, so the firewall resolves the name on the fly whenever someone or something tries to access that DNS name? Or is it when you execute the fw script to load the rules into iptables, in which case it would resolve the DNS name that one time and then hard-code the resulting IP address into the iptables rules? From what I've read, I think it's #1, but it's just not 100% clear to me.
    We have two servers for a certain function on our network. One is the primary server and one is the backup: alpha0.domain.com and alpha1.domain.com. In DNS we have this: alpha.domain.com -> alpha0.domain.com. If the primary server goes down and we need to switch to the backup, I just change our local DNS record to point to alpha1.domain.com instead. So back to the firewall: if I just put in a Domain Object as alpha.domain.com, do I have to reload the firewall rules every time we switch to the backup alpha server and change the DNS record? Or will the firewall automatically resolve to the correct address even after the switch?
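
    As a point of comparison, a plain iptables rule written with a hostname is resolved once, when the rule is inserted, and the numeric address is what ends up in the kernel; a hedged sketch (chain and hostname are placeholders, not the poster's actual rules):
        # the name is resolved at insertion time; the rule stores the IP, not the name
        iptables -A FORWARD -d alpha.domain.com -j ACCEPT
        iptables -L FORWARD -n    # shows the numeric address that was resolved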

    Read the article

  • Apache virtual server httpd-vhosts undocumented issue

    - by Ethon Bridges
    I have read the Apache documentation on the httpd-vhosts.conf file and, after a couple of hours fighting this problem, figured it out on my own. Here's the situation: we have a domain that ends in .ws. Apparently you can't do this in the conf file. You MUST use the ? wildcard or it will not work. The * wildcard will not work either. Further, in the ServerAlias directive, anything past the first entry will not work if the first entry in the ServerAlias directive is not correct.
    Here is an example of an entry that does NOT work. Note that anotherdomain.com and yetanotherdomain.com will fail because thedomain.ws is not configured correctly:
        <VirtualHost *:80>
            DocumentRoot /opt/local/apache2/sites/ourdomain
            ServerName www.thedomain.ws
            ServerAlias thedomain.ws another domain.com yetanotherdomain.com
            <Directory /opt/local/apache2/sites/ourdomain>
                allow from all
            </Directory>
        </VirtualHost>
    Here is an example of our working entry:
        <VirtualHost *:80>
            DocumentRoot /opt/local/apache2/sites/ourdomain
            ServerName www.thedomain.ws?
            ServerAlias thedomain.ws? another domain.com yetanotherdomain.com
            <Directory /opt/local/apache2/sites/ourdomain>
                allow from all
            </Directory>
        </VirtualHost>
    If there is documentation of this, I sure didn't see it.

    Read the article

  • Using URL rewrite module for http to https redirect

    - by johnnyb10
    Following ruslany's suggestion on the URL Rewrite Tips page here, I'm trying to use URL Rewrite to redirect http:// requests for my site to https://. I've written and tested the rule using a test site I set up, so now the final piece is to create a second (http) site that redirects to my https site. (I need to use a second site because I don't want to uncheck the "Require SSL encryption" checkbox on my existing site.)
    I'm an IIS newbie, so my question is: how do I do this? Should I create a site with the same name and host header, only bound to http? Will IIS let me create a site with the same name? I don't want to screw anything up with my existing site (which is a SharePoint site, currently used by external users). That site currently has both http and https bound to it. So my assumption is that, using IIS (not SharePoint), I will create a new site (http only) with the same name and host header as my existing site, and add the URL Rewrite rule to the http site. And then I guess I should remove the http binding from my existing site? Does that seem correct? Any advice, gotchas, etc., would be appreciated. Thanks.
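
    For reference, the redirect rule ruslany's page describes is generally along these lines in web.config (shown here only as a sketch; the rule name and details may differ from the rule actually tested above):
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="HTTP to HTTPS redirect" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <add input="{HTTPS}" pattern="off" ignoreCase="true" />
                </conditions>
                <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>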

    Read the article

  • mount error 5 = Input/output error

    - by alharaka
    I am running out of ideas. After a long period of testing this morning, I cannot seem to get this to work, and I have no idea why. I want to mount a Windows SMB/CIFS share with a Debian 5.0.4 VM, and it is not cooperating. This is the command I am using:
        debianvm:/home/me# whoami
        root
        debianvm:/home/me# smbclient --version
        Version 3.2.5
        debianvm:/home/me# mount -t cifs //hostname.domain.tld/share /mnt/hostname.domain.tld/share --verbose -o user=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD/username
        mount.cifs kernel mount options: unc=//hostname.domain.tld\share,ip=10.212.15.53,domain=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD,ver=1,rw,user=username,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,pass=*********
        mount error 5 = Input/output error
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
        debianvm:/home/me#
    The word on the nets has not been very specific, and unfortunately it is almost always environment-specific. I receive no authentication errors. I have tried mount -t smbfs and mount -t cifs, along with smbmount and such, and I get the same error each time. I doubt it is a problem with DNS resolution, because logging shows the correct IP address. dmesg | tail -f no longer shows authentication errors when I format the domain and username accordingly. I have played a little with iocharset=utf8, file_mode, and dir_mode as described here. That did not help either. I have also tried ntlm and ntlmv2, assuming it might be a minimum auth method problem, but even without forcing sec=ntlmv2 it no longer produces authentication errors.
    smbclient -L hostname.domain.tld -W SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD -U username correctly lists all the shares and shows the following:
        Domain=[SUBADDOMAIN] OS=[Windows 5.0] Server=[Windows 2000 LAN Manager]

                Sharename       Type      Comment
                ---------       ----      -------
                IPC$            IPC       Remote IPC
                ETC$            Disk      Remote Administration
                C$              Disk      Remote Administration
                Share           Disk
        Connection to hostname.domain.tld failed (Error NT_STATUS_CONNECTION_REFUSED)
        NetBIOS over TCP disabled -- no workgroup available
    I find the last line intriguing/alarming. Does anyone have any pointers!? Maybe I misread the effin manual.

    Read the article

  • What Is The Proper Laptop Battery Care While Running Laptop Solely On Battery?

    - by Boris_yo
    For convenience, I had to move my laptop to another room, away from the room where I always ran it on a UPS without using the battery. Since I now always run the laptop on battery, I'm wondering about the proper usage to prolong battery life.
    Currently I run the laptop on battery with the power supply connected, so the battery is constantly being charged until it reaches 100%. When it does, I disconnect the power supply and continue working until the battery meter shows 10% remaining. That's when I plug in the power supply and let it charge back to 100% while I work. But it takes a lot of time to fully charge the laptop while working, since my power supply is 60W, which should be the reason for such a slow charge (I think the kind of charger I use is an express charger). The thought of charging the laptop until full while doing my work makes me think that, if it takes that much longer to charge, it might keep the battery warm for the whole charging period. That brings me to the question of whether I should keep running the laptop as described above, or whether it would be better to leave the power supply constantly connected and keep the battery between 99% and 100%. On one hand that won't keep the battery warm, but it will frequently top the battery up from 99% back to 100% (which might reduce battery life?). On the other hand, if I keep working solely on battery and recharge it when below 10%, the battery will get warm, but only while charging.
    Can anybody suggest the correct way of running a laptop on battery to ensure better battery life? Dell Latitude E6420, Windows 7 64-bit.

    Read the article

  • Is it possible to add/register an MIB for the Windows built-in SNMP service?

    - by michielvoo
    I need to build monitoring into an existing .NET application. I will use SNMP to send the application's status to the Windows SNMP service. I have used a .NET library to create the SNMP SET request according to the MIB that I have been provided with, and with the correct community. My code now sends multiple 'variables' in a SET request, for example:
        Id:   ".1.3.6.1.4.1.43607.1.1.1.1.1" (ObjectIdentifier)
        Data: 42 (Integer32)
    On my machine I have enabled the SNMP service, configured a community with READ/WRITE permissions, and added localhost to the list of hosts to accept requests from. When I send the SET request I get a response, but it has error status 17 which, according to MSDN, means SNMP_ERRORSTATUS_NOTWRITABLE. The response also has the error index set to 8, which is the number of variables I send; if I send 7 variables, the error index is set to 7.
    I think the problem is that the Windows SNMP service is preconfigured to only accept SET requests for a fixed set of MIBs. How can I get the Windows SNMP service to 'accept' my custom MIB SET request?
    Edit: I downloaded and installed the Windows Server 2003 Resource Kit and tried to 'compile' the MIB file with mibcc.exe ("SNMP MIB Compiler"), but I have not been able to compile any MIB files (even the most basic ones like SNMPv2-SMI.mib).
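
    As a sanity check independent of the .NET library, the same SET can be attempted with net-snmp's snmpset; a hedged example, using the OID quoted above, a placeholder community, and value type i for Integer32:
        snmpset -v 2c -c <community> 127.0.0.1 .1.3.6.1.4.1.43607.1.1.1.1.1 i 42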

    Read the article

  • Windows 7 Aero theme's "greyed out" - no found fix

    - by Robsta
    Brand new machine that was working fine; then, randomly, it booted into a sort of "basic" theme (white task bar, no see-through windows, etc.). I've attempted many fixes and I still don't understand why it doesn't work. I've tried these two solutions: "How to enable Windows 7 Aero Theme" and "Windows 7 Aero Themes Greyed out". These solutions included registry changes, stopping/starting services, and force-starting the Aero theme.
    The closest I got seems to be when I went into Control Panel (category view) > Find and fix problems (System and Security) > Display Aero Desktop Effects. I follow the wizard and let it do its thing, and then an error window pops up: Personalization - "This theme can't be applied to the desktop. Try clicking a different theme." That's what I get from the wizard.
    What can I do? My drivers are all up to date, there are no viruses on the computer, DirectX is installed and updated, and the registry is all correct.
    EDIT: When I boot the computer, I get a notification stating that Windows failed to communicate with the Windows desktop services.
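
    One of the service restarts commonly tried for this symptom (possibly already covered by the guides mentioned above; listed here only as a sketch, run from an elevated command prompt) is the Desktop Window Manager Session Manager:
        net stop uxsms
        net start uxsms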

    Read the article

  • DNS propagation delay or bad configuration?

    - by Javier Martinez
    I have been waiting for DNS propagation for almost 24 hours. I'm not impatient, but I want to know whether I configured my zone correctly or whether there is an error in it. I think it is good, because if I use my server's DNS as my secondary DNS I can resolve and look up hosts fine.
        ;
        ; BIND data file for mydomain.net
        ;
        $TTL    86400
        @       IN      SOA     mydomain.net. mydomain.net. (
                                20120629        ; Serial
                                10800           ; Refresh  3 hours
                                3600            ; Retry    1 hour
                                604800          ; Expire   1 week
                                86400 )         ; Negative Cache TTL
        ;
        @       IN      NS      ns1
        @       IN      NS      ns2
                IN      MX  10  mail
        ns1     IN      A       5.39.X.Y
        ns2     IN      A       5.39.X.Z
    There are no errors about the bind daemon in /var/syslog. Is everything correct? Do I only need to wait up to 48 hours for proper DNS propagation? My nslookup from a remote machine using the bind host as its nameserver:
        $ nslookup mydomain.net
        Server:         bind-host-ip
        Address:        bind-host-ip#53

        Name:   mydomain.net
        Address: domain-ip
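
    A quick way to separate propagation delay from zone problems is to query the parent servers and the new nameserver directly; hedged examples (the gTLD server name is just one of the .net servers, picked for illustration):
        dig @a.gtld-servers.net mydomain.net NS     # does the .net zone delegate to ns1/ns2 yet?
        dig @ns1.mydomain.net mydomain.net A        # does the new nameserver answer authoritatively?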

    Read the article

  • How do I get basic ProxyPass to work on Apache 2.2.17?

    - by Ansis Malins
    I'm trying to get around the ERR_UNSAFE_PORT restriction in Chrome by making Apache reverse proxy other HTTP servers on the machine. I load mod_proxy with sudo a2enmod proxy, I add ProxyPass /znc/ http://localhost:6667/ to my httpd.conf, and I restart Apache with sudo /etc/init.d/apache2 restart. When I open up /znc/, I get 500 Internal Server Error. I added LogLevel debug, restarted Apache, tried again, and got nothing suspicious:
        [Fri Oct 19 18:55:17 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 0 in child 21528 for worker http://localhost:6667/
        [Fri Oct 19 18:55:17 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 0 in child 21528 for (localhost)
        [Fri Oct 19 18:55:17 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 1 in child 21528 for worker proxy:reverse
        [Fri Oct 19 18:55:17 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 1 in child 21528 for (*)
        [Fri Oct 19 18:55:17 2012] [notice] Apache/2.2.17 (Ubuntu) PHP/5.3.8 configured -- resuming normal operations
        [Fri Oct 19 18:55:17 2012] [info] Server built: Feb 14 2012 17:59:20
        [Fri Oct 19 18:55:17 2012] [debug] prefork.c(1018): AcceptMutex: sysvsem (default: sysvsem)
        [Fri Oct 19 18:55:22 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 0 in child 21532 for worker http://localhost:6667/
        [Fri Oct 19 18:55:22 2012] [debug] proxy_util.c(1837): proxy: worker http://localhost:6667/ already initialized
        [Fri Oct 19 18:55:22 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 0 in child 21532 for (localhost)
        [Fri Oct 19 18:55:22 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 1 in child 21532 for worker proxy:reverse
        [Fri Oct 19 18:55:22 2012] [debug] proxy_util.c(1837): proxy: worker proxy:reverse already initialized
        [Fri Oct 19 18:55:22 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 1 in child 21532 for (*)
    So I'm stumped at this point. What to do? I'm running Ubuntu Server 11.10. ZNC responds with a correct 200 OK and HTML when queried directly, both from the local machine and from the Internet.
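
    A common cause of a 500 on an otherwise minimal ProxyPass is having mod_proxy enabled without mod_proxy_http. A hedged sketch of the usual checks and the typical reverse-proxy pair follows, assuming the backend really does speak HTTP on port 6667:
        sudo a2enmod proxy proxy_http        # ProxyPass to an http:// backend needs both modules
        sudo /etc/init.d/apache2 restart

        # in httpd.conf or a vhost:
        ProxyPass        /znc/ http://localhost:6667/
        ProxyPassReverse /znc/ http://localhost:6667/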

    Read the article

  • NetBackup's bplist doesn't get user/group info for Windows files

    - by Gnustavo
    I'm trying to get information about storage consumption from NetBackup's bplist output. I'm running NBU 6.0MP5 on a RHEL 3 server. The server is backing up several Solaris, Linux, and Windows machines. When I use bplist to get information about files backed up on any UNIX machine, I get something like this:
        # bplist -C unixclient -R 99 -l -s 01/28/2006 -e 01/29/2006 /
        drwxr-xr-x test     ccase           0 Nov 16 09:28 /l/home2/test/
        -rw------- test     ccase        4737 Jan 06 17:54 /l/home2/test/.bash_history
        -rw-rw-r-- test     ccase         104 Nov 11  2004 /l/home2/test/.bashrc
    However, when I use it to list files backed up on any Windows client, I can't get the user and group information; they both always appear as 'root'. Like this:
        # bplist -C winclient -t 13 -R 99 -l -s 02/20/2006 /
        drwx------ root     root            0 Feb 20 14:26 /C/temp/
        -rwx------ root     root           41 Feb 20 14:26 /C/temp/asdf.txt
        drwx------ root     root            0 May 25  2004 /C/temp/CTRMNGR/
    Does anyone know why bplist doesn't show the correct user/group for Windows files? If it can't, is there a way to get that information using another command? Thanks. Gustavo.

    Read the article

  • Add single sign-on into existing web app

    - by EvilDr
    Apologies if this isn't the best site; I've searched for an answer but can't find anything quite right. I don't actually know the correct terminology I should be using here, so any pointers will be appreciated.
    I have a web application that is accessed by many different users across different organisations. Access is provided by each user having a unique username/password, which is stored in SQL (the database fields are customerID, userID, username). Some organisations are now asking if we can change this to allow "Active Directory single sign-on", so that users don't need to remember yet another set of login details. From research I can see how this is achieved using OpenAuth and Google (etc.), but I know hardly anything about AD and can't find much information on this (again, I'm sure it helps when you know the terminology).
    Is this request even possible to achieve, given that most users will be from different (and unrelated) organisations? I saw on a Microsoft Build video not long ago that there is some kind of replication service for AD to allow cloud authentication. Is this what I should be aiming for?

    Read the article

  • Word 2010, Multiple Columns, Vertical center one column only

    - by Nancy N Jones
    I am creating a document with two columns in Microsoft Word 2010. I want the first column to be centered vertically. I want the second column to be on the same page, with its vertical placement starting from the top. I highlight the text in the first column that I want centered vertically, then go to Page Layout > Margins > Custom Margins > Layout, where you can choose to center the vertical alignment. I have chosen the "Section start" to be "Column" and also tried "Continuous". In all cases it shifts all of my second-column information to a new page. I don't want my second-column text to be on a new page; I want it to be on the same page and vertically aligned from the top, not the center.
    Am I understanding the functionality of the "Section start" setting on the Layout tab correctly? Maybe page layout isn't the correct formatting to use. What I am really doing is formatting columns, and I haven't found anywhere to format the columns for this. Am I missing some important column formatting features? I know that I can use paragraph formatting and add space above the first line of text to make it look like it is centered vertically. However, this is a template for a master document and will be changed frequently, so I would really like the first-column text to be centered vertically automatically, without having to go in and manually change the space above the paragraph every time. Your assistance would be greatly appreciated.

    Read the article

  • Adventures in Drupal multisite config with mod_rewrite and clean urls

    - by moexu
    The university where I work is planning to offer Drupal hosting to staff/faculty who want a Drupal site. We've set up Drupal multisite with clean URLs and it's mostly working, except for some weird redirects. If you have two sites where one is a substring of the other, then you'll randomly be redirected to the other site. I tracked the problem to how mod_rewrite does path matching, so with a config file like this:
        RewriteCond %{REQUEST_URI} ^/drupal
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]
    /drupaltest will match the /drupal line, and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end-of-string character ($) at the end of each rewrite condition then it will always match the correct site and the links will always be rewritten correctly. That breaks down as soon as a user logs in, though, because the query string is appended to the URL, so the base URL alone will no longer match. You can also fix the problem by ordering the sites in the config file so that the smallest substring is always last. I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested, so that we could guarantee the order. The system administrator thought that was kludgy and didn't address the root problem. Disabling clean URLs should also fix the problem, but the users really want them, so I'd prefer to keep them if possible. I think we could also fix it by using an .htaccess file in each site to handle the clean URL rewriting, but that also seems suboptimal, since it would generate a higher load on the server and the server is intended to host the majority of the university's external-facing web content.
    Is there some magic I can do with mod_rewrite to get it to work? Would another solution be better? Am I doing something the wrong way to begin with?
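
    One regex-level way to stop /drupaltest matching the /drupal block, without hard-coding the site order or relying on a strict end-of-string anchor, is to require a path boundary after the site name; a sketch only, adapting the first block above:
        RewriteCond %{REQUEST_URI} ^/drupal(/|$)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]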

    Read the article

  • Is there a good alternative to Videora iPod Converter?

    - by Richard
    I use Videora to convert my videos (in DivX/XviD format) to something I can play on my iPod Classic. I really dislike it. It's clunky, riddled with adverts, sometimes doesn't convert properly (the infamous "invalid public atom" error - see Google for more) and has a UI that truly stinks. On the upside, it's free, accepts a list of video files (via the oddly hard to find "1-click convert" button) and just gets on with the converting, as it already knows the correct settings for my iPod. One final nice touch is that once they are converted, it'll automatically upload them into iTunes.
    Are there any alternatives which have all the upsides but none of the downsides? Bonus points if they can set the metadata in iTunes correctly for TV shows (season, show, episode) and delete the converted file afterwards (as my iTunes settings mean that a copy is made elsewhere). I've looked at a bunch of applications (HandBrake, VirtualDub, MediaCoder, Format Factory, Any Video Converter, ConvertXtoDVD) but many of them fail the "just select a list of files and get on with converting" test - let alone all the other features I want. I have no desire to individually set the video size of each file or the codec or the post-processing options.
    I'm currently using the command line version of HandBrake (handbrakecli) and a hand-written DOS batch file to go through every file in a folder and convert it. It does most of what I want, just not in a very slick way. Can anyone recommend anything better? It needs to work on Windows 7 and be free.

    Read the article

  • adding a custom user folder on Ubuntu

    - by Narcolapser
    Question: How do you add a custom folder to the collection of user folders that come with Ubuntu?
    Info: I just loaded my netbook with Ubuntu Desktop 10.04 LTS (Desktop because it is an Aspire One and the apocalypse seems to follow whenever I try to install Netbook Remix onto it). It comes with standard folders like Documents, Music, Pictures, Downloads (though this one doesn't appear until you actually download something), Videos, etc. These are handy little folders because they have little symbols on them and are nicely located in my file browser; it is basically like the folder layout that Windows had in Vista. I do a lot of little programming on this computer, so I have a folder, named "Code", in my home folder in which I keep all these single-kB code files. But I would really like it to be listed next to my other user folders.
    In summary: how do you add a folder to the listing in the file browser? And, if possible, how do you give it an icon? (I understand fully that I will probably have to make said icon.) Those two things are what I'm seeking to do. ~n
    P.S. Please correct me if I'm using the wrong name. I just guessed and called them "user folders" because they are folders the user uses; made sense. But if they have another name, like "libraries", please say so. Thanks

    Read the article

  • Tomcat Solr times out

    - by user568458
    (Plesk 10.4, CentOS 5.8 Linux, Apache 2 server, with Tomcat 5 on port 8080 and Apache Solr.) I get "The connection has timed out" when requesting domain.com:8080, www.domain.com:8080, or ip.ad.dr.ess:8080.
    Every reason I can find why this might be seems not to be the case: Plesk thinks Tomcat is running fine and lists it as an active service. The firewall currently has an accept-all rule on port 8080. There's nothing relevant in the Catalina Tomcat logs (/var/log/tomcat5) - just some stuff from the last time Tomcat was started. There's no record at all of the requests that fail. netstat -lnp | grep 8080 gives the following, which I believe means Tomcat is listening for requests to port 8080 on all IP addresses, from any IP and any port (please correct me if I'm wrong):
        tcp        0      0 0.0.0.0:8080    0.0.0.0:*    LISTEN    4018/java
    This covers every cause of this timeout that I can find - so I must be missing something fundamental. It seems Tomcat is running, listening on the right port, getting an appropriate IP address, not obstructed by a firewall, and not failing after receiving a request in a way that would be recorded in the logs (so I believe it can't be out of memory, or anything like that). I'm all out of ideas on how to continue debugging this. I must have overlooked something obvious. Can anyone help?
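
    Two quick checks that help localize this kind of timeout (hedged, generic commands, nothing Plesk-specific):
        curl -v http://127.0.0.1:8080/        # run on the server itself: does Tomcat answer locally?
        iptables -L -n -v | grep 8080         # is any rule actually matching or dropping port 8080 traffic?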

    Read the article

  • can't ssh from mac to windows (running ssh server on cygwin)

    - by Denise
    I set up an ssh server on a fresh Windows 7 machine using the latest version of Cygwin, and disabled the firewall. I can ssh into it from itself, from a different Windows box (using winssh), and from a Linux VM. In spite of that, I tried to ssh in from two different Macs, and neither would let me! This is the debug output:
        OpenSSH_5.1p1, OpenSSL 0.9.7l 28 Sep 2006
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to 3dbuild [172.18.4.219] port 22.
        debug1: Connection established.
        debug1: identity file /Users/Denise/.ssh/identity type -1
        debug1: identity file /Users/Denise/.ssh/id_rsa type 1
        debug1: identity file /Users/Denise/.ssh/id_dsa type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.5
        debug1: match: OpenSSH_5.5 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-cbc hmac-md5 none
        debug1: kex: client->server aes128-cbc hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '3dbuild' is known and matches the RSA host key.
        debug1: Found key in /Users/Denise/.ssh/known_hosts:43
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Trying private key: /Users/Denise/.ssh/identity
        debug1: Offering public key: /Users/Denise/.ssh/id_rsa
        Connection closed by [ip]
    It shows the same output, and fails at the same place, whether or not I have put my public key on the ssh server. Any help would be appreciated -- hopefully someone has run into this before?

    Read the article

  • How do you change a Cisco ASA 5510 management interface?

    - by Sam Sanders
    I want to add a redundant interface to my Cisco ASA 5510. The management interface is currently using Ethernet0/1 (10.1.25.254/24), one of the interfaces I want to use for the redundant interface, so I wanted to set up Management0/0 as the new management interface. The other interface I want to use for the redundant interface is Ethernet0/2 (10.1.0.254/24). The Ethernet0/3 (10.1.251.5/24) interface is not going to be part of the redundant interface.
    I gave Management0/0 an IP address of 10.1.254.5 and was able to connect a Win7 box to Management0/0, use 10.1.254.5 as its gateway, and ping another address on the 10.1.251.0/24 network, but I can't ping the interface (10.1.254.5) itself. I also can't use ASDM/SSH to log onto the ASA at 10.1.254.5. I set up rules in Configuration > Device Management > Management Access > ASDM/HTTPS/Telnet/SSH that look like the original rules for the Ethernet0/1 interface.
    The last thing I can think to try would be to change Configuration > Device Management > Management Access > Management Interface. I'm a bit nervous about changing it; the description of it is a bit vague. What is it going to do if I change it? What is the correct way to change a management interface?
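
    For comparison, the CLI equivalent of a dedicated management interface usually looks something like the sketch below (interface name aside, the nameif, masks, and permitted networks here are illustrative and not taken from the ASDM screens described above):
        interface Management0/0
         nameif mgmt
         security-level 100
         ip address 10.1.254.5 255.255.255.0
         management-only
        !
        http server enable
        http 10.1.0.0 255.255.0.0 mgmt
        ssh 10.1.0.0 255.255.0.0 mgmt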

    Read the article
