Search Results

Search found 6942 results on 278 pages for 'enabled'.


  • Find router's IP address on the other side

    - by corsiKa
    Here's the basic setup of my network. In this diagram:

        1: The internet
        c: cable
        2: Wireless router
        w: wireless connection
        3: A Win7 box with Internet Connection Sharing enabled
        4: A wireless router, but I'm only using its LAN capabilities to connect box 5 to the internet
        5: A Win7 box, the computer I'm using to make this post

    So its internet works just fine. Now, if I'm on box 5 and I ping 192.168.1.1, I hit 4. If I'm on box 3 and I ping 192.168.1.1, I hit 2. Obviously box 3 does not think 4's IP address is 192.168.1.1, or I wouldn't be able to connect to the internet. Okay, now that you know as much as I do about my network, here's my question: if I were on box 3, how would I determine the IP address of 4? Basically, I'm running a webserver on box 5 and want to access this webserver from box 3 and other boxes. So that's the end goal. If there's other information that would help, I'd appreciate it. Thanks!
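
    One way to probe this from box 3, sketched on the assumption that box 3's ICS interface uses the Win7 ICS defaults (a 192.168.137.x subnet) and that box 4's WAN port got its address from ICS via DHCP; the candidate address below is hypothetical:

        ipconfig               # note the subnet of the ICS adapter on box 3
        arp -a                 # list neighbors ICS has seen; one entry is box 4's WAN side
        ping 192.168.137.50    # hypothetical candidate from the arp table; see if it answers

    If one of the arp entries serves the router's admin page on port 80, that is box 4's address to use when forwarding ports through to the webserver on box 5.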

  • Configuring MySQL for Power Failure

    - by Farrukh Arshad
    I have absolutely no experience with databases and MySQL. The situation is that I have an embedded device running a MySQL database with a web-based application. The problem is that shutting down my embedded device simply cuts the power; I cannot have a controlled shutdown. Given this situation, how can I configure MySQL to protect it from failures, and in case of a failure, have the maximum possible support for recovering my database? While searching this, I came across the InnoDB engine as well as some configuration options to set, like sync_binlog=1 and innodb_flush_log_at_trx_commit=1. I have noticed my default engine is InnoDB and binary logs are also enabled. What other configuration should I make for the best possible failure and recovery support? Updated: I will be using the InnoDB engine, which supports transactions. My question is how best I can configure it (InnoDB + MySQL) so that it provides the best possible fail-safe as well as crash recovery mechanism. One configuration option I came across is to enable binary logging, which InnoDB uses at recovery time. Regards, Farrukh Arshad
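
    A minimal sketch of the durability-related settings mentioned above, as they would appear in my.cnf (the values are a conservative starting point, not a tuned configuration):

        [mysqld]
        # flush and sync the InnoDB redo log on every transaction commit
        innodb_flush_log_at_trx_commit = 1
        # sync the binary log to disk on every commit
        sync_binlog = 1
        # keep the doublewrite buffer on (the default) to survive torn page writes
        innodb_doublewrite = 1

    The first two trade write throughput for the guarantee that committed transactions survive a power cut, which is usually the right trade on an embedded device.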

  • Wake a Mac display from sleep via SSH

    - by MaxGabriel
    I'm using Jenkins as a CI server, where I'm SSHing into an iMac running OS X Mountain Lion (10.8.4) to run some UIAutomation integration tests on an iOS app. The iMac actually sits 10 ft from me (but across a table), so I'm able to see the screen. However, the tests don't wake up the display, so I often can't see them. Is there a way to wake up the display from the terminal once Jenkins has SSHed in? So far I have tried using AppleScript to press an arrow key, and using the Wake Assist application. I also tried setting the wake schedule to the current date. Finally, I tried the caffeinate command: caffeinate -t 300 &. The computer's "Wake for Wi-Fi access" checkbox is enabled. So far my best workaround is to just set the iMac to stay awake for at least 3 hours. However, it'd be nice to keep normal sleep behavior, as I hypothesize that the screen waking from sleep would alert me visually that the integration tests are running. It's also significantly cooler :)
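
    One variant worth trying over the SSH session: caffeinate's -u flag asserts user activity, which (per its man page on 10.8) turns the display on, unlike the plain -t invocation above. A minimal sketch:

        # declare the user active for 30 seconds; the display should wake
        caffeinate -u -t 30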

  • My yum repository is able to search packages, but not able to install them in RHEL?

    - by mandy
    I set up yum from DVD. Following are the contents of my .repo file:

        [dvd]
        name=Red Hat Enterprise Linux Installation DVD
        baseurl=file:///media/dvd
        enabled=0

    I'm able to search packages. However, during installation I'm getting the error below:

        [root@localhost dvd]# yum install libstdc++.x86_64
        Loaded plugins: rhnplugin, security
        This system is not registered with RHN.
        RHN support will be disabled.
        Setting up Install Process
        Nothing to do

    My yum search output:

        [root@localhost dvd]# yum search gcc
        Loaded plugins: rhnplugin, security
        This system is not registered with RHN.
        RHN support will be disabled.
        ================= Matched: gcc =================
        compat-libgcc-296.i386 : Compatibility 2.96-RH libgcc library
        compat-libstdc++-296.i386 : Compatibility 2.96-RH standard C++ libraries
        compat-libstdc++-33.i386 : Compatibility standard C++ libraries
        compat-libstdc++-33.x86_64 : Compatibility standard C++ libraries
        cpp.x86_64 : The C Preprocessor.
        libgcc.i386 : GCC version 4.1 shared support library
        libgcc.x86_64 : GCC version 4.1 shared support library
        libgcj.i386 : Java runtime library for gcc
        libgcj.x86_64 : Java runtime library for gcc
        libstdc++.i386 : GNU Standard C++ Library
        libstdc++.x86_64 : GNU Standard C++ Library
        libtermcap.i386 : A basic system library for accessing the termcap database.
        libtermcap.x86_64 : A basic system library for accessing the termcap database.

    Please guide me on this; I want to install gcc on my RHEL.
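
    Note that enabled=0 marks the repository as disabled by default, which would explain the install finding nothing to do; a sketch of the two usual remedies, assuming the repo id is dvd as above:

        # either enable the repo permanently in the .repo file:
        #     enabled=1
        # or enable it only for this one transaction:
        yum --enablerepo=dvd install gcc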

  • How can I use HAproxy with SSL and get X-Forwarded-For headers AND tell PHP that SSL is in use?

    - by Josh
    I have the following setup:

        (internet) ---> [ pfSense box    ]    /--> [ Apache / PHP server ]
                        [running HAproxy] ---+---> [ Apache / PHP server ]
                                              +--> [ Apache / PHP server ]
                                               \-> [ Apache / PHP server ]

    For HTTP requests this works great; requests are distributed to my Apache servers just fine. For SSL requests, I had HAproxy distributing the requests using TCP load balancing, and it worked; however, since HAproxy wasn't acting as an HTTP proxy, it didn't add the X-Forwarded-For HTTP header, and the Apache / PHP servers didn't know the client's real IP address. So I added stunnel in front of HAproxy, having read that stunnel could add the X-Forwarded-For HTTP header. However, the package I could install into pfSense does not add this header. It also apparently kills my ability to use KeepAlive requests, which I would really like to keep. But the biggest issue, which killed the idea, was that stunnel converted the HTTPS requests into plain HTTP requests, so PHP didn't know that SSL was enabled and tried to redirect to the SSL site. How can I use HAproxy to load balance across a number of SSL servers, allowing those servers to both know the client's IP address and know that SSL is in use? And if possible, how can I do it on my pfSense server? Or should I drop all this and just use nginx?
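
    Newer HAProxy releases (1.5 and later) can terminate SSL themselves, which sidesteps stunnel entirely; a sketch assuming such a version is available on the pfSense box and that the certificate bundle lives at a hypothetical /etc/haproxy/site.pem:

        frontend https-in
            bind *:443 ssl crt /etc/haproxy/site.pem
            option forwardfor                                # adds X-Forwarded-For
            http-request set-header X-Forwarded-Proto https  # lets PHP detect SSL
            default_backend web

        backend web
            balance roundrobin
            server web1 10.0.0.11:80 check
            server web2 10.0.0.12:80 check

    On the PHP side the application would then check $_SERVER['HTTP_X_FORWARDED_PROTO'] rather than $_SERVER['HTTPS'] when deciding whether to redirect.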

  • Configuration issue with respect to the .htaccess file on Ubuntu

    - by Registered User
    I am building an application, tshirtshop. I have the following configuration in /etc/apache2/sites-enabled/tshirtshop:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/tshirtshop
            <Directory /var/www/tshirtshop>
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    and the following in the .htaccess file at /var/www/tshirtshop/.htaccess:

        <IfModule mod_rewrite.c>
            # Enable mod_rewrite
            RewriteEngine On
            # Specify the folder in which the application resides.
            # Use / if the application is in the root.
            RewriteBase /tshirtshop
            #RewriteBase /
            # Rewrite to correct domain to avoid canonicalization problems
            # RewriteCond %{HTTP_HOST} !^www\.example\.com
            # RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
            # Rewrite URLs ending in /index.php or /index.html to /
            RewriteCond %{THE_REQUEST} ^GET\ .*/index\.(php|html?)\ HTTP
            RewriteRule ^(.*)index\.(php|html?)$ $1 [R=301,L]
            # Rewrite category pages
            RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2&Page=$3 [L]
            RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2 [L]
            # Rewrite department pages
            RewriteRule ^.*-d([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&Page=$2 [L]
            RewriteRule ^.*-d([0-9]+)/?$ index.php?DepartmentId=$1 [L]
            # Rewrite subpages of the home page
            RewriteRule ^page-([0-9]+)/?$ index.php?Page=$1 [L]
            # Rewrite product details pages
            RewriteRule ^.*-p([0-9]+)/?$ index.php?ProductId=$1 [L]
        </IfModule>

    The site is working on localhost, but it behaves as if no .htaccess rule were specified: if I view a page as http://localhost/tshirtshop/nature-d2 I get a 404 error, but if I view the same page as http://localhost/tshirtshop/index.php?DepartmentId=2 then I can view it. What is the mistake, if anyone can point it out in the above configuration, or is there anything else I need to check?

        sudo apache2ctl -M
        Loaded Modules:
         core_module (static)
         log_config_module (static)
         logio_module (static)
         mpm_prefork_module (static)
         http_module (static)
         so_module (static)
         alias_module (shared)
         auth_basic_module (shared)
         authn_file_module (shared)
         authz_default_module (shared)
         authz_groupfile_module (shared)
         authz_host_module (shared)
         authz_user_module (shared)
         autoindex_module (shared)
         cgi_module (shared)
         deflate_module (shared)
         dir_module (shared)
         env_module (shared)
         mime_module (shared)
         negotiation_module (shared)
         php5_module (shared)
         reqtimeout_module (shared)
         rewrite_module (shared)
         setenvif_module (shared)
         status_module (shared)
        Syntax OK

    I am using Apache2 on Ubuntu 12.04.
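
    Since this is Apache 2.2, one way to see whether the rules are consulted at all is the 2.2-era rewrite log; a sketch of two directives to drop inside the <VirtualHost> block while debugging (they were removed in Apache 2.4):

        # log every rewrite decision; remove once the problem is found
        RewriteLog ${APACHE_LOG_DIR}/rewrite.log
        RewriteLogLevel 3

    If the log stays empty for a request to /tshirtshop/nature-d2, Apache never consulted the .htaccess file at all, which points at AllowOverride scope or file permissions rather than at the rules themselves.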

  • Why won't Media Monkey add one particular folder of mp3's?

    - by ChrisF
    I'm using the latest and greatest version of Media Monkey (free version) and it won't find the MP3s in one particular folder in my music tree. It can see all the other files in the tree, and the folder shows up when I click Add/Rescan files to the library... I have full control over the folder and all the files in it. The files play in Windows Media Player. The files play in Media Monkey if I right-click and play from the context menu. All the tracks are at least 2 minutes long and over 5 MB in size, and Media Monkey is set to ignore files shorter than 20 KB and include all files regardless of length. There was an issue in that the genre of the tracks was set to "Classical", and the option that allows you to browse classical music independently of the other music isn't enabled in the free version; it's a Gold version option only. I hadn't spotted that my other classical music was also missing from the library (I have rather a large library). Once I retagged the music with a different genre and tried to add the files again, it reported that it had added the tracks, but they still didn't show up in the library.

  • VMware Workstation "Power off" and "Suspend" buttons are disabled. What is going on?

    - by Ed Norris
    I've been using an XP image for a few months now with no problems. Recently the power off and suspend buttons became disabled, and I'm not sure what happened to cause that. In addition, the menu items for those functions are grayed out (like VM | Power | Power Off). I'm using VMware Workstation 6.5.3 on a Windows 7 64-bit host. The image is Windows XP 32-bit. There is plenty of free space and memory on the host, and the CPU is not pegged. I am able to power off the image through its Start menu, but that's a workaround, not a fix. Any suggestions? TIA. EDIT: Well, after manually shutting down the image, then closing and reopening the image file (which I've done before) in preparation for reinstalling the tools as suggested below, it looks fine. The power off and suspend buttons are enabled and work. So what do I do with this question now? "Close and restart a few times and it may work" doesn't seem useful...

  • IP to IP forwarding with iptables [centos]

    - by FunkyChicken
    I have 2 servers: server 1 with IP 1.1.1.1 and server 2 with IP 2.2.2.2. My domain example.com points to 1.1.1.1 at the moment, but very soon I'm going to switch to IP 2.2.2.2. I have already set up a low TTL for domain example.com, but some people will still hit the old IP for a while after I change the IP address of the domain. Both machines run CentOS 5.8 with iptables, and nginx as a webserver. I want to forward all traffic that still hits server 1.1.1.1 to 2.2.2.2 so there won't be any downtime. I found this tutorial: http://www.debuntu.org/how-to-redirecting-network-traffic-a-new-ip-using-iptables but I cannot seem to get it working. I have enabled IP forwarding:

        echo "1" > /proc/sys/net/ipv4/ip_forward

    After that I ran these 2 commands:

        /sbin/iptables -t nat -A PREROUTING -s 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80
        /sbin/iptables -t nat -A POSTROUTING -j MASQUERADE

    But when I load http://1.1.1.1 in my browser, I still get the pages hosted on 1.1.1.1 and not the content from 2.2.2.2. What am I doing wrong?
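
    One thing stands out in the PREROUTING rule above: -s matches the packet's source address, but traffic arriving at this box carries 1.1.1.1 as its destination. A sketch of the destination-matched variant (not verified against this exact setup, but this is the usual form for this kind of redirect):

        /sbin/iptables -t nat -A PREROUTING -d 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80
        /sbin/iptables -t nat -A POSTROUTING -d 2.2.2.2 -p tcp --dport 80 -j MASQUERADE

    The narrowed MASQUERADE rule makes the reply traffic from 2.2.2.2 flow back through server 1, so clients see responses from the address they contacted.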

  • Can't access shared folder of win8/win7 machine - Error code: 0x80004005. Unspecified Error

    - by ruslan
    It's ironic that I, a software engineer with 12 years of experience, continue to have this problem from one version of Windows to another without being able to achieve consistent results (sometimes it works). Here it goes again. I have a machine with Win8 Consumer Preview. It doesn't really matter that it's Win8; I had the same issue with Win7 before. On the given machine I created a local admin user with the same name and password I have on the second PC (the machine I'm typing this from now). I have two questions for you guys:

        1. Why am I not able to access the C$ share of the Win8 machine from another Win7 machine? I get an error that C$ doesn't exist even though it does.
        2. Why am I not able to access a share named "test" on Win8 for which the permission is set to Full for Everyone? When I attempt to access it from the Win7 machine I'm asked to enter a username and password. After entering administrator credentials I get the error "Windows cannot access \\192.168.1.123\test. Error code: 0x80004005. Unspecified Error".

    Windows Firewall is disabled on the Win8 machine for both Private and Public networks. The Guest account is disabled. The built-in admin account is enabled. The machine is pingable from other machines.
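
    For administrative shares like C$ on a machine that is not domain-joined, UAC's remote restrictions filter the administrator token, which produces exactly this "share does not exist" behavior. A sketch of the commonly cited registry override, run on the Win8 machine (assumption: the machines are in a workgroup, not a domain):

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" ^
            /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f

    A reboot (or at least a logoff) is typically needed before retrying \\192.168.1.123\C$.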

  • Gigabyte H55N-USB3: No video on HDMI

    - by newt
    I built a new PC with a Gigabyte H55N-USB3 / Intel Core i5 650. With a monitor plugged into the DVI port, everything works fine. I installed Windows 7 32-bit and enabled Remote Desktop connections. After that, I unplugged the monitor, plugged the machine into the network, and installed everything else (drivers, programs, etc.) via RDP. However, when I try to use the HDMI port on my TV, nothing appears, neither during boot nor after Windows starts. The TV says there's "no signal" (if I remove the cable, the message changes to "check cable"). The cable is new, and it works fine with my home theater on the same TV (by the way, it is the cable that came bundled with the home theater). The video driver is the latest from Intel's site. Anyway, this shouldn't be the problem, since there is no image during boot either. Any ideas or tips would be welcome. I'm googling around but have found nothing useful yet.

  • WAMP Installation - Multiple PHP Version Issues

    - by Pete171
    I have installed WAMP because I am attempting to modify an application which uses Zend Optimizer (I cannot use Z.O. with PHP 5.3+, which is why I decided to install WAMP). I downloaded the latest version, Wamp Server 2.1, which comes bundled with PHP 5.3.5. I then downloaded a PHP version that would be compatible with Z.O., 5.2.9, and a compatible Apache version, 2.0.63. My problem: PHP scripts run fine, but anything with MySQL does not work. Running the testmysql.php script returns a fatal error:

        Fatal error: Call to undefined function mysql_connect() in C:\wamp\www\testmysql.php on line 2

    I have looked in the php.ini files inside both PHP versions, and I'm fairly sure the relevant information is there. At least, there are parts inside that mention MySQL! Perhaps somebody could clarify exactly what information should be present? Also, when visiting a page that called phpinfo(), I noticed that the "Loaded Configuration File" was pointing to C:\wamp\bin\php\php5.3.5\php.ini, even though I have enabled the older PHP version. I've stopped and started Apache too, and that hasn't made a difference. Is anybody able to offer any assistance? Anything at all would be great; I'm not very good at messing around with Apache.
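
    An undefined mysql_connect() usually means the MySQL extension never loaded in whichever PHP build Apache is actually running. A sketch of the lines to check in the 5.2.9 php.ini (paths are illustrative guesses for this WAMP layout):

        ; point the extension directory at the 5.2.9 ext folder
        extension_dir = "C:\wamp\bin\php\php5.2.9\ext"
        ; make sure the MySQL extension line is uncommented
        extension=php_mysql.dll

    Since phpinfo() reports the 5.3.5 php.ini being loaded, it may also be worth adding PHPIniDir "C:/wamp/bin/php/php5.2.9" to the Apache 2.0.63 httpd.conf (a mod_php directive) so Apache reads the intended file.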

  • How do I change the default FTP folder in Mac OS X 10.6?

    - by Wild_Eep
    I'm running WordPress 2.9.1 on a Mac running 10.6.3. WordPress is installed to the /Library/WebServer/Documents folder. WordPress has a feature called AutoUpdate: clicking an autoupdate button will download and install updated versions of the WordPress software, or third-party plugin tools. It's a convenient way to keep things up to date. WordPress uses FTP to download the files. I've enabled FTP, set up a user account, and opened the requisite ports in my firewall for FTP traffic. This doesn't seem to be enough for my self-hosted installation, though. I'm sure this feature was originally designed for someone who has access to a remote shared webserver, and that it's merely a configuration challenge related to the FTP setup. I feel that if I can adjust the initial directory that the FTP service presents to the AutoUpdate feature, everything else will work properly. So, my question is: how do I adjust what folder is presented when a given user connects to a Mac running 10.6.3 via FTP?
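
    Mac OS X's bundled ftpd drops a user into that account's home directory on login, so one low-friction approach is to point the FTP user's home at the WordPress folder. A sketch using dscl, assuming the FTP account is named wpftp (substitute the real account name):

        # point the account's home directory at the WordPress install
        sudo dscl . -create /Users/wpftp NFSHomeDirectory /Library/WebServer/Documents

    WordPress's AutoUpdate would then land in the right place without any ftpd configuration changes.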

  • Having two IP Routes/Gateways of last Resort on an HP Switch

    - by SteadH
    We have an HP layer 3 switch that is doing IP routing between VLANs. The general setup is that the switch has an IP address on each VLAN and IP routing is enabled. On our servers VLAN, we have a firewall that has a connection to the outside world. To set an IP route on the HP router, we use the IOS command

        ip route 0.0.0.0 0.0.0.0 192.168.2.1

    where 192.168.2.1 is the address of our firewall, and the zeros essentially mean to route all traffic that the switch doesn't know what to do with out through the firewall as a gateway. We're in the middle of an ISP and firewall change. I set up the new firewall and ran the command

        ip route 0.0.0.0 0.0.0.0 192.168.2.254

    (the address of the new firewall). Things started working nicely. When I reviewed the configuration of the switch, though, I noticed that it did not replace the previous ip route command, but just added another route. Now, I know how to remove the old firewall route (no ip route 0.0.0.0 0.0.0.0 192.168.2.1), but what is the effect of having these two 0.0.0.0 routes? Is it switch implosion? Will a server just respond back over the route it receives the request from? I've read elsewhere that having two default gateways is an impossibility by definition, but I'm curious about the situation our switch allowed. Thanks!
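
    A sketch of the cleanup and a quick check from the switch CLI, using the same command syntax as above (exact show output varies by HP model):

        no ip route 0.0.0.0 0.0.0.0 192.168.2.1    # drop the old default route
        show ip route                              # confirm only the 192.168.2.254 default remains
        write memory                               # persist the change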

  • Migrating WebLogic 10.3.0 to new host. Slow managed server startup times

    - by wadevondoom
    We are migrating our Blue Martini Commerce application (only supported on WebLogic 10.3.0) to a new host (Red Hat 6.3 on a VMware ESX VM). We are seeing extremely slow startup times for our managed servers, basically 20x slower than our current production. For instance, the Publish managed server takes ~30 to 45 seconds in current production, while in the new environment it takes ~10 minutes. The setup uses the same domain structure and JVM as the current production environment. The same setup files are used. We use jdk1.6.0_33 on a 64-bit architecture. We used the generic 64-bit WebLogic installer and the pack/unpack utilities to migrate the domain. The JAVA_OPTS to start this server are:

        -d64 -Xms256m -Xmx512m -XX:PermSize=48m -XX:MaxPermSize=256m

    The sysadmins have checked /etc/sysctl.conf and /etc/limits.conf to ensure we were not hitting some kind of process limit. As I am not sure what this managed server does from a Blue Martini perspective during startup, I also had the DBA check to ensure that Oracle RAC (11.2.0.3) wasn't hitting some kind of process limit, or whether there was a TNS listener issue. The new host is quite a bit stricter with its server lockdowns, so there are a few differences:

        Red Hat 6.3 in the new environment, RHEL 5.7 in current
        SELinux is targeted in the new environment and disabled in current
        VM in the new environment, dedicated hardware in current
        iptables disabled in current; it was enabled in the new environment, but I had them disable it just in case

    I apologize for not being more specific. I am mostly hoping for some tips; I do not have the typical root access I would normally have in this environment. I am just hoping for a path forward. I did a few kill -3s to see if there are blocked threads and got nada. The service works for all intents and purposes; it is just painfully slow. Thank you all in advance for reading, and best regards. Wade
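
    One frequent cause of exactly this symptom on freshly virtualized Red Hat hosts is Java's SecureRandom blocking on /dev/random, which starves for entropy on VMs. A sketch of the usual workaround (an assumption worth testing, not a confirmed diagnosis for Blue Martini):

        # append to the managed server's JAVA_OPTS
        JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"

    If startup collapses back to the old 30 to 45 seconds, entropy starvation was the culprit; if not, the thread dumps from kill -3 remain the best lead.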

  • Password-less login into localhost

    - by Brad
    I am trying to set up password-less login into my localhost because it's required for a tutorial. I went through the normal steps of generating an RSA key and appending the public key to authorized_keys, but I am still prompted for a password. I've also enabled RSAAuthentication and PubkeyAuthentication in /etc/ssh_config. Following other suggestions I've seen, I tried:

        chmod go-w ~/
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

    But the problem persists. Here is the output from ssh -v localhost:

        (tutorial)bnels21-2:tutorial bnels21$ ssh -v localhost
        OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: Connecting to localhost [::1] port 22.
        debug1: Connection established.
        debug1: identity file /Users/bnels21/.ssh/id_rsa type 1
        debug1: identity file /Users/bnels21/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/bnels21/.ssh/id_dsa type -1
        debug1: identity file /Users/bnels21/.ssh/id_dsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9
        debug1: match: OpenSSH_5.9 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.9
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA 1c:31:0e:56:93:45:dc:f0:77:6c:bd:90:27:3b:c6:43
        debug1: Host 'localhost' is known and matches the RSA host key.
        debug1: Found key in /Users/bnels21/.ssh/known_hosts:11
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/bnels21/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Offering RSA public key: id_rsa3
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Trying private key: /Users/bnels21/.ssh/id_dsa
        debug1: Next authentication method: keyboard-interactive
        Password:

    Any suggestions? I'm running OS X 10.8.
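
    One detail that jumps out: /etc/ssh_config configures the SSH client; on OS X the server reads /etc/sshd_config, and that is where PubkeyAuthentication has to be enabled. A from-scratch retry, sketched under that assumption:

        # server side: confirm these lines in /etc/sshd_config (not ssh_config),
        # then toggle Remote Login off and on in System Preferences > Sharing:
        #     RSAAuthentication yes
        #     PubkeyAuthentication yes

        # client side: regenerate and install the key, then test
        ssh-keygen -t rsa
        cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
        chmod 600 ~/.ssh/authorized_keys
        ssh localhost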

  • Can't run utilities/.exe's that use the network from a [DFS] Windows share on Windows 2008 servers. Can this be overcome?

    - by Jim Lawhon
    Under Windows Server 2008 I'm unable to run many utilities that use network resources. This works just fine under Windows Server 2003. For example:

        \\domain\dfs\tools$\bin\sendmail.exe ...
        \\domain\dfs\tools$\bin\psexec.exe ...
        echo %_metric% %_value% %_unixtime% | \\domain\dfs\bin\foo$\nc graphite.domain 2003 -w1

    Reproducing and maintaining this folder on a large number of servers/VMs is not desirable. Is there a way to allow Windows Server 2008 to run these tools? If so, can this be enabled via GPO or in a fashion that can be scripted during automated builds?

    Edit: The commands/tools do work just fine when run from local drives.

    Edit 2: Wget example:

        d:\scripts\helpers>z:\bin\wget http://www.google.com
        SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
        syswgetrc = z:/etc/wgetrc
        --2011-04-11 00:32:15--  http://www.google.com/
        Resolving www.google.com... failed: Host not found.
        z:\bin\wget: unable to resolve host address `www.google.com'

    wget can neither use DNS to resolve the IP nor use HTTP if provided an IP directly.

    Edit 3: The problem seems to be tied to DFS/DFS shares. Tools run correctly from other normal Windows Server file shares. They also run correctly when run directly from the file servers behind the DFS. They only fail when we attempt to run them from the DFS UNC path or mapped drives.
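
    Before digging further into DFS itself, it may be worth ruling out the config file wget reports picking up from the share (the syswgetrc = z:/etc/wgetrc line above), since a proxy or DNS setting in there would only take effect when running the copy on z:. A small diagnostic sketch:

        rem inspect the wgetrc that the share copy loads
        type z:\etc\wgetrc
        rem retry while bypassing any proxy directives it may contain
        z:\bin\wget --no-proxy http://www.google.com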

  • Enable BitLocker and save key to share

    - by user273694
    I have searched all over the web but cannot find a complete answer to this: how to enable BitLocker on a laptop with TPM, and store a file with the BitLocker recovery key and TPM password, BY USING THE manage-bde command-line tool. The file should be the same as the one created by the BitLocker manager UI. I DO NOT want to save to AD. The same question was asked here but was not answered correctly. The goal is to write a script to be used with an endpoint manager. I have tried the following:

        manage-bde -on C:

    This works fine, but does not create or save a key.

        manage-bde -on C: -rk C:\myfolder\
        manage-bde -on C: -RecoveryKey C:\myfolder\ -rp

    The output from the last two methods states that a key has been saved to C:\myfolder and so on, but that is not the case. It also says that I have to:

        1. Save the password in a secure location.
        2. Insert a USB flash drive with an external key file into the computer.
        3. Restart and run the hardware test.
        4. Type "manage-bde -status" to check if the hardware test succeeded.

    After a restart, I get an error saying that BitLocker could not be enabled because the BitLocker startup key or recovery password could not be found on the USB device... C: was not encrypted. Why am I asked to insert a USB drive?? I simply want to encrypt the hard drive and save the recovery information to a file automatically. Is that too much to ask? Help please!
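
    A sketch of an alternative ordering that tends to avoid the USB prompt: add the TPM and recovery-password protectors first, capture them to a file, and only then turn encryption on (flag spellings per manage-bde's built-in help; the UNC path is a placeholder, and the flags are worth verifying on the target OS build):

        manage-bde -protectors -add C: -tpm
        manage-bde -protectors -add C: -recoverypassword
        manage-bde -protectors -get C: > \\server\share\%COMPUTERNAME%-bitlocker.txt
        manage-bde -on C: -skiphardwaretest

    Because a recovery password protector already exists before -on runs, no startup key on a USB stick should be required.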

  • Sync Local ICS File with Android via Exchange/Outlook

    - by sinDizzy
    At my company we have a 3rd party app which tracks off-hours duty for all of our engineers. The app is not web-enabled and we cannot make any changes to it. It does write a simple text file, and I have created an app that translates that to an ICS file. My goal is to have that appear on my calendar on my Android phone. Here is the path I am working on:

        DutyApp -> TextFile -> ICSFile -> Outlook (Exchange) -> Android (via Exchange sync)

    My problems: If I place the ICS file on our FILE server and then in Outlook go to Calendar > Open Calendar > From Internet, it shows up in Outlook and looks pretty good. After a couple minutes it shows up on my Android phone as well. If I change the original ICS file, those changes never display in Outlook and never sync to my Android phone. This seems to be a one shot deal, almost like an import. Now if I place the ICS file on our WEB server and then in Outlook go to Calendar > Open Calendar > From Internet and use webcal:// as the address, it shows up in Outlook and also looks pretty good. Any changes I make to the original ICS file display in Outlook. However, the entire calendar never shows up in Android. This calendar is a subscription, and it seems, although I am not sure, that Android doesn't display Exchange subscription calendars. Yes I know it works with Gmail subscription calendars, but this is Exchange. So my question is: what other options are there? We are behind a firewall so I can't link the ICS file to a Gmail account. I can't put the ICS file anywhere else other than our file or web server.

  • Using AWStats, cannot get MaxNbOfExtraX to limit rows in Extra Report

    - by user137519
    Folks, I've got something really odd here I'd like to resolve. I've been using AWStats and have a couple of extra reports. I cannot get any of them to limit the rows using MaxNbOfExtraX. Here are two examples:

        ExtraSectionName1="Top 100 Searches"
        ExtraSectionCodeFilter1="200 304"
        ExtraSectionCondition1="URL,/search/search_post.php"
        ExtraSectionFirstColumnTitle1="Search Parameters"
        ExtraSectionFirstColumnValues1="QUERY_STRING,(.*)"
        ExtraSectionFirstColumnFormat1="QueryParameters: %s"
        ExtraSectionStatTypes1=HL
        ExtraSectionAddAverageRow1=0
        ExtraSectionAddSumRow1=1
        MaxNbOfExtra1=100
        MinHitExtra1=4

        ExtraSectionName2="Top 100 Downloads"
        ExtraSectionCodeFilter2="200 304"
        ExtraSectionCondition2="URL,/filedownload.php"
        ExtraSectionFirstColumnTitle2="File Downloads"
        ExtraSectionFirstColumnValues2="QUERY_STRING,(.[0-9]{5})(h|p)?."
        ExtraSectionFirstColumnFormat2="File ID: %s"
        ExtraSectionStatTypes2=HL
        ExtraSectionAddAverageRow2=0
        ExtraSectionAddSumRow2=1
        MaxNbOfExtra2=100
        MinHitExtra2=3

    According to all the documentation I've read, MaxNbOfExtra1 should keep the limit to 100. However, when I run this with debug messages enabled, I get a message indicating that the query would be in excess of 500 rows and it would not run. I increased ExtraTrackedRowsLimit to 2000 and it worked, but the option I provided should have lowered that. I even tried without ExtraTrackedRowsLimit, with MaxNbOfExtra1=100, but got the same result: no limit to 100, and the "excess of 500" error. I have URLWithQuery=1, and my reports do run properly along with my regex filters. I am using MinHitExtraX to limit the rows and that works, but why can I not get the MaxNbOfExtraX option to work? Any ideas? Thanks in advance.
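
    Per the AWStats documentation, the two directives appear to cap different stages: ExtraTrackedRowsLimit bounds how many distinct rows the analyzer will track while parsing the logs (default 500), while MaxNbOfExtraX only trims what the finished report displays. If that reading is right, the analysis-time cap has to be raised no matter how small the display cap is; a sketch:

        # analysis-time cap: how many distinct rows AWStats will track at all
        ExtraTrackedRowsLimit=2000
        # display-time cap: how many of those rows the report page shows
        MaxNbOfExtra1=100
        MinHitExtra1=4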

  • Dual Boot Installing Ubuntu 12.04 with Windows 7 (64) on a non UEFI system fails

    - by Randnum
    I cannot seem to install the correct boot loader for a non-UEFI firmware system. I'm trying to install Ubuntu 12.04 and Windows 7 (64), which are technically compatible with GPT, but for Windows only if the firmware is UEFI-enabled. My system uses the old BIOS scheme and does not support UEFI. Therefore, whenever I finish my Ubuntu install and try to install Windows, I get a "cannot install to GPT partition type" error. Even if I use GParted to format a special NTFS partition for Windows, Windows can't handle the GPT partition style because the machine doesn't have UEFI. But my Ubuntu install always forces GPT during installation and never asks if I want to use the old BIOS-style MBR instead. How do I resolve this? Both OSes will install fine on their own; the problem is that when I try to install the second OS, it doesn't recognize any of the other's partitions and tries to write its own on top of them. I've tried both OSes first and always run into the same problem. Since there is no way to make Windows recognize GPT without upgrading my motherboard, how do I tell Ubuntu to use the old BIOS MBR on install? Do I have to download a special Ubuntu with a specific GRUB version? Or should I manually configure my partitions somehow to force it not to use GPT? Thank you,
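
    There shouldn't be any need for a special Ubuntu build: the installer generally respects an existing partition table, so one approach is to stamp the disk with an MBR (msdos) label from the live session before either install. A destructive sketch, assuming the target disk is /dev/sda:

        # WARNING: erases the entire disk's partition table
        sudo parted /dev/sda mklabel msdos
        sudo parted /dev/sda print    # confirm "Partition Table: msdos"

    With an msdos label in place, both the Ubuntu and Windows 7 installers can partition the disk without UEFI entering the picture.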

  • Can't remote into Virtual PC

    - by Spamela
    I used to be able to remote into my Virtual PCs. It has been working for at least a year; yesterday it just stopped working... I cannot figure it out... Things I have triple-checked:

        1. My Virtual PCs have "Allow Remote Access" checked.
        2. My Virtual PCs have an account in the Administrators group that is password protected.
        3. My host's registry entry for the Terminal Services port is still the default of 3389.

    So here is the strange thing: I can't even remote into the Virtual PC from its host, much less from another PC... From the host, I can ping the Virtual PC and get a response, but when trying to remote into it from the host I get the following error:

        Remote Desktop can't connect to the remote computer for one of these reasons:
        1) Remote access to the server is not enabled
        2) The remote computer is turned off
        3) The remote computer is not available on the network

    My host is running Windows 7; the Virtual PCs are running XP. Thank you for looking at this!
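
    A quick way to tell whether the XP guest is even listening on the RDP port, sketched from the host (the guest address is a placeholder; the telnet client is an optional feature on Windows 7 and may need enabling under "Turn Windows features on or off"):

        telnet 192.168.131.70 3389

    A blank window means the port is open and the problem sits higher in the RDP stack; "Could not open connection" points at the guest's Remote Desktop setting or its firewall.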

  • Ubuntu odd mouse and keyboard behavior when window gains inner focus

    - by Scott
    This morning on my Ubuntu 10.10 system, when I open a window (for example, System > Preferences > About Me) and click into a field such as "Work email", I can no longer close the window with the mouse! Clicking the X on the window will not work. Also, I lose the ability to click on anything else: clicking on the desktop, icons, menus, workspaces, etc. does not work. Even the effect when you hover over a folder on the desktop and that folder highlights stops until the window is closed. If I open this same screen but do not click into a field, everything is fine; I can close the window with the X and everything else works fine. The same thing happens with several other windows I tried. Even Calculator: I can open it, and everything is fine until I click on a button in the calculator; then I have no ability to click on anything else and have to Alt-F4 to close the window. The system is only about a week old from a fresh install (64-bit Ubuntu, quad-core AMD machine). I uninstalled Wine, turned off remote desktop and disabled it in startup, and also disabled visual assistance, Bluetooth, Dropbox, and Klipper in startup. Reboot: no difference. The only other non-standard thing I see in startup is nvidia. I'm using a Logitech USB mouse and a Saitek USB keyboard. It was working fine yesterday, and I cannot think of anything I did or installed yesterday. I switched themes, then went to Update Manager, saw two X server / X.org related updates, installed them, rebooted, and NOW IT IS FINE! However, I then re-enabled Dropbox, Klipper and remote desktop, rebooted, and the problem is back. Again I disabled them and rebooted. The problem is still there!! So somehow I fixed it, at least for a few minutes, but now it is back and I am out of ideas.

  • Adobe Reader XI doesn't allow the editing of fields in a document after it's been edited?

    - by leeand00
    One of the users at our company has a *.pdf file she received from the state of Pennsylvania. The version of Adobe Reader she is using is Adobe Reader XI 11.0.3. She uses this PDF file to send in a report. Her workflow goes like this:

        1. She makes a copy of the file.
        2. She opens the file, and a purple banner displays at the top: "Please fill out the following form. You can save data typed into this form. Highlight Existing Fields"
        3. She fills in the specifics by entering values into the form fields.

    A few weeks later she returns to the same PDF document and can no longer edit the fields. Instead she gets the following message:

        "This document enabled extended features in Adobe Reader. The document has been changed since it was created and use of extended features is no longer available. Please contact the author for the original version of this document."

    She's also running Windows 7, and I've been told that the issue was once fixed by setting compatibility mode on Adobe Reader XI to Windows XP SP3.

  • Viewing Postscript (or PDF) on OS X: Aliasing issues

    - by mankoff
    I am generating PostScript graphics and am trying to find a balance between no anti-aliasing and over-aliasing. If I use the raw Ghostscript viewer gs on the PostScript, it looks good: the text appears anti-aliased, but the image remains nice and blocky. Unfortunately, gs has no real user interface and loses all of the nice things that Preview.app has. I could install gv, but the dependency bloat is huge! It requires all of GNOME. And even that isn't a great viewer compared to Preview.app or Skim.app. Here is an image viewed with gs: (screenshot omitted from this listing). From a user-interaction and Mac-ish perspective, Preview.app (or Skim.app) is a much nicer program to use. They have the option to turn anti-aliasing on or off, but neither option looks very good. With anti-aliasing on, the image is blurry. When it is off, the graphic matches what is seen from gs, but there are two issues. Minor issue: the font is ugly, uglier than with gs. Major issue: every PDF is un-anti-aliased, making it hard to read regular PDFs full of text. So, in summary:

        1. Is there a way to manually generate the PDF from the PS that overcomes these issues?
        2. Is there a way to find a middle ground of aliased/anti-aliased rendering with Preview.app?
        3. Is there another app that displays with quality like gs, but has a decent UI like Skim.app or Preview.app?
        4. Is there a way to have Preview.app turn off anti-aliasing for only one file (containing graphics) but leave it enabled in general, so that text PDFs are still readable?
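
    For question 1, the blurriness usually comes from the distiller recompressing and resampling raster images; a sketch of a ps2pdf invocation that keeps images losslessly encoded (standard Ghostscript distiller parameters, worth checking against the installed gs version):

        ps2pdf -dAutoFilterColorImages=false -dAutoFilterGrayImages=false \
               -dColorImageFilter=/FlateEncode -dGrayImageFilter=/FlateEncode \
               input.ps output.pdf

    The resulting PDF should keep the blocky, pixel-exact images while the text remains ordinary anti-aliasable type in Preview.app.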
