Search Results

Search found 3195 results on 128 pages for 'doc hoffiday'.


  • SharePoint extranet security concerns, am I right to be worried?

    - by LukeR
    We are currently running MOSS 2007 internally, and have been doing so for about 12 months with no major issues. There has now been a request from management to provide access from the internet for small groups (initially) which are comprised of members from other Community Organisations like ours. Committees and the like. My first reaction was not joy when presented with this request; however, I'd like to make sure the apprehension is warranted. I have read a few docs on TechNet about security hardening with regard to SharePoint, but I'm interested to know what others have done. I've spoken with another organisation who has already implemented something similar, and they have essentially port-forwarded from the internet to their internal production MOSS server. I don't really like the sound of this. Is it advisable/necessary to run a DMZ-type configuration, with a separate web front-end on a contained network segment? Does that even offer me any greater security than their setup? Some of the configurations from a TechNet doc aren't really feasible, given our current network budget. I've already made my concerns known to management, but it appears it will go ahead in some form or another. I'm tempted to run a completely isolated, separate install just for these types of users. Should I even be concerned about it? Any thoughts or comments would be most welcome at this point.

    Read the article

  • Word Document Turns to Read-Only

    - by Psycho Bob
    I am running into an issue with a user whose Word document is somehow turning itself into Read-Only. The user is using Word 2003 and is accessing a document that is in a Server 2008 share. The document itself starts out as a normal, editable document (user has Full Control permissions), and the user is able to save and do the 'normal' things you would do to a document. However, after a couple of saves, the document turns to Read-Only (according to the title bar) even though the Read-Only attribute is not checked on the document's properties. Here is some additional information about the situation: *User has approximately 5-8 Word documents open at a time *User saves the document frequently (sometimes at a frequency of once per minute) *Once the document is closed it will open as a normal document if reopened *When the document does turn to Read-Only the user will do a "Save As" on the document and save it as FILENAME # where # is some increment of how many times this has happened (some documents are up to their 30th iteration) I understand that there is probably some room for user education here and that they could just be copying the RO document to a new one, closing and opening the RO doc, then copying all the information back. However, I would like to get to the root cause of the problem and try to stop it from happening in the first place. UPDATE: Apparently the reinstall did not fix the issue. I researched the issue a bit more and found that disabling the background save may take care of it, but I haven't had a chance to try it yet. Does anyone else have any other ideas?
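
    On the background-save experiment mentioned above: in Word 2003 it can be switched off per user from Tools > Options > Save (untick "Allow background saves"), or with the one-line VBA toggle below. This is only that experiment written down, not a confirmed fix for the Read-Only symptom.

        ' Word 2003: run once from the Immediate window (Alt+F11, then Ctrl+G)
        ' to disable background saves for the current user profile
        Application.Options.BackgroundSave = False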

    Read the article

  • Require and Includes not Functioning Nginx Fpm/FastCGI

    - by Vince Kronlein
    I've split up my FPM pools so that php will run under each individual user and set the routing correctly in my vhost.conf files to pass the proper port number. But I must have something incorrect in my environment because on this new domain I set up, require, require_once, include, include_once do not function, or rather, they may not be getting passed up to the interpreter to be rendered as php. Since I already have a Wordpress install on this server that runs perfectly, I'm pretty sure the error is in my server block for nginx. server { server_name www.domain.com; rewrite ^(.*) http://domain.com$1 permanent; } server { listen 80; server_name domain.com; client_max_body_size 500M; index index.php index.html index.htm; root /home/username/public_html; location / { try_files $uri $uri/ index.php; } location ~ \.php$ { if (!-e $request_filename) { rewrite ^(.*)$ /index.php?name=$1 break; } fastcgi_pass 127.0.0.1:9002; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } location ~ /\.ht { deny all; } } The problem I'm finding I think is that there are dynamic calls to the doc root index file, while all calls to anything within a sub-folder should be routed as normal ie: NOT passed to index.php. I can't seem to find the right mix here. It should run like so: domain.com/cindy (file doesn't exist) --> index.php?name=$1 domain.com/admin/anyfile.php (files DO exist) --> admin/anyfile.php?$args
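
    A minimal sketch of how the two location blocks could be arranged so that only requests with no matching file fall back to the front controller, while real .php files in sub-folders (e.g. /admin/anyfile.php) keep executing directly. The leading slash on /index.php, the name=$uri mapping and the =404 guard are assumptions to adapt, not taken from the working WordPress vhost mentioned above.

        location / {
            # only rewrite to the front controller when nothing on disk matches
            try_files $uri $uri/ /index.php?name=$uri&$args;
        }

        location ~ \.php$ {
            # existing PHP files are executed as-is; anything else returns 404
            # instead of being silently rewritten
            try_files $uri =404;
            fastcgi_pass 127.0.0.1:9002;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }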

    Read the article

  • Cannot Install Phusion Passenger 3.0.13 with Nginx 1.2.1

    - by LightBe Corp
    I installed gem Passenger which installed 3.0.13. Then I executed passenger-install-nginx-module which is what the Nginx instructions on http://www.modrails.com said to do. It installs the latest stable version which is 1.2.1 according to the Nginx official wiki page. I said to install Nginx to /usr/local/nginx (which is the default if you go to the nginx wiki website). I get the following errors: Undefined symbols for architecture x86_64: "_pcre_free_study", referenced from: _ngx_pcre_free_studies in ngx_regex.o ld: symbol(s) not found for architecture x86_64 collect2: ld returned 1 exit status make[1]: *** [objs/nginx] Error 1 make: *** [build] Error 2 -------------------------------------------- It looks like something went wrong Please read our Users guide for troubleshooting tips: /Users/server1/.rvm/gems/[email protected]/gems/passenger-3.0.13/doc/Users guide Nginx.html If that doesn't help, please use our support facilities at: http://www.modrails.com/ We'll do our best to help you. I have done searches for several hours trying to find a resolution. I tried the Google Group for Phusion Passenger but did not find anything. I do not know if there is a mismatch in version numbers or not. The documentation says nothing about this error.
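
    For what it's worth, pcre_free_study() only exists in PCRE 8.20 and later, so one hedged reading of this error is that nginx 1.2.1 is being linked against an older system PCRE on OS X. A possible workaround is to point nginx's build at a newer PCRE source tree; the --extra-configure-flags switch and the PCRE version/path below are assumptions, not verified against this exact Passenger release.

        # download and unpack a recent PCRE release (8.20+) into ~/src/pcre-8.32,
        # then hand its path to nginx's ./configure via the Passenger installer
        passenger-install-nginx-module \
          --extra-configure-flags="--with-pcre=$HOME/src/pcre-8.32"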

    Read the article

  • legitimacy of the tasks in the task scheduler

    - by Eyad
    Is there a way to know the source and legitimacy of the tasks in the Task Scheduler in Windows Server 2008 and 2003? Can I check if the task was added by Microsoft (i.e. from SCCM) or by a 3rd party application? For each task in the Task Scheduler, I want to verify that the task has not been created by a third party application. I only want to allow standard Microsoft tasks and disable all other non-standard tasks. I have created a PowerShell script that goes through all the xml files in the C:\Windows\System32\Tasks directory and I was able to read all the xml task files successfully but I am stuck on how to validate the tasks. Here is the script for your reference: Function TaskSniper() { #Getting all the files in the Tasks folder $files = Get-ChildItem "C:\Windows\System32\Tasks" -Recurse | Where-Object {!$_.PSIsContainer}; [Xml] $StandardXmlFile = Get-Content "Edit Me"; foreach($file in $files) { #constructing the file path $path = $file.DirectoryName + "\" + $file.Name #reading the file as an XML doc [Xml] $xmlFile = Get-Content $path #DS SEE: http://social.technet.microsoft.com/Forums/en-US/w7itprogeneral/thread/caa8422f-6397-4510-ba6e-e28f2d2ee0d2/ #(get-authenticodesignature C:\Windows\System32\appidpolicyconverter.exe).status -eq "valid" #Display something $xmlFile.Task.Settings.Hidden } } Thank you
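
    One hedged way to extend the script above is to pull the executable out of each task's <Actions><Exec><Command> node and check its Authenticode signature and signer, as the commented TechNet link hints. The "Microsoft" subject filter is an assumption about what counts as a standard task, and catalog-signed system binaries may still show up as unsigned on older PowerShell versions, so treat this as a first pass rather than a verdict.

        # inside the foreach loop, after [Xml] $xmlFile = Get-Content $path
        $cmd = $xmlFile.Task.Actions.Exec.Command
        if ($cmd) {
            # strip surrounding quotes and expand things like %windir%
            $exe = [Environment]::ExpandEnvironmentVariables($cmd.Trim('"'))
            $sig = Get-AuthenticodeSignature -FilePath $exe
            $looksMicrosoft = ($sig.Status -eq 'Valid' -and
                               $sig.SignerCertificate.Subject -like '*Microsoft*')
            "{0}`t{1}" -f $file.Name, $looksMicrosoft
        }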

    Read the article

  • Updated my WAMP Server and MySQL is eating up 580 MB of memory

    - by Jon
    I updated my dev-box's WAMPSERVER, and along with updating PHP and Apache, MySQL updated to '5.6.12'. After doing that, I copied the data folder from my old (5.1.36) install to the new one and now MySQL takes up 580mB which is way too much, since I'm the only person using it (Locally) and there are only 20 or so databases on it, none of which have 'memory' tables. How can I get this down to a decent amount? My my.ini: # For advice on how to change settings please see # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the # *** default location during install, and will be replaced if you # *** upgrade to a newer version of MySQL. [mysqld] # Remove leading # and set to the amount of RAM for the most important data # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%. # innodb_buffer_pool_size = 128M # Remove leading # to turn on a very important data integrity option: logging # changes to the binary log between backups. # log_bin # These are commonly set, remove the # and set as required. # basedir = ..... # datadir = ..... # port = ..... # server_id = ..... # Remove leading # to set options mainly useful for reporting servers. # The server defaults are faster for transactions and fast SELECTs. # Adjust sizes as needed, experiment to find the optimal values. # join_buffer_size = 128M # sort_buffer_size = 2M # read_rnd_buffer_size = 2M sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES Database info: Storage Engine Data Size Index Size Total Size InnoDB 48.00 KB 0.00 B 48.00 KB MEMORY 0.00 B 0.00 B 0.00 B MyISAM 163.64 MB 122.49 MB 286.13 MB Total 163.69 MB 122.49 MB 286.18 MB
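
    MySQL 5.6 enables the Performance Schema by default and raised several cache and buffer defaults compared with 5.1, which is the usual explanation for a jump like this after an upgrade. A hedged my.ini sketch for a single-user dev box follows; the values are starting points to experiment with, not tuned recommendations.

        [mysqld]
        # biggest single win on 5.6 for a small dev box
        performance_schema = off

        # shrink caches that 5.6 sizes generously by default
        table_definition_cache  = 400
        table_open_cache        = 256
        innodb_buffer_pool_size = 32M
        key_buffer_size         = 32M
        max_connections         = 25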

    Read the article

  • nginx - proxy_pass is working - Apache isn't doing what it should...

    - by matthewsteiner
    So, I've got this in my nginx.conf: location ~* ^.+.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ { root /var/www/vhosts/example.com/public/; access_log off; expires 30d; } location / { proxy_pass http://127.0.0.1:8080/; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } So anything that is a "static file" that exists will just be done with nginx. Otherwise, it should pass it off to Apache. Right now, static files are working correctly. However, if something is passed to apache and it's example.com or subdomain.example.com, apache just spits out the "Apache 2 Test Page" that you get if there's nothing there. Apache worked fine before, so I'm guessing it has to do with the way nginx is "asking". I'm not sure though. Any ideas?
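
    Since the Host header is already being forwarded, one hedged suspect is that Apache has no name-based virtual host bound to the address and port nginx proxies to (127.0.0.1:8080), so every request falls through to the default test page. A sketch of what the Apache 2.2 side might need, with paths and names as placeholders:

        # httpd.conf / vhost include -- Apache must serve name-based vhosts
        # on the same address:port that nginx proxy_passes to
        Listen 127.0.0.1:8080
        NameVirtualHost 127.0.0.1:8080

        <VirtualHost 127.0.0.1:8080>
            ServerName   example.com
            ServerAlias  subdomain.example.com
            DocumentRoot /var/www/vhosts/example.com/public
        </VirtualHost>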

    Read the article

  • Mac Mavericks, ngircd localhost works, private IP doesn't

    - by user221945
    I have configured ngircd to listen on my private ip address. It doesn't. Localhost works fine. Configuration test: ngIRCd 21-IDENT+IPv6+IRCPLUS+SSL+SYSLOG+TCPWRAP+ZLIB-x86_64/apple/darwin13.2.0 Copyright (c)2001-2013 Alexander Barton () and Contributors. Homepage: http://ngircd.barton.de/ This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Reading configuration from "/opt/local/etc/ngircd.conf" ... OK, press enter to see a dump of your server configuration ... [GLOBAL] Name = irc.bellbookandpistol.com AdminInfo1 = Jaedreth AdminInfo2 = San Diego County CA, US AdminEMail = [email protected] HelpFile = /opt/local/share/doc/ngircd/Commands.txt Info = Server Info Text Listen = 10.0.1.5,127.0.0.1 MotdFile = MotdPhrase = "Welcome to irc.bellbookandpistol.com" Password = PidFile = Ports = 6667 ServerGID = wheel ServerUID = root [LIMITS] ConnectRetry = 60 IdleTimeout = 0 MaxConnections = 0 MaxConnectionsIP = 6 MaxJoins = -1 MaxNickLength = 9 MaxListSize = 0 PingTimeout = 120 PongTimeout = 20 [OPTIONS] AllowedChannelTypes = #&+ AllowRemoteOper = no ChrootDir = CloakHost = CloakHostModeX = CloakHostSalt = kBih5mu\kVI!DC6eifT(hd4m/0'zb/=: CloakUserToNick = no ConnectIPv4 = yes ConnectIPv6 = no DefaultUserModes = DNS = yes IncludeDir = /opt/local/etc/ngircd.conf.d MorePrivacy = no NoticeAuth = no OperCanUseMode = no OperChanPAutoOp = yes OperServerMode = no RequireAuthPing = no ScrubCTCP = no SyslogFacility = local5 WebircPassword = [SSL] CertFile = CipherList = HIGH:!aNULL:@STRENGTH DHFile = KeyFile = KeyFilePassword = Ports = [OPERATOR] Name = [REDACTED] Password = [REDACTED] Mask = [CHANNEL] Name = #BBP Modes = tnk Key = MaxUsers = 0 Topic = Welcome to the Bell, Book and Pistol IRC Server! KeyFile = As you can see, it should be listening on 10.0.1.5, but it isn't. After turning on Apache manually, port 80 works on 10.0.1.5, but port 6667 doesn't. It only works on localhost. Is there some terminal command I could use or some config file I could edit to get this to work?
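
    A few hedged sanity checks from Terminal: confirm the interface really carries 10.0.1.5 when ngircd starts (binding fails if the daemon comes up before the address exists), see what is actually listening on 6667, and run ngircd in the foreground to catch its own bind errors.

        # does the box have 10.0.1.5 right now?
        ifconfig | grep "inet "

        # what, if anything, is listening on 6667, and on which address?
        sudo lsof -nP -iTCP:6667 -sTCP:LISTEN

        # run ngircd in the foreground with this config and watch for bind errors
        sudo ngircd -n -f /opt/local/etc/ngircd.conf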

    Read the article

  • SendMail not working in CentOS 6.4

    - by Kane
    I am trying to send e-mails from my CentOS 6.4 but it does not work. My knowledge about servers is quite limited, so I hope someone can help me. Here is what I did: First i tried to send an email using the "mail" command, but it was not in the OS so I installed it. # yum install mailx After that, I tried sending an email using the "mail" command, but it did not send anything. I checked it on the internet and I realized I needed an e-mail server like sendmail, so I installed it. # yum install sendmail sendmail-cf sendmail-doc sendmail-devel After that, I configured it following some tutorials. First, sendmail.mc file. # vi /etc/mail/sendmail.mc Commented out the next line: BEFORE # DAEMON_OPTIONS('Port=smtp, Name=MTA') dnl AFTER # dnl DAEMON_OPTIONS('Port=smtp, Name=MTA') dnl Check that the next lines are correct: # FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl # ... # FEATURE(use_cw_file)dnl # ... # FEATURE(`access_db', `hash -T<TMPF> -o /etc/mail/access.db')dnl Update sendmail.cf # m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf Open the port 25 adding the proper line in the iptables file # vi /etc/sysconfig/iptables # -A INPUT -m state --state NEW -m tcp --dport 25 -j ACCEPT restart iptables and sendmail # service iptables restart # service sendmail restart So i thought that would be ok, but when i tried: # mail '[email protected]' # Subject: test subject # test content #. I checked the mail log: # vi /var/log/maillog And that is what I found: Aug 14 17:36:24 dev-admin-test sendmail[20682]: r7D8RItS019578: to=<[email protected]>, ctladdr=<[email protected]> (0/0), delay=1+00:09:06, xdelay=00:00:00, mailer=esmtp, pri=2460500, relay=alt4.gmail- smtp-in.l.google.com., dsn=4.0.0, stat=Deferred: Connection timed out with alt4.gmail-smtp-in.l.google.com. I do not understand why there is a connection time out. Am I missing something? Can anyone help me, please? Thank you.
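
    "Deferred: Connection timed out" against Google's MX servers usually points at outbound port 25 being filtered upstream (very common on consumer and VPS connections) rather than at the sendmail config itself. A hedged way to confirm, plus the classic workaround of relaying through the provider's smart host; the relay hostname is a placeholder.

        # if this hangs, outbound 25 is blocked and no sendmail setting will fix it;
        # relay through your provider's SMTP server instead
        telnet alt4.gmail-smtp-in.l.google.com 25

        # /etc/mail/sendmail.mc -- route outbound mail via a smart host
        define(`SMART_HOST', `smtp.your-isp.example')dnl

        # rebuild and restart
        m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
        service sendmail restart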

    Read the article

  • How to avoid index.php in Zend Framework route using Nginx rewrite

    - by Adam Benayoun
    I am trying to get rid of index.php from the default Zend Framework route. I think it should be corrected at the server level and not on the application. (Correct me if I am wrong, but I think doing it on the server side is more efficient). I run Nginx 0.7.1 and php-fpm 5.3.3 This is my nginx configuration server { listen *:80; server_name domain; root /path/to/http; index index.php; client_max_body_size 30m; location / { try_files $uri $uri/ /index.php?$args; } location /min { try_files $uri $uri/ /min/index.php?q=; } location /blog { try_files $uri $uri/ /blog/index.php; } location /apc { try_files $uri $uri/ /apc.php$args; } location ~ \.php { include /usr/local/etc/nginx/conf.d/params/fastcgi_params_local; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_param SERVER_NAME $http_host; fastcgi_pass 127.0.0.1:9000; } location ~* ^.+\.(ht|svn)$ { deny all; } # Static files location location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ { expires max; } } Basically www.domain.com/index.php/path/to/url and www.domain.com/path/to/url serves the same content. I'd like to fix this using nginx rewrite. Any help will be appreciated.
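
    The try_files fallback already lets clean URLs reach index.php; what's left is redirecting any request that still arrives with /index.php in it back to the clean form, so both URLs stop serving the same content. A hedged sketch, untested against this exact vhost:

        location / {
            # send /index.php/foo/bar (and bare /index.php) back to the clean URL
            rewrite ^/index\.php/(.*)$ /$1 permanent;
            rewrite ^/index\.php$ / permanent;
            try_files $uri $uri/ /index.php?$args;
        }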

    Read the article

  • Outlook 2007 + Exchange 2010 (Save All Attachments)

    - by RobertPitt
    About 3 weeks back our company upgraded our mail system to Exchange 2010. It all went smoothly, with a few issues but nothing major. A few days ago we had a call from a colleague who was unable to save all attachments, from File > Save As > Save All Attachments. When the email has a single attachment it works perfectly normally, and depending on the file type it allows you to save multiple attachments. But there are a lot of file types that will not work, such as zip, pdf, doc etc. Usually we get a location box opening up asking where we would like to drop the attachments, but it does nothing: you click Save All Attachments and nothing happens. After hours of research I have come across mixed results; a lot of people on forums have been explaining that they recently crossed over to Exchange 2010 and their issues started there. On the other hand, Microsoft released a KB (278188), which was discouraging, but that article was published in 2007, as stated by the time stamp, and Exchange 2010 has only come out recently. I'm looking to see if you guys have any clues what could be causing this, anything server-side that I can take a look at (AD, Exchange, ...). Any help on this is greatly appreciated.
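
    One commonly suggested check (not a confirmed fix for this exact symptom) is Outlook's hidden secure temporary folder filling up with stale attachment copies, which can make attachment operations fail silently. For Outlook 2007 its location is stored in the registry; with Outlook closed, the folder can simply be emptied.

        :: find where Outlook 2007 keeps its secure temp folder
        reg query "HKCU\Software\Microsoft\Office\12.0\Outlook\Security" /v OutlookSecureTempFolder

        :: then, with Outlook closed, delete the files in the folder that the
        :: query reports (the path below is only an example of its usual shape)
        del /q "%LOCALAPPDATA%\Microsoft\Windows\Temporary Internet Files\Content.Outlook\XXXXXXXX\*"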

    Read the article

  • Virtualize SBS 2003 - P2V vs migrating to new VM

    - by jlehtinen
    I need to virtualize a SBS 2003 server in my work environment. I need some tips on what people think is the best way to proceed. Background: The SBS 2003 server is the primary DC for the domain and also hosts FTP, RRAS(VPN), DNS, and file shares. Exchange is NOT used, neither is SQL server. DHCP is done via a firewall appliance. I have added a Server 2003 VM to the domain and promoted it to the DC role. AD/DNS is replicating here correctly. This was mainly done to provide fault-tolerance to the domain, I was not intending to make this VM the primary DC. I've already asked about buying upgraded licensing for Server 2008/2012 but was refused due to cost. Options: I see (at least) two routes I could take to complete this. From what I've read option 2 is the "preferred" method, but there's a few steps where I'm not clear on what to expect. Option 1.) P2V the primary DC Power off primary DC Power off secondary DC (to prevent USN rollback in case P2V has issue) P2V (cold clone) primary DC Boot new PDC VM Allow new hardware to detect Remove old NIC hardware from device manager Assign old IPs to new virtual NICs Reboot PDC VM, confirm connectivity and no major issues Power on secondary DC, confirm replication Option 2.) Create new VM, transfer roles, remove original DC from domain Create new VM, install SBS 2003 Do I need the original SBS install discs for this? MS migration doc mentions this. Add VM to domain, promote to DC role Does this start 7 day timer where two SBS servers can be in same domain? Set up RRAS on new VM Set up IIS/FTP on new VM Move file shares to new VM Transfer FSMO roles to new VM DC dcpromo original primary DC out of domain
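
    For the "transfer FSMO roles" step in option 2, the usual tool is ntdsutil run against the new DC; a hedged transcript-style sketch follows. The exact sub-command wording differs slightly between the 2003 and 2008 versions of ntdsutil, and NEWDC is a placeholder for the new VM's name.

        C:\> ntdsutil
        ntdsutil: roles
        fsmo maintenance: connections
        server connections: connect to server NEWDC
        server connections: quit
        fsmo maintenance: transfer schema master
        fsmo maintenance: transfer domain naming master
        fsmo maintenance: transfer infrastructure master
        fsmo maintenance: transfer RID master
        fsmo maintenance: transfer PDC
        fsmo maintenance: quit
        ntdsutil: quit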

    Read the article

  • Error compiling PHP 5.5.9 on CentOS 6.5 during make command

    - by Chris Mancini
    Here is the error message: cc: internal compiler error: Killed (program cc1) Please submit a full bug report, with preprocessed source if appropriate. See <file:///usr/share/doc/gcc-4.6/README.Bugs> for instructions. make: *** [ext/fileinfo/libmagic/apprentice.lo] Error 1 The very last thing make was processing is apprentice.lo which appears to be part of the image manipulation libraries (maybe?). I am using Ansible to provision my instance. It is a Digital Ocean single core 512MB VM. I have been using vagrant / ansible with the same config locally for dev and it has compiled fine, this is the first cloud VM I am attempting to provision. The only difference is the base image for my DO server is coming from DO and for my local dev, I built my own Vagrant box via VirtualBox from a stock CentOS basic server install. I pull it down from my DropBox. The problem has been experienced by others and reported as a php bug report My php ansible role up to the error: --- - name: Download php source get_url: url={{ php_source_url }} dest=/tmp register: get_url_result - name: untar the source package command: tar -xvf php-{{ php_version }}.tar.gz chdir=/tmp when: get_url_result.changed or php_reinstall - name: configure php 5.5 command: > ./configure --prefix={{ php_prefix }} --with-config-file-path={{ php_config_file_path }} --enable-fpm --enable-ftp --enable-mbstring --enable-pdo --enable-soap --enable-sockets=shared --enable-zip --with-curl --with-fpm-group={{ nginx_group }} --with-fpm-user={{ nginx_user }} --with-freetype-dir=/usr/lib64/ --with-gd --with-jpeg-dir=/usr/lib64/ --with-libdir=lib64 --with-mcrypt --with-openssl --with-pdo-mysql --with-pear --with-readline --with-tidy --with-xsl --with-zlib --without-pdo-sqlite --without-sqlite3 chdir=/tmp/php-{{ php_version }} when: get_url_result.changed or php_reinstall - name: make clean when reinstalling command: make clean chdir=/tmp/php-{{ php_version }} when: php_reinstall - name: make php command: make chdir=/tmp/php-{{ php_version }} when: get_url_result.changed or php_reinstall Thanks in advance for any help. :)
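
    "cc: internal compiler error: Killed" during a large compile on a 512MB droplet is very often the kernel's OOM killer taking out cc1 rather than a real gcc bug; adding swap before running make is the usual workaround. A hedged sketch, with size and path as arbitrary choices:

        # confirm it was an out-of-memory kill
        dmesg | grep -iE "killed process|out of memory"

        # create and enable a 1 GB swap file (as root)
        dd if=/dev/zero of=/swapfile bs=1M count=1024
        chmod 600 /swapfile
        mkswap /swapfile
        swapon /swapfile

        # optional: make it survive reboots
        echo '/swapfile none swap sw 0 0' >> /etc/fstab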

    Read the article

  • MAMP Pro virtual hosts on Mountain Lion not being recognized

    - by user135242
    I'm running MAMP PRO 2.1.1 on OSX 10.8.1 and not having any luck getting Apache to recognize any hosts that have been set up in MAMP besides localhost. This only happens when trying to use port 80. When I switch to port 8888 everything runs fine. Mountain Lion's Apache has been disabled and I get no errors or warnings when starting MAMP and the servers. Any doc root I set for "localhost" runs without problem. Any other server name I define, however, results in "cannot connect" when viewing in Chrome (not to be confused with "cannot find" - the browser is in fact following /etc/hosts to 127.0.0.1, but MAMP's Apache is simply not responding) I was wondering if anyone else has run into this issue or knows how to solve it. I'm working on some WordPress development and it keeps wanting to redirect to the base URL (with no port reference) even during setup. I'm sure I could fix things from the WP side but I'd rather figure out what the root issue is with MAMP. Thanks in advance for any insight you can provide.

    Read the article

  • DNSSEC - First Signature

    - by Arancha
    I'm testing DNSSEC with BIND 9.7.2-P2. I have a question regarding the first signature created over a zone that already exists. I'm using dynamic DNS. I create the first two keys: one KSK and one ZSK. According to https://datatracker.ietf.org/doc/draft-ietf-dnsop-dnssec-key-timing/, the first ZSK needs to be published for an interval equal to Ipub before it can be active. I create the ZSK with a Publication date prior to its Activation date. I restart the service and I can see that the key is published at the Publication date, but it is not activated later, when the Activation date arrives. This is the configuration of the zone dnssec.es in the named.conf file: zone "dnssec.es" { auto-dnssec maintain; update-policy local; sig-validity-interval 1; key-directory "dnssec/keys_dnssec"; type master; file "dnssec/db.dnssec.es"; }; Any clue?? Regards
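
    Two hedged things worth checking: whether the timing metadata actually stored in the key files says what you expect, and whether this version of named only re-reads key timing when the zone is reloaded or the server restarted (automatic key handling became noticeably smarter in later 9.8/9.9 releases, so a restart after the Activation date is a useful test). The key file name and algorithm below are placeholders.

        # print the Publish/Activate/Inactive/Delete dates recorded in a key file
        dnssec-settime -p all Kdnssec.es.+008+12345

        # example: create a ZSK that is published now and activates in one day
        dnssec-keygen -K dnssec/keys_dnssec -a RSASHA256 -b 1024 \
            -P now -A now+1d dnssec.es

        # after the Activation date, freeze/thaw (or restart named) and re-check
        # whether the zone's RRSIGs are now generated with the new ZSK
        rndc freeze dnssec.es && rndc thaw dnssec.es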

    Read the article

  • Issue with the InnoDB engine when re-enabling it after [ skip-innodb ]

    - by Ahn
    How do I enable InnoDB, which was previously disabled with the skip-innodb option? Case 1: Disabled InnoDB with the skip-innodb option, and SHOW ENGINES gives the following: Engine | Support ... | InnoDB | NO ...... Case 2: As I want to enable InnoDB, I commented out the skip-innodb option (#skip-innodb) and restarted. But now SHOW ENGINES does not even show the InnoDB engine in the list. Why? MySQL Version : 5.1.57-community-log OS : CentOS release 5.7 (Final) Log: 120622 13:06:36 InnoDB: Initializing buffer pool, size = 8.0M 120622 13:06:36 InnoDB: Completed initialization of buffer pool InnoDB: No valid checkpoint found. InnoDB: If this error appears when you are creating an InnoDB database, InnoDB: the problem may be that during an earlier attempt you managed InnoDB: to create the InnoDB data files, but log file creation failed. InnoDB: If that is the case, please refer to InnoDB: http://dev.mysql.com/doc/refman/5.1/en/error-creating-innodb.html 120622 13:06:36 [ERROR] Plugin 'InnoDB' init function returned error. 120622 13:06:36 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 120622 13:06:36 [Note] Event Scheduler: Loaded 0 events 120622 13:06:36 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.1.57-community-log' socket: '/data/mysqlsnd/mysql.sock1' port: 3307 MySQL Community Server (GPL)
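
    The "No valid checkpoint found" message generally means InnoDB found leftover ib_logfile*/ibdata files that don't match each other, for example log files left behind by an earlier failed initialisation. If there is no InnoDB data worth keeping yet, a hedged recovery sketch is to move the log files aside and let InnoDB recreate them. The datadir path and service name are guesses; back everything up first and do not touch ibdata1 if any InnoDB tables matter.

        service mysql stop        # or "mysqld", depending on the init script name

        cd /var/lib/mysql         # adjust to the datadir in my.cnf
        mv ib_logfile0 ib_logfile0.bak
        mv ib_logfile1 ib_logfile1.bak

        service mysql start
        mysql -u root -p -e "SHOW ENGINES"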

    Read the article

  • Troubleshooting extremely slow opening times in Win7 for documents on Win2k8 server

    - by Mazupan
    Hello. It's hard even to describe my problem. There seems to be a problem only with extremely slow opening times (up to 10 minutes) on Windows 7 (on XP things work fine) for files that are stored on Windows Server 2008. Here is what I have discovered so far. If I open (some files, not all, not always) .doc and .xls files by double-clicking, it takes up to 10 minutes to finally open the file. During that time, the file seems to be locked for all other users. If I cancel opening, the file remains locked for some time. The owner of those files is the one who last wrote changes to them. If I change the owner to a larger group which I am a member of, the file opens super fast. Once opened, the file can be saved normally and quickly, and that file reopens fast. One other user reports that there is only a problem when opening the files for the first time in a day. Once he opens the first file he has no problems with other files at all (or so he says). He also states that when accessing files from home via VPN he has no such problems with the files. So: does anybody have a clue where to start looking? I suppose it is a misconfiguration problem. But where? File system? Permissions? DFS? VMware network config? My setup is as follows: Physical server: HP ProLiant ML350 G6; Virtual host: VMware ESXi 4; Guest: Windows Server 2008 Standard. Files are accessed via DFS shares. Please help me. Thanks. Mazupan

    Read the article

  • How can I move linked Word/Excel files without breaking the links under Windows 7?

    - by DOUG NEEDHAM
    I currently operate under Windows XP and have multiple links between my Word and Excel files. I have to upgrade to Windows 7. When the .doc and .xls files are converted to .docm and .xlsm, respectively, the links no longer work. The Word document is still attempting to point back to the old .xls file rather than the new file. Also, creating new links between Word and Excel within Office 2010 doesn't seem to work. I create the new link, switch it from "Auto" to "Manual" and everything works fine. But when I copy the files to another folder, the Word document is still trying to link to the file in the previous folder rather than the new folder. This always worked in Windows XP. I've been using linked Word/Excel documents for 10+ years and have never really had a problem. I'm very careful to maintain Word and Excel filenames when moving the files to a new folder. The process has always been to 1.) move the files, 2.) update the links, 3.) rename the files, and 4.) update the links again. It's my understanding that under Windows XP, links between Word and Excel are relative. But under Windows 7 (and Office 2010?), those same links become fixed.
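
    If the immediate goal is just to repoint existing LINK fields after a move, a hedged VBA sketch like the one below (run from the Word document; the old and new folder strings are placeholders) rewrites each link's source path in place, which sidesteps the absolute-versus-relative behaviour entirely.

        Sub RepointExcelLinks()
            Dim fld As Field
            Const OLDPATH As String = "C:\OldFolder\"
            Const NEWPATH As String = "C:\NewFolder\"
            For Each fld In ActiveDocument.Fields
                If fld.Type = wdFieldLink Then
                    fld.LinkFormat.SourceFullName = _
                        Replace(fld.LinkFormat.SourceFullName, OLDPATH, NEWPATH)
                End If
            Next fld
        End Sub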

    Read the article

  • Application Virtualization

    - by Peter Versnee
    I have been looking for a solution to virtualize my desktop application as a SaaS. After picking Citrix XenApp as the best solution for my case and trying forever to get it to work (with some more experienced people), I can't help but conclude that Citrix has too many problems for my case. Every time I fix one thing, some other thing pops up. As I can't imagine Citrix XenApp is the only way to deliver my application to my clients, I'd like to ask you experienced guys if you think there is another way to deliver it whilst meeting my requirements. I've got no money issues - time on the other hand... My requirements: my application must run on a Windows server; the application should be virtualized with the application window only (no desktop); Content Redirection (my application is the only one virtualized, so PDF, DOC(X) etc. are opened on the users' system); easy user management; SSO;* cross-platform (Windows, Linux, Mac OS X).* I truly hope I don't annoy you with another instance of the same question. I've tried to find the same question here and - obviously - didn't find it.

    Read the article

  • Vim: How to Install Airline?

    - by MunkyCheez
    I'm just getting started with Vim. It's a fun experience, but I've found it to be kind of overwhelming. I'm trying to get this plugin installed, vim-airline, but I'm having a lot of trouble. The Installation section on the Github page simply states: copy all of the files into your ~/.vim directory Presumably, this means download the .zip, extract it, and copy all of those files into ~/.vim/. I did this, but Vim just starts up like normal, and running :help airline just gives: Sorry, no help for airline I assume that this means it isn't getting installed. Also, the statusbar remains the same. I'm new to Vim and would really like to get this working. Thanks in advance! EDIT: I also tried putting the files into /usr/share/vim/vim73/. No dice. EDIT 2: I ran :helptags ~/.vim/doc and now the help-page displays when I type :help airline, but I'm still not getting the plugin itself (the status bar). Vim looks the same, but it can now display the help page.
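
    Two things usually account for "copied the files but nothing changes": the statusline is only drawn all the time when laststatus is set, and (as you found) helptags have to be generated by hand for manually copied plugins. A minimal ~/.vimrc sketch under those assumptions:

        " ~/.vimrc
        set laststatus=2      " always show the statusline, even with a single window
        set t_Co=256          " let airline use 256 colours in most terminals

        " one-off, from inside Vim, after copying the plugin files:
        " :helptags ~/.vim/doc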

    Read the article

  • Is there a utility to visualise / isolate and watch application calls

    - by MyStream
    Note: I'm not sure what to search for so guidance on that may be just as valuable as an answer. I'm looking for a way to visually compare activity of two applications (in this case a webserver with php communicating with the system or mysql or network devices, etc) such that I can compare the performance at a glance. I know there are tools to generate data dumps from benchmarks for apache and some available for php for tracing that you can dump and analyse but what I'm looking for is something that can report performance metrics visually from data on calls (what called what, how long did it take, how much memory did it consume, how can that be represented visually in a call stack) and present it graphically as if it were a topology or layered visual with different elements of system calls occupying different layers. A typical visual may consist of (e.g. using swim-lane diagrams as just one analogy) a request flowing down through Network -> Linux (firewall/routing diagnostics) -> Apache (web request details) -> PHP (including other accesses to PHP files/resources) -> MySQL (total time, each call listed with its time and the tables hit/records returned), with the response tracked back up through each layer. My aim would be to be able to 'inspect' a request/range of requests over a period of time to see what constituted the activity at that point in time and trace it from beginning to end as a diagnostic tool. Is there any such work in this direction? I realise it would be intensive on the server, but the intention is to benchmark and analyse processes against each other for both educational and professional reasons and a visual aid is a great eye-opener compared to raw statistics or dozens of discrete activity vs time graphs. It's hard to show the full cycle. Any pointers welcome. Thanks! FROM COMMENTS: > XHProf in conjunction with other programs such as Percona Toolkit (percona.com/doc/percona-toolkit/2.0/pt-pmp.html) for MySQL; run apache with httpd -X & (single-threaded debug mode, in the background) then attach with strace -> KCachegrind
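
    To follow up on the XHProf suggestion from the comments, a hedged PHP sketch of wrapping a request so each run records the call graph plus per-function CPU and memory; the xhprof_lib include paths and the run namespace are assumptions about a typical PECL install.

        <?php
        // at the very top of the front controller
        xhprof_enable(XHPROF_FLAGS_CPU + XHPROF_FLAGS_MEMORY);

        register_shutdown_function(function () {
            $data = xhprof_disable();   // per-function wall time, CPU, memory, call counts
            include_once '/usr/share/php/xhprof_lib/utils/xhprof_lib.php';
            include_once '/usr/share/php/xhprof_lib/utils/xhprof_runs.php';
            $runs = new XHProfRuns_Default();
            // browse later via xhprof_html/index.php, or export to KCachegrind
            $runs->save_run($data, 'webserver_trace');
        });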

    Read the article

  • Is it possible to use Google Docs Viewer to view files already in Google Docs?

    - by john2x
    The title is a little confusing. I'll elaborate. As far as I can tell, the Google Docs Viewer tool accepts a link to a raw document file (e.g. .doc, .pdf, et. al.), and renders its contents in the browser. For example, this url to a pdf http://research.google.com/archive/bigtable-osdi06.pdf when passed to Viewer, returns this link: http://docs.google.com/viewer?url=http%3A%2F%2Fresearch.google.com%2Farchive%2Fbigtable-osdi06.pdf What I'm trying to achieve is, use the Viewer to view a document already hosted in Google Docs (i.e. no longer a raw document file). When passing a link to a Google Docs document to the Viewer, the result is not as expected. It renders the link's HTML source instead of the document's contents. The reason I want to do this is that I want to be able to use the "embed" feature of Viewer to view Google Docs documents. Does Google Docs have a "link to embeddable view" feature? P.S. Here is a sample snippet to an embedded document. This is what I want, but pointing to an existing Google Docs document. <iframe src="http://docs.google.com/viewer?url=http%3A%2F%2Fresearch.google.com%2Farchive%2Fbigtable-osdi06.pdf&embedded=true" width="600" height="780" style="border: none;"></iframe>

    Read the article

  • How to add extensions to a lot of files using content of each file?

    - by v8media
    I've got over 10,000 files that don't have extensions from older versions of the Mac OS. They're extremely nested, and they also have all sorts of strange formatting and characters. They don't have file types or creator codes attached to them any longer. A great deal of these files have text in the file that will let me determine extensions (for example Word.Document.8 is in every file created by that version of Word, and Excel.Sheet.8 in every file created with that version of Excel). I found a script that looks like it would work for one of these file types at a time, but it erases parts of filenames after nefarious characters, which is not good. find . -type f -not -name "." -print0 |\ xargs -0 file |\ grep 'Word.Document.8' |\ sed 's/:.*//' |\ xargs -I % echo mv % %.doc So, two questions from that: One is, should I clean the characters in the filenames first, or programmatically deal with those in the script in order to leave them the same? As long as I lose no information from the filenames, I don't see a problem cleaning out slashes and other problem characters. Also, if I clean the filenames, there are likely to be duplicates, so any cleaning script would have to add something like "-1" before the extension to make sure nothing gets lost. 2nd question is how do I change the script so that it will look for more than one file type at the same time and give each the proper extension? I'm not tied to this script, but it is understandable, which is a pro. Mac OS X 10.6 is installed on this file server, but I've got access to any recent versions of OS X. Thanks, Ian
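
    A hedged reworking of the same idea that avoids the xargs/sed filename mangling by reading NUL-separated paths in a loop and switching on the detected content per file; the marker strings, the "no dot means no extension" test and the duplicate-suffix logic are assumptions to adapt.

        #!/bin/bash
        # add extensions based on file contents; safe for odd filenames
        find . -type f ! -name "*.*" -print0 |
        while IFS= read -r -d '' f; do
            if grep -aq 'Word.Document.8' "$f"; then ext="doc"
            elif grep -aq 'Excel.Sheet.8' "$f"; then ext="xls"
            else continue
            fi
            target="$f.$ext"
            n=1
            while [ -e "$target" ]; do      # never clobber an existing file
                target="$f-$n.$ext"
                n=$((n + 1))
            done
            mv -- "$f" "$target"
        done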

    Read the article

  • Google Apps Domain Level Shared Contacts?

    - by dkirk
    My firm just switched to Google Apps Premier Edition 2 weeks ago and, aside from the way Google handles shared contacts, things are going quite well. Previously, on our Exchange server we had numerous shared contact lists set up in the shared folders. We had a separate list for vendors, sales agents, etc. Is there not a way to set up lists or groups like these at the domain level in Google Apps? I have found a ton of forums with users asking the same question but no good answers unless you purchase some third party app in the marketplace. I have toyed around with the "google-shared-contacts-client" here: http://code.google.com/p/google-shared-contacts-client/ and this almost does it but it falls short when trying to group contacts on the domain level or when trying to search for a contact by company name. Are either of these things possible? I am now looking to create a Google Doc spreadsheet to share with the domain just to have a separate, defined list of contacts that is searchable by various fields... If anyone could shed some light on domain-level contact sharing relating to the points above, I would be most grateful...

    Read the article

  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using Sql Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type "BACKUPIO". Of course it seems like this is an I/O subsystem limitation, but I'm skeptical. Perfmon stats for I/O on the production (source) server are well within normal trends for that server. The destination server shows a sustained 7MB/s write rate, which seems incredibly low, even for a slow disk. The network link is gigabit ethernet and nowhere near saturated. The few docs I've turned up about BACKUPIO indicate that it's not specifically a wait on I/O, surprisingly enough. This MSFT doc says it's abnormal unless you're using a tape drive, which I'm not. But it doesn't say (or I don't understand) exactly what resource is missing. http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005 And this piece says it's not related to I/O performance at all. http://www.informit.com/articles/article.aspx?p=686168&seqNum=5 "Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance." Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?
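
    For what it's worth, the usual description of BACKUPIO is the backup threads waiting for buffers to be filled or emptied, so with a slow destination it tends to accumulate on the writing side. A hedged pair of queries: one to watch the backup-related waits, one to experiment with larger backup buffers against the share (the numbers are starting points, not recommendations).

        -- cumulative backup-related waits since the last restart
        SELECT wait_type, waiting_tasks_count, wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type IN ('BACKUPIO', 'BACKUPBUFFER', 'BACKUPTHREAD');

        -- push bigger blocks at the network share during the backup itself
        BACKUP DATABASE [MyDb]
        TO DISK = N'\\backupserver\share\MyDb.bak'
        WITH BUFFERCOUNT = 16, MAXTRANSFERSIZE = 1048576, BLOCKSIZE = 65536;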

    Read the article
