Search Results

Search found 17356 results on 695 pages for 'document ready'.


  • How can I enable PHP5 for a site? Having problems with every single method.

    - by John Stephens
    I'm working on a client site that is hosted on someone's DIY Debian Linux server [Apache/1.3.33 (Debian GNU/Linux)], and I'm trying to install a script that requires PHP5. By default, the server parses .php files with PHP 4.3.10-22, which is configured at /etc/php4/apache/php.ini, according to phpinfo(). On the server I can see a config directory for PHP5 adjacent to the PHP4 directory: /etc/php5.0/apache2/php.ini. I have tried multiple methods to enable PHP5 for the document root where the site's files are hosted, including all available methods mentioned here. By far, the most common suggestion I've found is to add one or both of the following lines to the site's .htaccess file:
        AddHandler application/x-httpd-php5 .php
        AddType application/x-httpd-php5 .php
    Trouble is, when either or both of those lines are present, the site forces my browser to download any .php files requested, without parsing the PHP at all. All of the other methods mentioned in the above article cause a 500 Internal Server Error. There is no hosting control panel I can access in a browser to enable PHP5 for the site, but I do have shell access. When I asked the server administrator about this issue, he encouraged me to search for the answer on Google. Where could I begin to troubleshoot this issue? Are there ways to test or verify the server's specific PHP5 installation and configuration, using the command line or some other method? Do you have other suggestions to enable PHP5?
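
    Since shell access is available, a reasonable first step is to confirm what PHP5 pieces actually exist on the box before fighting with handlers; the AddHandler/AddType lines only help if something is installed and configured to serve that handler. A minimal diagnostic sketch, assuming a typical Debian layout (the package names and config paths below are guesses, not taken from this particular server):
        # Which PHP packages are installed, and is there a PHP5 CLI or CGI binary?
        dpkg -l | grep -i php
        which php5 php5-cgi php-cgi
        php5 -v 2>/dev/null || echo "no php5 CLI found"

        # Is an Apache 2 instance present alongside the old Apache 1.3?
        ls /etc/apache2/ 2>/dev/null && apache2ctl -M 2>/dev/null | grep -i php

        # Which PHP handler lines does the running Apache 1.3 actually load?
        grep -riE "LoadModule.*php|AddHandler.*php|AddType.*php" /etc/apache/ 2>/dev/null
    If only the PHP4 Apache module shows up, the .htaccess handler lines have nothing to hand requests to, which would be consistent with the forced downloads.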

    Read the article

  • IIS 7.5 error 500 in fastcgi module after upgrading wordpress to 3.0.2

    - by Maniac13
    I am running multiple WordPress blogs on the following setup: Server 2008 R2; IIS 7.5; PHP 5.3.3; MySQL 5.0.7. I upgraded my WordPress install from 2.9.2 to 3.0.2 (on 2 different sites) today and the upgrade went fine. I can serve .php pages without errors, log into the admin system, etc. I can browse my blog by going directly to mywebsite.com/index.php, but when I try to go to mywebsite.com (without the index.php) I get the 500 error below. I reset IIS and removed and re-attached the default document, but I am running out of ideas. If anyone has a solution for this, that would be great. This is the 500 error I am getting:
        Error Summary: HTTP Error 500.0 - Internal Server Error
        The page cannot be displayed because an internal server error has occurred.
        Detailed Error Information:
            Module: FastCgiModule
            Notification: ExecuteRequestHandler
            Handler: PHP FastCGI
            Error Code: 0x00000000
            Requested URL: http://mywebsite.com:80/index.php
            Physical Path: D:\mywebsite.com\index.php
            Logon Method: Anonymous
            Logon User: Anonymous
    Thanks, Stephan

    Read the article

  • Mapped network drive on logout

    - by Robuust
    I'm using a script to keep a mapped network connection alive, but of course the mapped connection is gone when I log out. The point is that I'm running this on Windows Server 2008 R2, where I use Remote Desktop to log in on the administrator account. It should remain logged in and not drop the mapped connection, as this script takes care of not being logged out of MS Office 365 SharePoint. Is there a way to keep the mapped network location (L:) available after logout, so the script can keep the connection alive?
        # Create an IE Object and navigate to my SharePoint Site
        $ie = New-Object -ComObject InternetExplorer.Application
        $ie.navigate('https://xxx.sharepoint.com/')
        # Don't need the object anymore, so let's close it to free up some memory
        $ie.Quit()
        # Just in case there was a problem with the web client service
        # I am going to stop and start it, you could potentially remove this
        # part if you want. I like it just because it takes out a step of
        # troubleshooting if I'm having problems.
        Stop-Service WebClient
        Start-Service WebClient
        # We are going to set the $Drive variable here, this is just
        # going to tell the command what drive letter to map. You can
        # change this to whatever you want (if you change it to a
        # drive that is already mapped it will overwrite it, so be careful).
        $Drive = "L:"
        # You can change the drive destination to whatever you want,
        # it has to be a document library or folder of course.
        $DrvDest = "https://xxx.sharepoint.com/files/"
        # Here is where we create the object to map the network drive and
        # then map the network drive
        $net = New-Object -ComObject WScript.Network;
        $net.mapnetworkdrive($Drive,$DrvDest)
        # That is the end of the script; now schedule this with Task
        # Scheduler to run every so often and you should be set.
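
    One small, hedged addition that may help here: since a drive mapping only lives as long as the session that created it, the scheduled script can simply re-create the mapping whenever the drive letter has gone away (for example after a logoff) before doing its keep-alive work. A sketch only, reusing the same $Drive and $DrvDest values assumed from the script above:
        # Re-map the SharePoint library if the drive letter is no longer present
        $Drive   = "L:"
        $DrvDest = "https://xxx.sharepoint.com/files/"
        if (-not (Test-Path "$Drive\")) {
            $net = New-Object -ComObject WScript.Network
            $net.MapNetworkDrive($Drive, $DrvDest)
        }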

    Read the article

  • Lotus Domino - DAOS not reducing file size?

    - by SydxPages
    I have implemented DAOS on a Lotus Domino server (8.5.3 FP2) as follows, in the Lotus Domino Server document:
        Store file attachments in DAOS: Enabled
        Minimum size of object before Domino will store in DAOS: 64000 bytes
        DAOS base path: E:\DAOS
        Defer object deletion for: 30 days
    Transaction logging is running, and the specific test database has the following advanced properties set:
        Domino Attachment and Object Service (ticked)
        Use LZ1 compression for attachments
        Compress Database Design
        Compress Data
    I have restarted the server. When I run a compact -c, it compacts the database but does not reduce the size. I have checked the DB in Windows Explorer (60 GB) and the size is the same pre and post. I have checked the directory (E:\DAOS) and it is 35 GB in size. When I run the command 'Tell DAOSMgr Status tmp\test.nsf', I get the following response. From looking it up on the net, I believe ticket count = 0 means that the DB is not really DAOS'ed?
        Admin Process: Searching Administration Requests database
        DAOSMGR: Status tmp\test.nsf started
        DAOS database status:
        Database: E:\Lotus\Domino\Data\tmp\test.nsf
        Database state = Synchronized
        Last resynchronized: 03/09/2012 02:49:13 PM
        Ticket count: 0
        DAOSMGR: Status tmp\test.nsf completed
    I have run fixup on the database. When I have tried to run the DAOS estimator it has always crashed; that was a problem with larger databases on earlier versions of Domino, but supposedly not anymore. Can anyone tell me why the size has not reduced? Am I missing anything?
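
    One thing that may be worth trying, offered as a sketch rather than a verified fix: enabling DAOS in the database properties alone does not move attachments that already exist, and the sequence usually reported for pushing them out (and getting a non-zero ticket count) is a copy-style compact with the DAOS switch, run from the server console. I may be misremembering the exact switch names, so please check them against the Domino documentation:
        load compact tmp\test.nsf -c -daos on
        tell daosmgr status tmp\test.nsf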

    Read the article

  • How to restore Linode to Vagrant VM?

    - by Iain Elder
    I'm trying to set up a Linux development environment so I can safely make changes to my website without breaking the live site. Linode hosts my live site. A simple solution would be to host my development server on Linode as well, but I want to avoid doubling my hosting costs. The cheapest way I see is to use Vagrant on my Windows workstation to host my development environment. After I attempt to restore the backup to Vagrant and reboot the VM, I can no longer ssh into the Vagrant host. It's probably because by restoring the backup I overwrite some special Vagrant configuration, but I'm not sure how to avoid that. How do I make this approach work? If my approach is fundamentally wrong, can you suggest an alternative?
    Creating the backup: On the Linode I used these commands to create a compressed copy of the entire filesystem, while ignoring things that shouldn't be included in the backup:
        $ sudo rsync -ahvz --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/backup/*} /* /backup/2
        $ sudo tar -czf /backup/2.gz /backup/2
    The backup file is called 2.gz because this is the second backup; the first backup is called 1.gz. I use WinSCP to copy the backup file to my Windows workstation.
    Setting up the Vagrant host: I need a Vagrant box that matches my Linode operating system (Ubuntu 12.04.3 LTS, kernel 3.9.3). I selected the closest match from vagrantbox.es: "Ubuntu Server Precise 12.04.3 amd64, Kernel is ready for Docker (Docker not included)". On my workstation I ran these commands to add the box and initialize and boot an instance:
        $ vagrant box add ubuntu-precise http://nitron-vagrant.s3-website-us-east-1.amazonaws.com/vagrant_ubuntu_12.04.3_amd64_virtualbox.box
        $ mkdir linode-test
        $ cd linode-test
        $ vagrant init ubuntu-precise
        $ vagrant up
    Now Vagrant is running a machine with SSH on port 2222. The operating system version is the same. The kernel version is 3.8.0. Sounds close enough.
    Restoring the backup: With WinSCP I copied the backup file 2.gz to /home/vagrant/2.gz on the Vagrant box. With PuTTY I connected via ssh to my new Vagrant box. On the box I moved the backup to the filesystem root:
        $ sudo mv 2.gz /
    Then I extracted the archive to the filesystem root:
        $ sudo tar -xvpz -f 2.gz -C / --strip-components=2
    (I discovered I need to use --strip-components because all files in the archive have the prefix backup/2/. I'll fix this for the next backup.) After the tar command completes, I log out of the box.
    Testing the backup: When I try to log in again, it doesn't let me log in as vagrant with a password any more. It does let me log in as iain, my user on the live Linode, with a password. That surprised me, because I disabled password authentication on my live Linode. I figured that I have to restart the ssh service for the change to take effect. Instead of restarting just ssh, I chose to restart the whole system. Now I can't even get to the login screen. PuTTY says "connection refused" when I try to connect. What went wrong?
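
    For what it's worth, the usual way to make this kind of whole-filesystem restore survivable is to keep the guest's own boot, network, fstab and SSH identity files out of the extraction, so the Vagrant insecure-key login and the VirtualBox-specific configuration are not overwritten. A sketch only; the exclude list below is an assumption about what matters on a stock Precise box, not a tested recipe:
        $ sudo tar -xvpz -f /2.gz -C / --strip-components=2 \
            --exclude='backup/2/boot/*' \
            --exclude='backup/2/etc/fstab' \
            --exclude='backup/2/etc/network/*' \
            --exclude='backup/2/etc/ssh/*' \
            --exclude='backup/2/etc/hostname' \
            --exclude='backup/2/home/vagrant/.ssh/*' \
            --exclude='backup/2/lib/modules/*'
    The excludes are matched against the member names as stored in the archive (with the backup/2/ prefix), before --strip-components is applied.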

    Read the article

  • Find and Replace several different values all at once

    - by matt
    I have a file with multiple instances of Text_1 and Text1, and I need to replace both of those strings with Text_A and TextB respectively. Currently I'm doing two Find and Replace passes on each file: one that finds Text_1 and replaces it with Text_A, and another that finds Text1 and replaces it with TextB. Is there any way to do this all at once instead of having to run "Find and Replace" twice? I am using Dreamweaver CS3, but I also have Notepad++, regular Notepad, OO Writer, and MS Word if those will be easier. Ideally I could do this in Dreamweaver or Notepad++, but I'm open to downloading something else to get the job done. I'd prefer not to have to do any command line stuff or create a batch file (while I'm aware of it, I don't really understand it).
    Edit: In case the above description isn't clear, let me explain it this way: I want to run Find & Replace one time in one document and have it do ALL of the following during that single pass:
        Find: Text_1 and Replace with: Text_A
        Find: Text1 and Replace with: TextB
    I am not trying to do a Find and Replace across several documents.
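
    For completeness, and only if the command-line aversion ever softens: both substitutions can be expressed as a single pass with sed (a sketch; page.html is a placeholder file name, and -i edits the file in place):
        sed -i -e 's/Text_1/Text_A/g' -e 's/Text1/TextB/g' page.html
    Within Notepad++ itself, recording the two Replace All operations as one macro (Macro > Start Recording, run both replacements, stop, then replay) is reported to work as a no-command-line alternative, though I haven't verified that myself.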

    Read the article

  • Win8/7/XP print spooler not getting along with Zebra ZT230 via WIFI

    - by Jonathan M
    I have a graphics-intensive 4"x6" label I'm printing to the ZT230. I'm printing multiple (10) copies. When connected via USB, all goes well. However, when connected via WiFi, I only get 2 of the labels. A Wireshark capture shows that at some point in the process my computer (presumably the Windows spooler) is sending a reset packet, which, I believe, would pretty much kill the print job. I'm getting the same results on Win8, Win7 and WinXP. The print job was originally generated with Zebra's ZebraDesigner2 software. For easier diagnosis, I captured it to a .prn file. The .prn file can be found here: https://drive.google.com/file/d/0BwxF_9SAkKzLLTF5bUJVT0lESUU/edit?usp=sharing And the Wireshark capture file can be found here: https://drive.google.com/file/d/0BwxF_9SAkKzLTGpSS0ktZW1xV28/edit?usp=sharing And the printer configuration listing: https://docs.google.com/document/d/1zh1Tw4D4yNa2uljOIL1kO2z8se9HK859irpUEwyxlyY/edit?usp=sharing I've started a discussion with Zebra Tech Support, and they're working on it, but I thought I'd toss it out here for more ideas since we're getting kind of stumped. Any ideas why this may be happening?

    Read the article

  • Hylafax: Encountering "No font metric information" when trying to send a fax

    - by Chau Chee Yang
    I am using HylaFAX 6.0.5 on Fedora 13 x86_64. As there is no rpm package available for Fedora 13, I used the source tarball to install HylaFAX myself. Everything seemed fine during compile and install. When I try to send a fax with sendfax, I encounter this error:
        # sendfax -n -d <fax-number> /etc/passwd
        /usr/local/sbin/textfmt: No font metric information found for "Courier-Bold".
        Usage: /usr/local/sbin/textfmt [-1] [-2] [-B] [-c] [-D] [-f fontname]
            [-F fontdir(s)] [-m N] [-o #] [-p #] [-r] [-U] [-Ml=#,r=#,t=#,b=#]
            [-V #] files... >out.ps
        Default options: -f Courier -1 -p 11bp -o 0
        Error converting document; command was "/usr/local/sbin/textfmt -B -f Courier-Bold -Ml=0.4in -p 11 -s default >'/tmp//sndfaxp5GdJ9' <'/etc/passwd'"
    It seems like there is a problem with the fonts. I have ghostscript-fonts installed too. I can't find hyla.conf in the path /etc/hylafax; there is no /etc/hylafax path in my file system. All configuration files seem to be located in /var/spool/hylafax/etc. Please advise. Thank you.
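
    Before changing any HylaFAX configuration, it may help to confirm where Ghostscript's Fontmap and .afm metric files actually ended up and whether textfmt can find fonts when told explicitly via its -F switch. A rough sketch; the 8.71 path is only a placeholder guess, substitute whatever the find turns up:
        # Where did Ghostscript put its Fontmap and font metric files?
        gs --version
        find /usr/share/ghostscript /usr/share/fonts -name 'Fontmap*' -o -name '*.afm' 2>/dev/null | head

        # Test textfmt directly, pointing -F at the directory found above
        /usr/local/sbin/textfmt -B -f Courier-Bold -F /usr/share/ghostscript/8.71/lib \
            -Ml=0.4in -p 11 -s default < /etc/passwd > /tmp/test.ps
    If that works, re-running /usr/local/sbin/faxsetup after the fonts are in place is, if I remember the install flow correctly, what records the font path in /var/spool/hylafax/etc/setup.cache so sendfax picks it up.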

    Read the article

  • Line-length-tolerant XML diff

    - by Jon Skeet
    I've looked at the answers to this question, and unfortunately none of them has helped me so far. Not to beat about the bush, the second edition of C# in Depth is now in copy edit. I want to be able to see what the copy editor's done really easily, so I can reject or accept his changes. We're using a modified form of docbook, but I'm happy enough looking at the raw XML source. All fine so far - except that when the copy editor makes a change, that can change the line wrapping. So something that used to read:
        <para>Foo bar baz
        second line</para>
    now reads:
        <para>Foo bar grontle
        baz second line</para>
    Now the real change here is the insertion of "grontle". I don't care that "baz" has moved from the first line to the second line... but all the diff tools I've seen do. I realise that one option would be to reformat the whole document (or possibly just whole paragraphs) into single lines... but that's then really hard to read, because diff tools don't wrap when they're displaying. I'm sure I can manage with the tools I've got, but if anyone knows of anything better, I'd be really glad to hear about it. I suspect my publishers would too :) (I've included the Windows tag here because I'd really need it to be available on Windows. I'd like to hear about any non-Windows software too, but only in case I could help to build it on Windows :)
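
    One family of tools worth considering here is word-level diffing, which compares token streams rather than lines and therefore shrugs off re-wrapping. A sketch of two candidates (chapter-old.xml and chapter-new.xml are placeholder names; both tools run on Windows as far as I know, wdiff via Cygwin or GnuWin32 and git via Git for Windows):
        # GNU wdiff: shows inserted/deleted words regardless of line breaks
        wdiff chapter-old.xml chapter-new.xml | less

        # git can word-diff two arbitrary files, no repository needed
        git diff --no-index --word-diff=color chapter-old.xml chapter-new.xml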

    Read the article

  • How much memory will a Windows file-server be able to use effectively?

    - by Zoredache
    In the near future we will be moving our fileserver to a newer box that will be running Windows 2008 R2. I want to know how much memory Windows will be able to use for a system that is just a file server. In searching around I found an old document for Windows 2000 that mentions the maximum size of the file-system cache is 960 MB. I suspect this limit no longer applies, but is there a new limit? The file server will be just a standard Windows file server. It will have 1 TB of attached storage. The large majority of the files accessed during the day are just typical Office documents. There are 80-100 people usually using the fileserver during a typical day. This system will only be used as a file server; it doesn't have any other roles. In Windows 2008 R2, are there any hard limits for the filesystem cache? What are they? The server we will be re-using for this purpose currently has 4 GB of memory, but it can be maxed out at 16 GB. Is there any value in doing this for a Windows file server? Are there any performance counters I can look at on the existing 2003 fileserver that will tell me whether adding more memory will be worthwhile?

    Read the article

  • CSS absolute position doesn't work in MS Word

    - by Tim
    Hello! This is a sample:
        <html>
        <head>
        <title>word test</title>
        </head>
        <body>
        <div style='position: absolute; width: 30px; height: 50px; top: 100px; left: 20px; border-color: black; border-width: 1px; border-style: solid;'>
        <p>Hello!</p>
        </div>
        </body>
        </html>
    Save it as "word.doc" and open it in MS Word. Absolute positioning doesn't work! The div is rendered at the top of the document and with 100% width. Why? I can't use HTML tables. Version of MS Word: 2003

    Read the article

  • Migrating 10-15 Websites Running Linux, LAMP, RoR, WordPress

    - by Michael
    The task is to move 10-15 websites running Linux to new servers hosted by Amazon. These boxes are currently on dedicated servers. Some sites are running WordPress, some have a custom CMS, and others might have RoR applications. Unfortunately, there is sparse documentation regarding each site and how services/files depend on each other, which means there is a lot of detective work to be done. My goal is to properly document each site, what makes it work, etc., so future admins have at least something to work with. Currently my strategy is to download each site so I have a backup of the files, then scan through them looking for configuration files -- db connections, apache configs, etc. -- and then create a nice spreadsheet with these findings and migrate these out to the new server. My question to ServerFault is this: are there things you would look out for? Easier ways to handle this task that I'm missing? Points will be awarded to answers that help with efficiency. Thanks in advance.
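
    A rough starting point for the detective work, since most of these stacks leave their dependencies in a handful of well-known places; a sketch only, with paths assumed from typical Debian/Red Hat layouts rather than taken from these particular servers:
        # What is actually listening, and which processes own those ports?
        netstat -tlnp

        # Apache/nginx vhosts and the docroots they point at
        grep -RiE 'ServerName|DocumentRoot|root ' /etc/apache2/ /etc/httpd/ /etc/nginx/ 2>/dev/null

        # Database credentials buried in the applications themselves
        grep -Rl --include='wp-config.php' DB_NAME /var/www 2>/dev/null
        grep -RlE 'mysql_connect|new PDO' /var/www 2>/dev/null
        find / -maxdepth 4 -name 'database.yml' 2>/dev/null    # Rails apps

        # Scheduled jobs that nobody documented
        for u in $(cut -d: -f1 /etc/passwd); do crontab -l -u "$u" 2>/dev/null; done
    Dumping the output of each command into the spreadsheet, per site, gives future admins the dependency map the question asks for.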

    Read the article

  • Don't break header/footer when making page landscape format

    - by Steeven
    I have a document with a footer with page numbering and a header with some centered text. Long story short: I flipped one page to landscape format. Long story long: I made a forced page break to get a blank page. I then highlighted this page (that is, I highlighted the empty line on this page), went to the page properties window, clicked on the landscape format button, and chose to apply this setting to the highlighted text only. The result: all that is of course no big deal, and my page is flipped the 90 degrees as wished - but my header and footer break! They disappear from this page... and actually also from the next page (which is still upright portrait format), which is weird. And then the header and footer are back on the following pages again, though the page numbering restarts from 1. On this screenshot you can see that the previous and the following pages have headers and footers, but not the flipped one and not the page following the flipped one. What just happened here? How can I make the page numbering continue without restarting, and how can I keep the header and footer on at least the upright page that suddenly doesn't have them?

    Read the article

  • --log-slave-updates is OFF but updates received from master are still logged to slave binary log?

    - by quanta
    MySQL version 5.5.14. According to the documentation, by default a slave does not log to its own binary log any updates that are received from a master server. Here is my config on the slave:
        # egrep 'bin|slave' /etc/my.cnf
        relay-log=mysqld-relay-bin
        log-bin = /var/log/mysql/mysql-bin
        binlog-format=MIXED
        sync_binlog = 1
        log-bin-trust-function-creators = 1

        mysql> show global variables like 'log_slave%';
        +-------------------+-------+
        | Variable_name     | Value |
        +-------------------+-------+
        | log_slave_updates | OFF   |
        +-------------------+-------+
        1 row in set (0.01 sec)

        mysql> select @@log_slave_updates;
        +---------------------+
        | @@log_slave_updates |
        +---------------------+
        |                   0 |
        +---------------------+
        1 row in set (0.00 sec)
    But the slave still logs the updates that are received from the master to its binary logs. Let's see the file sizes:
        -rw-rw---- 1 mysql mysql  37M Apr 1 01:00 /var/log/mysql/mysql-bin.001256
        -rw-rw---- 1 mysql mysql  25M Apr 2 01:00 /var/log/mysql/mysql-bin.001257
        -rw-rw---- 1 mysql mysql  46M Apr 3 01:00 /var/log/mysql/mysql-bin.001258
        -rw-rw---- 1 mysql mysql 115M Apr 4 01:00 /var/log/mysql/mysql-bin.001259
        -rw-rw---- 1 mysql mysql 105M Apr 4 18:54 /var/log/mysql/mysql-bin.001260
    And here is a sample query when reading these binary files with the mysqlbinlog utility:
        #120404 19:08:57 server id 3 end_log_pos 110324763 Query thread_id=382435 exec_time=0 error_code=0
        SET TIMESTAMP=1333541337/*!*/;
        INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'118212' COLLATE 'utf8_general_ci'))
        /*!*/;
        # at 110324763
    Did I miss something?
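
    One quick check that would settle whether these events really arrived via replication (a sketch; the hostnames are placeholders and the binlog file name is taken from the listing above): log_slave_updates only controls events applied by the replication SQL thread, while statements from clients that connect directly to the slave are always written to its binary log, and the server id stamped on each event distinguishes the two cases.
        # What ids do the two servers have?
        mysql -h master-host -e "SELECT @@server_id"
        mysql -h slave-host  -e "SELECT @@server_id"

        # How many events in the suspect binlog carry the master's id (3 in the excerpt)?
        mysqlbinlog /var/log/mysql/mysql-bin.001260 | grep -c "server id 3"
    If the logged events carry the slave's own id rather than the master's, they are coming from clients writing directly to the slave and log_slave_updates is behaving as documented.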

    Read the article

  • I accidentally hijacked my localhost

    - by Zach L
    Opening localhost in the browser points to a local webpage (examplePage) after playing with some config files a while back, and I can't figure out how to restore the default behavior.
    Background: I have XAMPP installed on my Windows 7 machine, and a webpage at c:/xampp/htdocs/examplePage. A couple weeks ago, I was on a mission to get site root-relative URLs (/resource) to work, so I played around with a bunch of apache/conf files, including httpd.conf and httpd-vhosts.conf, and was also messing with the Windows hosts file. I gave up at some point, didn't document exactly what I did, and have since probably forgotten some of what I did. Many of my changes stemmed from suggestions in this StackOverflow post.
    What I've tried: I commented out my additions to the hosts file. I turned off XAMPP (thus hopefully negating any effect from the Apache config files). I reverted to my original DocumentRoot in httpd.conf anyway (xampp/htdocs). localhost still displays examplePage, even with XAMPP turned on (my reverted DocumentRoot isn't taking effect). Does anyone know what I may have done and how I can fix it?
    Update: It's been resolved, thank you everyone so much. In Task Manager there were a couple of instances of httpd.exe (Apache HTTP Server). I ended these, then opened XAMPP and restarted Apache. All references to examplePage in my .conf files that I could find had been commented out or removed; I imagine the old versions were still in effect for some reason, and manually ending the Apache processes fixed this. As a point of interest, it's still a mystery why those processes were running - I cannot reproduce that situation. I must've stumbled upon a XAMPP bug of some sort.

    Read the article

  • Excel 2010 data validation warning (compatibility mode)

    - by Madmanguruman
    We have some legacy worksheets that were created in Excel 2003, which are used by LabVIEW-based test automation software. The current LabVIEW software can only handle the legacy .xls format, so we're forced to keep these worksheets as-is for the time being. We've migrated to Office 2010 and when working with these worksheets, I see this warning: "The following features in this workbook are not supported by earlier versions of Excel. These features may be lost or degraded when you save this workbook in the currently selected file format. Click Continue to save the workbook anyway. To keep all of your features, click Cancel and then save the file in one of the new file formats." "Significant loss of functionality" "One or more cells in this workbook contain data validation rules which refer to values on other worksheets. These data validation rules will not be saved." When I click 'Find', some cells that do indeed have validation rules are highlighted, but those rules are all on the same worksheet! We're using simple list-based validation, with some cells off to the side containing the valid values (for example, cell B4 has a List with Source "=$D$4:$E$4") This makes no sense to me whatsoever. One, the workbook was created in Excel 2003, so obviously we couldn't implement a feature that doesn't exist. Secondly, the modifications we're making don't involve changing the validation rules at all. Thirdly, the complaint that Excel is making is incorrect! All of the rules are on the same worksheet as the target. As if the story wasn't bizarre enough: I went ahead and saved the worksheet with Excel 2010. I then went to an old computer back in the lab and opened the document with Excel 2003. Guess what - the validations were untouched! My questions are: is this a legitimate bug in Excel 2010, or is this some exotic error in the legacy .xls worksheet that is confusing the heck out of Excel 2010? Has anyone else observed this issue working in compatibility mode?

    Read the article

  • Windows Server 2012 Migration (DNS/AD DS Standard Eval to Essentials OEM) P2V -> Do I need a Secondary Domain Controller during migration?

    - by Aubrey Robertson
    This is my first post on this exchange (although not my first on Stack Exchange), so please have patience. I am a 3rd year student intern, and I have been tasked with virtualizing the server systems at the company I work for. I have come a long way, and I am almost ready to install the VM server in migration mode. Here is some information:
    Source server: Windows Server 2012 Standard Evaluation, running:
        DNS Server (local only)
        Active Directory Domain Services
        File and Storage stuff
        A few other server roles
    Destination server: Windows Server 2012 Essentials OEM (Hyper-V client), running under a temporary Hyper-V host (I will migrate the Hyper-V host back to the old machine after the original server is virtualized as a client). It is currently sitting at the "Select Installation Mode" screen.
    I have been following the guides on Microsoft TechNet, and today I spent most of the day getting rid of issues in the Best Practices Analyzer on the source machine. I have 3 remaining issues (which are all related):
        ERROR: DNS: DNS servers on Ethernet (adapter name) should include the loopback address, but not as the first entry (the flavour text indicates that, during migration, the DNS server may not be found)
        WARNING: All domains should have at least two domain controllers for redundancy.
        WARNING: DNS: Ethernet should be configured to use both a preferred and an alternate DNS server.
    All of these issues can be resolved by deploying a secondary domain controller, but I have never done that before (see my concerns below). The main issue I am concerned with for installing in migration mode is the FIRST one (the error). If I try to set up the new server deployment and the adapter's domain controller is listed as localhost, then the installation may fail (at least, this is what the Microsoft documentation suggests). But I do not have another IP address to enter here, as I have no other local domain controllers. So I did the first obvious thing that came to my mind and tried to use Google DNS servers as my alternates. That did not work, because they couldn't recognize other computers in the "forest". Now I'm no expert when it comes to DNS, so please forgive my ignorance. This DNS server is concerned only with Active Directory stuff for the local network. If I go ahead with migration and it fails, then I will just have to go ahead and install a secondary DNS server, I suppose. The problem I have here is that I am limited by the number of Windows Server keys I have available (I have 2); however, I do have access to a Linux box running Debian Wheezy that I set up two weeks ago as a Mantis server. I could install Windows Server 2012 as a secondary DNS (I think) in a VM and use that, but then it seems like I will be wasting time, and probably the Windows key too, and if there's another way to do it with Linux that would be much better. Even better still, do I even need a secondary DNS server for migration at all? The hints said that during migration the original machine "might" not be found. Thank you for your time and consideration.

    Read the article

  • mod_rewrite REQUEST_FILENAME doesn't contain absolute path

    - by Paul Dixon
    I have a problem with a file test operation in a mod_rewrite RewriteCond entry which is testing whether %{REQUEST_FILENAME} exists. It seems that rather than %{REQUEST_FILENAME} being an absolute path, I'm getting a path which is rooted at the DocumentRoot instead.
    Configuration: I have this inside a <VirtualHost> block in my Apache 2.2.9 configuration:
        RewriteEngine on
        RewriteLog /tmp/rewrite.log
        RewriteLogLevel 5
        #push virtually everything through our dispatcher script
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L]
    Diagnostics attempted: That rule is a common enough idiom for routing requests for non-existent files or directories through a script. Trouble is, it's firing even if a file does exist. If I remove the rule, I can request normal files just fine. But with the rule in place, these requests get directed to dispatch.php.
    Rewrite log trace: Here's what I see in the rewrite.log:
        init rewrite engine with requested uri /test.txt
        applying pattern '^/([^/]*)/?([^/]*)' to uri '/test.txt'
        RewriteCond: input='/test.txt' pattern='!-f' => matched
        RewriteCond: input='/test.txt' pattern='!-d' => matched
        rewrite '/test.txt' -> '/dispatch.php?_c=test.txt&_m='
        split uri=/dispatch.php?_c=test.txt&_m= -> uri=/dispatch.php, args=_c=test.txt&_m=
        local path result: /dispatch.php
        prefixed with document_root to /path/to/my/public_html/dispatch.php
        go-ahead with /path/to/my/public_html/dispatch.php [OK]
    So, it looks to me like REQUEST_FILENAME is being presented as a path from the document root, rather than from the file system root, which is presumably why the file test operator fails. Any pointers for resolving this gratefully received...
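
    For what it's worth, in virtual-host (server) context these conditions run before the URL has been mapped to the filesystem, so %{REQUEST_FILENAME} still holds the URI at that point. A common workaround, sketched below against the same rule set but not tested on this particular server, is to build the on-disk path explicitly from %{DOCUMENT_ROOT} (or to move the rules into per-directory .htaccess / <Directory> context, where the mapping has already happened):
        RewriteEngine on
        # Test the real on-disk path by prefixing the document root explicitly
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
        RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L]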

    Read the article

  • Outlook, Word, and normal.dot (2003 Edition)

    - by mosiac
    I have one user who for some reason has been having macro issues with her normal.dot file. At first the fix was just to remove the file, because she doesn't actually need to save anything in it; that was really a temp fix. We found out that for some reason, every time she opened up Word it was trying to modify normal.dot without asking. I set it up to ask, so at least we could control the changes going to normal.dot. There was one disabled file in Word that we enabled, because it was a document she never used anymore, making us think that maybe that was the issue. We have automatic antivirus updates and scans, so there is little chance of a virus. The issue has stopped as far as using Word itself goes: she can open, close, edit, save, etc. and never gets the dialog. In Outlook, however, if she clicks reply or forward on an e-mail but decides not to send it and just closes it, she gets the pop-up asking to save changes to normal.dot. This leads me to believe that something in Outlook, in how she is set up to use Word as her e-mail editor, is causing the problem. Am I even on the right track here? Short form: Word by itself works fine with normal.dot; Word as an Outlook mail editor wants to change normal.dot. No idea what to do.

    Read the article

  • Integration of SharePoint 2010 with TFS2010

    - by Kabir Rao
    We have performed the following steps so far:
        Installed TFS 2010 10.0.30319.1 (RTM) on Windows Server 2008 R2 Enterprise (app tier)
        Installed SQL 2008 SP1 with Cumulative Update 2 on Windows Server 2008 R2 Enterprise (data tier)
        Reporting Services is installed on the app tier.
    After this installation worked fine, we installed SharePoint 2010 on the app tier. After installation we followed http://blogs.msdn.com/b/team_foundation/archive/2010/03/06/configuring-sharepoint-server-2010-beta-for-dashboard-compatibility-with-tfs-2010-beta2-rc.aspx for configuration. We are not able to perform the last step described in the link, as the following error occurred:
        TF249063: The following Web service is not available: http://apptier:31254/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. The underlying error is: The remote server returned an error: (404) Not Found. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://apptier:31254. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application.
    We have also noticed that the Documents folder in the Team Project also has a red X. Please help. Thanks upfront.

    Read the article

  • Moving users folder on Windows-7 to another partition - bad idea?

    - by Donat
    Hi, I'd like to re-submit here a question posted by Benjol on Aug 17 at 5:57, "Moving users folder on Windows Vista to another partition - bad idea?" (I can't post more than one link until I earn "10 reputation", and I removed my "answer" there to post my follow-up questions here). I am anxiously getting ready, at long last, to carry out a clean install (using the custom install option) from Vista to Windows 7 Home Premium 64-bit with the free upgrade I received late October. For my Vista system I successfully set up last summer a multi-partition scheme with Users and Program Data on a different partition than the operating system (see the link below, and its subsequent links in my comment, for details). http://tuts4tech.net/2009/08/05/windows-7-move-the-users-and-program-files-directories-to-a-different-partition/comment-page-1/#comment-562 I was planning a similar set-up for Windows 7, a little more streamlined, with OS and Program Files on C:, Users and Program Data on D:, and TV media recording on a separate partition. Reading the question submitted by Benjol, I am second-guessing too. Is moving Users and Program Data to a different partition than the default primary partition with OS and Program Files such a good idea? The couple of people I talked to at the official Microsoft Windows 7 booth at CES 2010 gave the same answer about moving the Users profile folder to another partition. In a nutshell, they all told me that they used to do this in XP, less so in Vista, and not anymore with Windows 7... "It is stable, after two months still no problem." I had the feeling it was a scripted answer to emphasize how Windows 7 is so stable and efficient... (Will a Windows 7 system not become bogged down over the course of several months to a year or two? Only time will tell.) Long story short, I share the same view that Benjol expressed with respect to being "able to backup and restore system and user data independently." I just received a 2 TB USB 2.0/eSATA external hard drive as a backup drive, which includes NTI Shadow 4 (4.1.0.150) as a backup solution. I took note of the issue with NTUSER.DAT and I will read more about Volume Shadow Copy Service (VSS) for Windows 7. I am willing to put in the effort if placing Users and Program Data on a different partition would allow me to restore a fresher OS + Program Files image when the system gets bogged down.
    Questions:
        Is it such a bad idea?
        What is the "easy route" referred to by Benjol in his post? Is it to just relocate folders to another partition using the folder Properties tool? (That is not practical for several users and might not provide a straightforward restore process of just OS and Program Files when needed.)
        I am starting to learn about Windows 7 libraries. Would Windows 7 libraries be another alternative to achieve this?
    All this reading to decide how to organize the partition scheme for my custom system is starting to be confusing. I apologize for this lengthy question. It is my first day here on SuperUser and I am just learning how different from a discussion thread it is. Thank you in advance for all your suggestions and comments. Donat

    Read the article

  • Photoshop CS6 Corrupted File recovery

    - by Ben Franchuk
    Last night I was working on a client application mock-up in Photoshop, but was going to take a break from my work, so I saved the .PSD file on my internal HDD and put my computer into stand-by mode once the file had finished saving. Unfortunately my computer crashed while it was entering stand-by and shut itself down (Photoshop was still open). I did not boot it again to make sure all my files were OK, because they had already been saved, but today once I opened up the file again it was extremely corrupted and also completely un-editable (screenshot below). So what I'm asking is: is there any way to recover my work, or at least some of it? I have put in a good few days' work on this project and would hate to have to restart it. The size of the file is 3070 KB, even though it reads as 712 KB in Photoshop. I don't know whether these file sizes are larger or smaller than the original non-corrupted file's size, but considering all the layers in the file I suspect it was larger before it corrupted. I'm using Windows XP Professional 32-bit SP3. Both my OS and said .PSD file are located on the same internal HDD (74.4 GB). I do have an external HDD (1.5 TB), but I primarily only use it for movies, music and TV shows. I don't know if it was plugged in at the time I last edited the document, though, if it means anything. I have tried many image and PSD recovery programs, but none have returned any results that may help recover my work. Edit: I tried using a photo recovery program (odboso Photorecovery) that actually seems to recover the corrupted file in question, judging by the size of the file, but I cannot recover it because of the licence fee. Knowing that the file is still likely on my HDD, where might it be located?

    Read the article

  • How can I work around problems with certificate configuration in Remote Desktop Services?

    - by Michael Steele
    I am setting up a Remote Desktop Services farm, and am having trouble configuring certificates for it to use. A demonstration of the problem I'm seeing can be found in Step #4. At this point I am convinced that there are problems with the user interface, and am looking for ways around them. Is there any way to configure certificates in Remote Desktop Services so that the settings hold and are reflected in the GUI? If not, is there any way for me to verify that the settings are correct?
    Step #1 - Create the certificate to be used. I've configured a certificate to use with RD Web Access. The certificate is stored within the Certificates MMC on my RD Connection Broker, and I am configuring the farm from that computer. By letting RD Web Access generate its own certificate, I found that the following properties are required:
        Enhanced Key Usage: Server Authentication, Client Authentication (this may not be required, but the self-signed certificate includes it)
        Key Usage: Digital Signature, Key Agreement
        Subject Alternative Name: DNS Name=domain.com
    Detour about self-signed certificate generation: As a quick detour, I was able to work around a problem with creating self-signed certificates using PowerShell. The documentation for the New-RDCertificate cmdlet gives the following example:
        PS C:\> $password = ConvertTo-SecureString -string "password" -asplaintext -force
        PS C:\> New-RDCertificate -Role RDWebAccess -DnsName "test-rdwa.contoso.com" -Password $password -ConnectionBroker rdcb.contoso.com -ExportPath "c:\test-rdwa.pfx"
    Typing this into the shell will result in an error message claiming that a function, Get-Server, cannot be found. Prior to using New-RDCertificate, you must import the RemoteDesktop module with Import-Module RemoteDesktop.
    Step #2 - Observe out-of-box behavior. The first time you visit the Deployment Properties dialog box, by navigating to Server Manager - Remote Desktop Services - Collections and selecting "Edit Deployment Properties" from the "TASKS" dropdown list in the "COLLECTIONS" grouping, you will see the following screen. This window is misleading because the level field is listed as "Not Configured". If I understand correctly, all three of the role services are using a self-signed certificate. For the RD Web Access role this can be verified by visiting the website; the certificate being used also appears in the Certificates MMC.
    Step #3 - Assign the new certificate. The Deployment Properties dialog box will allow me to select my existing certificate. The certificate must be placed within the local computer's Certificates MMC in the "Personal" certificate store. The private key will need to be exportable, and you will need to provide the password. I temporarily exported my certificate to a file named temp.pfx with a password, and then imported it into Remote Desktop Services from there. Once this is done the GUI will indicate that it is ready to accept the new configuration. Once I click the "Apply" button, the GUI indicates success. This can be verified by visiting the RD Web Access web site a second time: there is no certificate error.
    Step #4 - The GUI fails to maintain its state. If the GUI is closed and reopened, all of these settings appear to be lost. Actually, the certificate I configured is still being used, and I am able to continue accessing the RD Web Access site without any certificate errors. Oddly, if I use the "Create new certificate..." button to generate a self-signed certificate, this window will update to an "Untrusted" level. This setting will then be maintained through the opening and closing of the Deployment Properties dialog box. Is there anything I can do to have my settings appear to stick? I feel like something is wrong when the GUI claims I haven't fully configured certificates.
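
    On the "verify that the settings are correct" question, one route around the GUI is the same RemoteDesktop module used in the detour above. A sketch, assuming the module's Get-RDCertificate cmdlet is present in this build (rdcb.contoso.com stands in for the actual connection broker):
        PS C:\> Import-Module RemoteDesktop
        PS C:\> Get-RDCertificate -ConnectionBroker rdcb.contoso.com
        # Should list Role, Level, Subject and expiry for each role service,
        # independent of whatever the Deployment Properties dialog displays.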

    Read the article

  • What privilege level is required on a Windows client workstation on an Active Directory domain to break file locks?

    - by Mike Burton
    I'm not sure if I should be asking this here or on Stack Overflow, but here goes: I'm part of a team maintaining a document management application, and I'm trying to figure out Windows file locking permissions. We use a utility somebody downloaded years ago called psunlock to remotely close all locks on a file. We recently discovered that this does not work across different domains on our VPN. A little bit of digging led me to the Samba manual's discussion of file locking. I still don't really "get it", though. Does anyone have any insight to share into how the process of locking and breaking locks on files works in a network context? My thinking is that privileges are required both on the file appliance and on the client workstations which hold the locks. Is that accurate? Can anyone give a more specific version? Ideally I'm looking for something along the lines of "A user must have privilege level X in order to break locks held from a client workstation." In practice I'd be happy with a hotlink to a good white paper on the subject.

    Read the article

  • Cheapest way to connect 20-24 Sata II HDDs in a budget storage server?

    - by Joe Hopfgartner
    I need to assemble a high density storage server for as cheap as possible. It's been a while for me, and the last systems I integrated didn't even have SATA yet... During my research I of course stumbled across the Nexsan SATA Beast and the Backblaze storage pods, as well as some ridiculously overpriced HP ProLiant and Dell storage solutions. Finally I chose Norco cases as the way to go. My eye is set on the RPC-4020, which is a 4U 19" rackmount case with 20 hot-swap 3.5" SATA/SAS HDD trays (backplanes included) and room for two 2.5" OS drives as well as a slimline CD-ROM. The backplanes connect with a single SATA port for each drive, so there are 20 internal SATA ports to be connected. They also have redundant power ports, which I think is quite nice. The cheapest price I have found is $290 + $40 shipping. In Europe the cheapest unfortunately is 370 € ($500) + 40 € shipping... A nice alternative would be the RPC-4224, which has SFF-8087 mini-SAS connectors that bundle 4 SATA trays each, but it doesn't seem to be available anywhere in Europe (where I am).
    So here comes my problem: what mainboard/controller should I choose to connect them, for as cheap as possible while still having nice data rates? I have to say that the server is intended as a storage server with 1 Gbps connectivity, and the data transfer will be distributed very evenly across all drives. I also don't require any RAID functionality; this is all done at application level, I just need JBOD. So for example, if I go for the RPC-4020 model, I need to connect 20 storage + 1 OS + 1 CD-ROM SATA ports. I searched a bit and stumbled across this very low priced controller: http://www.intel.com/products/server/raid-controllers/SASWT4I/SASWT4I-overview.htm They sell it for 115 € here, and the specs say it can control up to 122 hard discs and has 4 mini-SAS connectors. So I would use 4 "mini-SAS 36-pin to 4x SATA 7-pin" cables to connect 4 SATA drives to each port, choose a mainboard that has 6 SATA ports on board (for example this one), and hurray, I can connect my 22 SATA devices for as low as about ~220 EUR (CPU, RAM, PSU, case not counted).
    Question: Would that work? And if not, why not?
    2nd question: If I go for the 4220 or 4224 model, I have internal mini-SAS connectors. Am I right in assuming that the backplane then acts as a "SAS expander"? And can I just plug these SAS connectors into any SAS port I can find on my controller/mainboard, or are there certain requirements? I know that SATA port multipliers only work with controllers that are ready for that, but isn't this expansion already implemented in the SAS standard? I am sorry that this is a very broad question, but I really spent the last week reading up and it still seems not so clear - especially all the controller hardware specifications!
    3rd question: A lot of hardware specs feature "internal channels" and "internal connectors". The connectors are the physical number of places where I can plug a cable in; I got that. But are the "internal channels" always the maximum number of physical drives that can be used in the end, or can I extend this further with expanders/fanouts?
    4th and last question: What do you think about the setup so far? Do you know any good alternatives? Maybe I am completely going the wrong way and some DAS would be way better? Are there any comparable chassis available in Europe? Please feel free to say whatever you think is relevant to the subject!

    Read the article
