Search Results

Search found 11812 results on 473 pages for 'word processing'.

  • How does a web server/the http protocol handle version control and compression?

    - by Sune Rasmussen
    When a client browser requests a file from the web server, some kind of check is performed, because the files needed to serve the web page may already be cached by the web browser. So if a file exists in the cache, no file is sent; but if the file on the server has changed since it was cached in the browser, the file is sent and updated anyhow. Then, if you have compression like gzipping enabled on the server, the files to be delivered to the client must be gzipped on the way, which requires some amount of server-side processing. But how is this managed? The logical approach, it seems to me, is that the web server should keep a cache of its own, containing the newest version of every file requested within a certain time span, including a compressed version of each, so that compression does not have to be redone each time a file is requested. Also, how are files actually requested? Does the browser ask for files one at a time, each time it encounters one in the HTML code that is not in the local cache, or does it collect all the files that are needed and ask for the whole bunch at once? But that's only guessing from a programming point of view; I don't really know. If the answers differ greatly among web server systems, I'm primarily interested in Apache, but other answers are appreciated too.
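
    For illustration, the freshness check is a conditional request; this is roughly what the exchange looks like on the wire (a sketch, header values made up):

        GET /style.css HTTP/1.1
        Host: example.com
        If-Modified-Since: Sat, 01 May 2010 10:00:00 GMT
        Accept-Encoding: gzip, deflate

        HTTP/1.1 304 Not Modified

    If the file had changed, the server would instead answer 200 with Content-Encoding: gzip and the compressed body. In Apache, mod_deflate compresses each response on the fly, but mod_cache/mod_disk_cache can store the compressed result, so compression need not be redone for every request.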

    Read the article

  • Moving from WDS to MDT + WDS - Prestaged Computer Name

    - by MSCF
    We previously used just WDS to deploy our images. WDS was set up to request approval for new machines; we used the "Name and Approve" option to name the machines as we added them. If a machine was pre-existing, it would just use the existing computer name from AD. Then in our unattend.xml file we had Computername=%MACHINENAME%, which picked up the name we gave it during approval and set the computer name accordingly. We are now implementing MDT to manage our images and drivers, but in testing we noticed it assigns random computer names. I went into the Unattend.xml for the deploy task sequence and added that value under Specialize > amd64_Microsoft-Windows-Shell-Setup_neutral: Computername=%MACHINENAME%. But when we try applying the image, it errors out at that point of the install. How can an MDT deployment be configured to leverage the pre-staged name? Some additional info. Error message during the imaging process:
        Windows could not parse or process the unattend answer file for pass [specialize]. The settings specified in the answer file cannot be applied. The error was detected while processing settings for component [Microsoft-Windows-Shell-Setup].
    setuperr.log:
        2014-07-22 14:02:13, Error [setup.exe] [Action Queue] : Unattend action failed with exit code 4
        2014-07-22 14:02:13, Error [setup.exe] Execution of unattend GCs failed; hr = 0x0; pResults->hrResult = 0x8030000b
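
    For comparison, MDT's own unattend templates wire the name up through a task-sequence variable rather than the WDS token -- a sketch of the relevant specialize fragment (attributes abbreviated; the substitution is done by MDT's ZTIConfigure step, while %MACHINENAME% is only expanded by WDS's own client-naming policy, which would explain the specialize-pass parse failure):

        <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" ...>
            <ComputerName>%OSDComputerName%</ComputerName>
        </component>

    OSDComputerName can then be set per machine via a CustomSettings.ini rule or the MDT database.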

    Read the article

  • Sending Adobe PDF attachments from Adobe Reader (in Outlook 2003) takes too long

    - by White Island
    I have a customer who is using Outlook 2003 (Microsoft Online Services) and Adobe Reader 9+. When they send a PDF from Adobe Reader to Outlook (via the "Send as attachment to e-mail" feature in Adobe), it freezes for 30 seconds to 5 minutes before the new e-mail pops up with the PDF attachment. I'm pretty sure the issue is on the Outlook side of things, as I've tried Adobe Reader 8 and Foxit Reader with the same results (Windows XP vs. 7 doesn't seem to make a difference, either). I tried Outlook in safe mode on the first (Win7) machine I was working on, and the e-mail attachment worked a lot faster; but when I tried to replicate the results on other machines, one wouldn't go into safe mode and the other didn't seem to show a difference. In an effort to fix the problem in Outlook normal mode, I tried disabling all add-ins: the COM add-in (Office Communicator is the only one), the reading pane, Word 2003 as e-mail editor... but none of these addressed the issue. Does anyone have any other ideas? I need to get this resolved as soon as possible, and it doesn't seem practical to make them run in safe mode. :P

    Read the article

  • PostgreSQL update problems with uuid

    - by kdl
    I am trying to run yum update on my CentOS 5.2 box and keep getting this message:
        Missing Dependency: libossp-uuid.so.15 is needed by package postgresql-contrib
    I ran yum update postgresql separately and now it's 8.3.8. I also downloaded uuid-1.6.2 and built it from source, but I still get the same result. yum update -d6 uuid gives me this at the end:
        --> Running transaction check
        ---> Package uuid.i386 0:1.6.1-3.el5.kb set to be updated
        Checking deps for uuid.i386 0-1.6.1-3.el5.kb - u
        Checking deps for uuid.i386 0-1.5.1-4.rhel5 - None
        postgresql-contrib requires: libossp-uuid.so.15
        --> Processing Dependency: libossp-uuid.so.15 for package: postgresql-contrib
        Needed Require is not a package name. Looking up: libossp-uuid.so.15
        Potential Provider: uuid.i386 0:1.5.1-4.rhel5
        Mode is u for provider of libossp-uuid.so.15: uuid.i386 0:1.5.1-4.rhel5
        Mode for pkg providing libossp-uuid.so.15: u
        Cannot find an update path for dep for: libossp-uuid.so.15
        Searching pkgSack for dep: libossp-uuid.so.15
        Potential match for libossp-uuid.so.15 from uuid - 1.5.1-4.rhel5.i386
        Matched uuid - 1.5.1-4.rhel5.i386 to require for libossp-uuid.so.15
        uuid - 1.5.1-4.rhel5.i386 is in providing packages but it is already installed, removing.
        --> Finished Dependency Resolution
        Dependency Process ending
        Error: Missing Dependency: libossp-uuid.so.15 is needed by package postgresql-contrib
    How can I resolve this situation? Thanks
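
    One way to untangle this, sketched from the log above (package versions taken from the output; verify locally before rolling anything back):

        # see which package actually supplies the library postgresql-contrib wants
        rpm -q --provides uuid
        yum provides libossp-uuid.so.15
        # if the third-party uuid-1.6.1 (el5.kb) only provides a newer soname,
        # roll back to the stock build that still provides libossp-uuid.so.15
        rpm -Uvh --oldpackage uuid-1.5.1-4.rhel5.i386.rpm

    Building uuid-1.6.2 from source does not help here, since RPM dependency resolution only sees what installed packages declare they provide.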

    Read the article

  • Recover files from corrupt filesystem

    - by Emile 81
    My situation: I have an older 80GB IDE internal HDD with a few files on it that I would very much like to recover: some Word documents, some LaTeX documents (text files), pictures (png, jpg, eps files), and some other text documents and Visual Studio project files. I had backed them up using svn (not the LaTeX ones, though), but have not committed lately, and would lose a lot of work if I can't recover them. The HDD seems to have lost its filesystem; I have no idea how it came about. I know it has/had 3 NTFS partitions, and I know the files I want are on the second or third partition. I read http://superuser.com/questions/81877/recover-hard-disk-data. Partition Find and Mount did not see all the partitions using intelligent scan. TestDisk does (I think); I followed the step-by-step instructions here, but when I try to list the files it says: "Can't open filesystem, filesystem seems damaged." I'm not sure how to proceed, as TestDisk's wiki does not contain this error message afaik. I don't know if the HDD is going to fail or some program has corrupted the filesystem; the drive doesn't make a sound, so I guess that's good. I would like some guidance so I don't accidentally cause more damage. (E.g., is it OK to let TestDisk write the filesystem to disk? I'm pretty sure the partitions are listed OK, but not 100%.)
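
    Before letting any tool write to the disk, the usual precaution is to work on an image instead of the original drive -- a minimal sketch (the device name /dev/sdb is an assumption; check yours first):

        # copy the failing disk to an image, with a log so the copy can resume
        ddrescue /dev/sdb disk.img rescue.log
        # then point TestDisk/PhotoRec at the image rather than the disk
        testdisk disk.img
        photorec disk.img

    That way an experiment that goes wrong only costs you a copy, and PhotoRec can still carve documents and pictures out of the image even if the NTFS structures are beyond repair.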

    Read the article

  • Outgrew MongoDB … now what?

    - by samsmith
    We dump debug and transaction logs into MongoDB. We really like MongoDB because of its blazing insert performance, its document-oriented model, and its ability to let the engine drop inserts when needed for performance. But there is one big problem with MongoDB: the index must fit in physical RAM. In practice, this limits us to 80-150GB of raw data (we currently run on a system with 16GB RAM). So for us to hold 500GB or a TB of data, we would need 50GB or 80GB of RAM. Yes, I know this is possible: we can add servers and use Mongo sharding, or buy a special server box that takes 100 or 200GB of RAM. But this is the tail wagging the dog! We could spend beaucoup $$$ on hardware to run FOSS, when SQL Server Express can handle WAY more data on WAY less hardware than Mongo (SQL Server does not meet our architectural desires, or we would use it!). We are not going to spend huge $ on hardware here, because it is necessary only because of the Mongo architecture, not because of the inherent processing/storage needs. (And sharding? Please! Cost aside, who needs the ongoing complexity of three, five, or more servers to manage a relatively small load?) Bottom line: MongoDB is FOSS, but we've got to spend $$$$$$$ on hardware to run it? We would sooner buy commercial software! I am sure we are not the first to hit this issue, so we ask the community: where do we go next? (We already run Mongo v2.) Thanks!
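
    Before sizing hardware, it helps to measure how big the indexes actually are -- a quick mongo-shell sketch (the collection name is hypothetical):

        > db.stats()                      // dataSize and indexSize for the whole db
        > db.translogs.totalIndexSize()   // index bytes for one collection

    If most of that is secondary indexes on write-once log data, dropping unneeded indexes, shortening indexed fields, or using capped/TTL collections can pull the working set back under RAM without new hardware.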

    Read the article

  • Need expert advice on *X display manager, window manager and compositing manager combination

    - by fakemustache
    Hello! I have fought with myself over whether or not I should ask this question, but I find myself stuck and I need another expert opinion. I can't seem to find the right combination of display and window manager (and compositing manager). I have tried so many different combinations, but most of them don't work for me. I have been working with Linux for a few years now, and currently I'm running Gentoo with GDM, Openbox (standalone, Gnome-aware) and xcompmgr. But I have also tried Metacity, Awesome and Fluxbox, with and without Compiz, but always with GDM. What I want: a lightweight, HIGHLY configurable environment that doesn't rely on mouse input too much (except for web browsing and image processing). 95% of the time I work with multiple consoles and desktops on multiple screens. What makes me ask is that most lightweight environments seem somewhat "unfinished" and show unexpected behavior quite often. And of course I want an environment that's not TOO ugly to look at, as I use it an average of 10 hours a day. :) Any thoughts? What do you use in a similar situation? Thanks for any advice! (Please excuse my English, as I'm from Germany; btw greetings from Berlin ;)))

    Read the article

  • Excel VBA: select every other cell in a row range to be copied and pasted vertically

    - by terry alexander
    I have a 2200+ page text file. It is delivered from a customer through a data exchange to us with asterisks to separate values and tildes (~) to denote the end of a row. The file is sent to me as a text file in Word. Most rows are split in two (one row covers a full line and part of a second line). I transfer segments (10-page chunks) of it at a time into Excel where, unfortunately, any zeroes that occur at the end of a row get discarded in the "text to columns" procedure. So I eyeball every "long" row to ensure that zeroes were not lost, and manually re-enter any that were. Here is a small bit of sample data:
        SDQ EA 92 1551 378 1601 151 1603 157 1604 83
    The "SDQ", "EA", and "92" are irrelevant (artifacts of data transmission). I want to use Excel VBA to select 1551, 1601, 1603, and 1604 (these are store numbers) so that I can copy those values and transpose-paste them vertically. I will then go back and copy 378, 151, 157, and 83 (sales values) so that I can transpose-paste them next to the store numbers. The next two rows of data contain the same store numbers but give the corresponding dollar values; I will only need to copy the dollar values so they can be transpose-pasted vertically next to the unit values (e.g. 378, 151, 157, and 83). Just being able to put my cursor on the first cell of interest in the row and run a macro to copy every other cell would speed up my work tremendously. I have tried using ActiveCell and Offset references to select a range to copy, but have not been successful. Does anyone have any suggestions for me? Thanks in advance for the help.
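
    As a starting point, a small VBA sketch that walks every other cell to the right of the active cell and writes the values vertically (the destination -- two rows below the active cell -- is an assumption; adjust to taste):

        Sub CopyEveryOtherCell()
            ' Run with the cursor on the first value of interest (e.g. 1551).
            ' Steps right in twos and writes each value downward, starting
            ' two rows below the active cell. Run it again on the first
            ' sales value (378) to pull the alternating series.
            Dim src As Range
            Dim n As Long
            Set src = ActiveCell
            n = 0
            Do While Len(CStr(src.Offset(0, n).Value)) > 0
                src.Offset(2 + n \ 2, 0).Value = src.Offset(0, n).Value
                n = n + 2
            Loop
        End Sub

    Writing values directly avoids the clipboard entirely, which is usually faster and less fragile than copy/transpose-paste.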

    Read the article

  • Unable to install Perl Crypt::OpenSSL::RSA module, please help

    - by Willy
    Hi everyone. I have spent several hours but am unable to install the CPAN Crypt::OpenSSL::RSA module. It's required for Postfix's dkimproxy add-on. What I do is run the following command in the shell:
        $ perl -MCPAN -e 'install Crypt::OpenSSL::RSA'
    When I run this command, several lines are displayed, ending with:
        Checking if your kit is complete...
        Looks good
        Warning: prerequisite Crypt::OpenSSL::Random 0 not found.
        Writing Makefile for Crypt::OpenSSL::RSA
        ---- Unsatisfied dependencies detected during [I/IR/IROBERTS/Crypt-OpenSSL-RSA-0.26.tar.gz] -----
        Crypt::OpenSSL::Random
        Shall I follow them and prepend them to the queue of modules we are processing right now? [yes]
    Then I hit enter (yes) and dozens of lines are generated, with errors. At the end I get this:
        ...
        RSA.xs:579: warning: implicit declaration of function ‘RSA_sign’
        RSA.xs:579: error: ‘rsaData’ has no member named ‘hashMode’
        RSA.xs:579: error: ‘rsaData’ has no member named ‘hashMode’
        RSA.xs:579: error: ‘rsaData’ has no member named ‘rsa’
        RSA.xs: In function ‘XS_Crypt__OpenSSL__RSA_verify’:
        RSA.xs:605: error: ‘rsaData’ has no member named ‘rsa’
        RSA.xs:610: error: ‘rsaData’ has no member named ‘hashMode’
        RSA.xs:611: warning: implicit declaration of function ‘RSA_verify’
        RSA.xs:611: error: ‘rsaData’ has no member named ‘hashMode’
        RSA.xs:613: error: ‘rsaData’ has no member named ‘hashMode’
        RSA.xs:616: error: ‘rsaData’ has no member named ‘rsa’
        RSA.xs:619: warning: implicit declaration of function ‘ERR_peek_error’
        RSA.xs: In function ‘boot_Crypt__OpenSSL__RSA’:
        RSA.xs:214: warning: implicit declaration of function ‘ERR_load_crypto_strings’
        make: *** [RSA.o] Error 1
        /usr/bin/make -- NOT OK
        Running make test
        Can't test without successful make
        Running make install
        make had returned bad status, install seems impossible
    What am I doing wrong? Please guide me. Thanks.
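
    The "implicit declaration of function 'RSA_sign'" warnings suggest the OpenSSL development headers are missing or mismatched, which is a common cause of this kind of XS build failure. A sketch of the usual fix on a Red Hat-style system (package manager assumed; adjust for your distribution):

        # headers first, then the missing prerequisite, then the module itself
        yum install openssl-devel
        perl -MCPAN -e 'install Crypt::OpenSSL::Random'
        perl -MCPAN -e 'install Crypt::OpenSSL::RSA'

    Installing Crypt::OpenSSL::Random explicitly first also avoids the dependency-follow prompt mid-build.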

    Read the article

  • Perl TDS character sets

    - by skiphoppy
    I'm using the FreeTDS driver with DBD::Sybase, connecting to an MS SQL Server. When I query certain values of certain records, I get this error:
        DBD::Sybase::st fetchrow_arrayref failed: OpenClient message: LAYER = (0) ORIGIN = (0) SEVERITY = (9) NUMBER = (99)
        Server , database
        Message String: WARNING! Some character(s) could not be converted into client's character set. Unconverted bytes were changed to question marks ('?').
    This seems to happen for records that contain special Windows character-set characters, such as curly quotes, copied and pasted from people's Outlook and Word messages. Unfortunately, I do not have any control over this database; sanitizing the input on the way in is obviously the way to go, but is not available to me. What FreeTDS settings do I need to change to be able to successfully query these records? Additional information: the query works fine from tsql; I only get this error through Perl's DBD::Sybase interface. (Should I test through something else? I don't have the expertise yet to install PHP or Python. I've got jTDS and can use it, but I think that's a completely different implementation, not an interface to FreeTDS.) Adding client charset = UTF-8 to my freetds.conf file results in "Out of memory!" printed to STDERR.
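
    The "Out of memory!" on UTF-8 often points at the TDS protocol version being negotiated too low to carry Unicode, rather than at the charset itself. A freetds.conf sketch (server entry name and host are hypothetical):

        [mssql-server]
            host = sqlserver.example.com
            port = 1433
            tds version = 8.0
            client charset = UTF-8

    TDS 4.2, the conservative default in some builds, predates SQL Server's UCS-2 types; bumping the protocol version and then re-adding the client charset is worth trying before anything else.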

    Read the article

  • Weird mouse/keyboard freezups when using PowerPoint 2007 with IBM/Lenovo docking station

    - by DanM
    I'm not sure what part of my system is responsible for this, but when using PowerPoint, I have problems when trying to resize drawing objects. I'll be dragging the handle and suddenly the object will deselect, and whatever is behind the object will select and start moving around. Next thing I know, the keyboard won't type anymore, and the only way to fix it is to unplug the USB and plug it back in. In case it's hardware-related: I'm using an IBM ThinkPad T60p in a docking station, my keyboard is a Microsoft Natural Keyboard Pro, and my OS is Windows XP SP3. I've never noticed this happening in anything besides PowerPoint, and I don't know anyone else who has this problem (even people with similar setups). Any ideas what it could be? Edit: It looks like I only get the problem if I plug the mouse into my docking station's USB. If I plug directly into the laptop's USB, everything works fine. And again, this problem occurs only in PowerPoint; I tried playing with some drawing objects in Word and had no issue no matter where my mouse was plugged in. I should also mention I tried a different mouse (a standard Microsoft corded mouse instead of my Logitech trackball), but that made no difference, so I don't think it's anything specific to the trackball or the trackball's driver. I tried searching Google but came up empty, so I'm guessing this problem is something unique to my setup. If you have any thoughts or ideas to try, I'd love to hear them.

    Read the article

  • Apache reaching MaxClients and locking the server

    - by Rodrigo Sieiro
    I currently have an Apache2 server running with mpm-prefork and mod_php on an OpenVZ VPS with 512M real / 1024M burstable RAM (no swap). After running some tests, I found that the maximum process size Apache reaches is 23M, so I've set MaxClients to 25 (23M x 25 = 575MB, OK for me). I decided to run some load tests on my server, and the results left me puzzled. I'm using ab on my desktop machine, requesting the main page from a WordPress blog. When I run ab with 24 concurrent connections, everything seems fine: sure, CPU goes up and free RAM goes down, but the result is about 2-3s response time per request. But if I run ab with 25 concurrent connections (my server limit), Apache just hangs after a couple of seconds. It starts processing the requests, then it stops responding, CPU goes back to 100% idle, and ab times out. The Apache log says it reached MaxClients. When this happens, Apache keeps itself locked up with 25 running processes (they're all in "W" if I check server-status), and only after the TimeOut setting (in my case 45) do the processes start to die and the server starts responding again. My question: is that expected behaviour? Why does Apache just die when it reaches MaxClients? If it works with 24 connections, shouldn't it work with 25, just taking maybe more time to respond to each request and queueing up the rest? It sounds kind of strange to me that any kid running ab can single-handedly kill a webserver just by setting the concurrent connections to the server's MaxClients.
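
    If the workers really are being held rather than queueing, the knobs that govern that behaviour live in the prefork and keep-alive settings; a sketch with illustrative values, not a diagnosis:

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            ServerLimit          25
            MaxClients           25
            MaxRequestsPerChild 500
        </IfModule>
        KeepAlive          On
        KeepAliveTimeout   2     # the default 15s can pin a worker long after the reply
        TimeOut            45

    Requests beyond MaxClients normally wait in the kernel listen backlog, so a short KeepAliveTimeout and a MaxRequestsPerChild that recycles leaky PHP workers are the usual first things to check when all slots wedge in "W".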

    Read the article

  • data is not posted in $_POST variable using AJAX [migrated]

    - by Oliver
    I'm having a problem in one of my scripts. The server is running PHP, and I'm using AJAX to post data. Here is my script. PHP script:
        <?php
        // (top of the script truncated in this listing; $result is assumed
        //  to come from an earlier mysql_query call)
        if ($result) {
            if (mysql_num_rows($result) > 0) {
                echo "Search Result :";
                for ($x = 0; $x < mysql_num_rows($result); $x++) {
                    echo "Project Name:   " . mysql_result($result, $x, "projname") . "";
                    echo "APMS ID:   " . mysql_result($result, $x, "apmsid") . "";
                    echo "Prefix/es:   " . mysql_result($result, $x, "projprefix") . "";
                    echo "Usage Type:   " . mysql_result($result, $x, "usagetype") . "";
                    echo "Rate:   " . mysql_result($result, $x, "projrate") . "";
                    echo "Offer Details:   " . mysql_result($result, $x, "offerdetails") . "";
                }
            } else {
                echo "No results found ...";
            }
        } else {
            echo "Problems encountered while processing the data ...";
        }
        ?>
    JS script:
        function QueryPrefix() {
            var xmlhttp;
            var pStr = document.getElementById('Editbox2');
            var htmlHolder = document.getElementById('Html1');
            var butStr = document.getElementById('Button1');
            if (pStr.value.length == 0) {
                alert("Please enter a value on the box provided!");
                return;
            }
            pStr.value = "";
            if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
                xmlhttp = new XMLHttpRequest();
            } else { // code for IE6, IE5
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4) {
                    htmlHolder.innerHTML = xmlhttp.responseText;
                    butStr.disabled = false;
                }
            }
            butStr.disabled = true;
            xmlhttp.open("POST", "searchutype.php", false);
            xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
            xmlhttp.send("pStr=" + pStr.value);
        }
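
    Incidentally, one likely culprit is visible in the script itself: the input box is cleared (pStr.value = "") before its value is read for the POST body, so an empty string reaches the server. A fix sketch (unchanged parts elided):

        function QueryPrefix() {
            var pStr = document.getElementById('Editbox2');
            var query = pStr.value;          // capture the value first
            if (query.length == 0) {
                alert("Please enter a value on the box provided!");
                return;
            }
            pStr.value = "";                 // now it is safe to clear the box
            // ... XMLHttpRequest setup as before ...
            xmlhttp.send("pStr=" + encodeURIComponent(query));
        }

    encodeURIComponent also keeps characters like & and + in the search term from corrupting the urlencoded body.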

    Read the article

  • Switch text paragraphs in OpenOffice.org Writer mail merge

    - by Glen S. Dalton
    I am using mail merge to write the same letter, with minor differences, to many people. I found that switching text paragraphs depending on database values was not easy; I ended up putting huge text paragraphs into the database because switching did not really work for me. Actually, I don't understand how Writer does it, and maybe the boolean evaluation is buggy? There is some possibility of making paragraphs invisible depending on database fields, but it was frustrating: after marking a paragraph as invisible (depending on a condition) it went invisible in the main document and did not come back; I lost the content. An example, in pseudocode, of what I want in my mail merge document:
        {if [[balance]] > 10}
        We owe you money. Please can you send your bank details.
        {end if}
        {if [[balance]] < -10}
        Please transfer the remaining amount to our bank account 123...
        {end if}
    Maybe this could be done with macros? But how do I combine macros with mail merge? Can you tell me what the pitfalls are and how to master them? I once did this with MS Word, and it was a lot easier. The normal mail merge (including database fields in the letters) works fine for me in OpenOffice Writer.
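
    For the record, Writer's built-in mechanism for this is the hidden-paragraph field (Insert > Fields > Other > Functions > Hidden Paragraph). The condition names the registered data source directly, and states when to *hide*, so the pseudocode above inverts -- a sketch with hypothetical database/table names:

        Paragraph: We owe you money. Please can you send your bank details.
        Condition: Letters.Customers.balance <= 10

        Paragraph: Please transfer the remaining amount to our bank account 123...
        Condition: Letters.Customers.balance >= -10

    The paragraphs only vanish in print/merge output (or when View > Hidden Paragraphs is unticked), so the content is not lost from the source document; that view setting may be what made text seem to disappear for good.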

    Read the article

  • Clicking a link in IE6 doesn't load page (internal DNS entry on our intranet)

    - by Callum
    I have a very strange problem that only affects some versions of IE6. The problem does affect IE 6.0.2900.5512, but does not seem to affect 6.0.3790.3959. Basically, I work for a company and we have an intranet. While I'm not an expert on "internal DNS pointers", what I was able to do was create a website (let's say about football), such that when an employee sitting behind the company firewall types the word "football" into the address bar of their web browser, they get redirected to a particular server. I am told this is some kind of "internally pointing DNS entry". So, I've set one of these up, and I have placed a link to it on our company intranet page. However, when the link is clicked in IE 6.0.2900.5512, the page goes blank; clicking "refresh" then loads the correct page (the one specified in the link). Can anyone help me out here? I have tried changing the way the URL is formed, everything from //football to http://football/ etc. The link works fine in every other browser and in IE7+, but unfortunately IE6 is still the most common browser in use at my organisation.

    Read the article

  • CENTOS 6 - How to install php-mysql when php-common @remi is present?

    - by Multitut
    I am having trouble adding MySQL support to my PHP installation, which was made using a ready-to-use package that came with our VPS. This is my php info: http://snake.quetzalcoatech.com/info.php. I am trying to install php-mysql using:
        yum install php-mysql
    and get this output:
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: mirrors.serveraxis.net
         * extras: mirror.fdcservers.net
         * updates: bay.uchicago.edu
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package php-mysql.x86_64 0:5.3.3-14.el6_3 will be installed
        --> Processing Dependency: php-common = 5.3.3-14.el6_3 for package: php-mysql-5.3.3-14.el6_3.x86_64
        --> Finished Dependency Resolution
        Error: Package: php-mysql-5.3.3-14.el6_3.x86_64 (updates)
               Requires: php-common = 5.3.3-14.el6_3
               Installed: php-common-5.3.17-2.el6.remi.x86_64 (@remi)
                   php-common = 5.3.17-2.el6.remi
               Available: php-common-5.3.3-3.el6_2.8.x86_64 (base)
                   php-common = 5.3.3-3.el6_2.8
               Available: php-common-5.3.3-14.el6_3.x86_64 (updates)
                   php-common = 5.3.3-14.el6_3
         You could try using --skip-broken to work around the problem
         You could try running: rpm -Va --nofiles --nodigest
    I am a noob at Linux, so could you tell me which command I should use to install a compatible php-mysql module? Thank you so much!
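
    Since the installed php-common already comes from the remi repository (the "@remi" in the output), the matching php-mysql has to come from remi too, not from the stock updates repo -- a sketch:

        yum --enablerepo=remi install php-mysql

    If versions still clash, pulling the whole PHP stack up to a consistent remi set (yum --enablerepo=remi update php\*) is the usual follow-up.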

    Read the article

  • Hosting a JavaScript API file for third-party sites the way ShareThis, UserVoice, Analytics do it.

    - by Dayson
    I'm preparing to launch a service soon which will provide third-party websites a widget. The widget requires my JavaScript file in the website's code, exactly the same way services like Analytics, UserVoice, ShareThis, GetClicky, etc. provide you with a JavaScript snippet to add to your page. Therefore, my JavaScript file is going to be hotlinked by tons of websites, which possibly receive a lot of requests too. I need advice/opinions on the following aspects:
    1. What's the right location for hosting this file? Should I use a sub-domain for it? I was thinking of something like http://api.myservice.com/js/foo.js. Remember, once websites start embedding this file, its location CANNOT change under any circumstances.
    2. Right now we can afford just one dedicated server, so I have minified my file, enabled gzip, and plan to use some good cache-control headers through Apache. Also, in the near future when the requests pick up, I will use an HTTP proxy like Varnish. Is this a good plan for the near future?
    3. Should I be considering a CDN in the future (since we can't afford it now)? If so, how do I make sure we're prepared to migrate to it without breaking services? Pros/cons of moving just this file to a CDN? Also, since it's just one JavaScript file (50KB), is there any affordable CDN we could consider from the beginning?
    Any other words of advice I could use? Anything I shouldn't overlook at this stage that I would regret later? (Both in terms of server and JavaScript/AJAX limitations.) Thanks in advance.
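
    On point 2, a minimal Apache sketch for long-lived, compressed delivery of the file (paths and lifetimes illustrative; requires mod_headers and mod_deflate):

        <Files "foo.js">
            Header set Cache-Control "public, max-age=86400"
        </Files>
        AddOutputFilterByType DEFLATE application/x-javascript

    A long max-age behind a never-changing URL is also what makes a later CDN move painless: the CDN simply pulls from the same origin path, and embedding sites never notice.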

    Read the article

  • Apache2 refuses to process php files - "Snow Leopard" OSX 10.6.4

    - by w-01
    I have a MacBook Pro i5. My understanding is that by default it should be able to serve PHP5. I have uncommented the relevant line in /etc/apache2/httpd.conf:
        LoadModule php5_module libexec/apache2/libphp5.so
    I have restarted Apache with sudo apachectl -k restart, and when I try to access a file with a .php extension, Apache prompts me to download the file, i.e. instead of processing the PHP and sending me HTML, it thinks I want to download the file. When I look in the Apache error log I see this:
        [Fri Nov 12 10:16:14 2010] [notice] Apache/2.2.14 (Unix) PHP/5.3.2 mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_wsgi/3.2 Python/2.6.1 configured -- resuming normal operations
    so it looks like PHP5 is loading properly. I'd like to know either: how do I fix this? Or: how do I reinstall Apache2 so that it's like I just installed the OS? Thanks in advance.
    Update: @Zayne - the end of my httpd.conf has
        Include /private/etc/apache2/other/*.conf
    and I have a file /etc/apache2/other/php.conf with the contents:
        <IfModule php5_module>
            AddType application/x-httpd-php .php
            AddType application/x-httpd-php-source .phps
            <IfModule dir_module>
                DirectoryIndex index.html index.php
            </IfModule>
        </IfModule>
    @Zayne I've already copied php.ini.default to php.ini in the same folder. When I run sudo apachectl configtest I get:
        /usr/sbin/apachectl: line 82: ulimit: open files: cannot modify limit: Invalid argument
        httpd: Could not reliably determine the server's fully qualified domain name, using ::1 for ServerName
        Syntax OK
    Furthermore, I decided to try apachectl -M, which shows all loaded modules. Most importantly, the list of loaded modules includes:
        php5_module (shared)
    Since the module is being loaded, it seems the issue has more to do with making Apache use the PHP engine to process the .php files... so something wrong with the IfModule directive?

    Read the article

  • Windows DFS - file locking & replication?

    - by Adam Salkin
    I'm in a small company that has offices on the east and west coasts of America, and also various people working from their homes. There are Windows servers already in the offices. I think that Microsoft Windows DFS will do what I want, but despite reading the web site, I'm really not sure, so I'm hoping that someone can confirm whether it will do all of the following (for various personnel/political reasons I know that a proposal for a Microsoft Windows system has more chance of being accepted than any *nix system):
    1. Creation of a folder such that any files in this folder will automatically be available on the servers in all the offices.
    2. When anyone opens one of these shared files on any of the servers, the copies on all the servers are automatically locked; and when they close the file, the updates automatically get copied to the file on all the servers.
    3. VPN access to these folders for people working outside the offices.
    Bandwidth at the main offices varies from 6Mb/s to 20Mb/s. Files are Excel / Word / AutoCAD, ranging in size from 100KB to 4MB. Thank you.

    Read the article

  • Server high memory usage at same time every day

    - by Sam Parmenter
    Right, we moved one of our main sites onto a new AWS box with plenty of grunt, as it would allow us more control than we had before and future-proof ourselves. About a month ago we started running into issues with high memory usage at the same time every day. In the morning, an export is run to export data to a file, which is then FTPed to a local machine for processing. The issues were coinciding with the rough time of the export, but on a day when we didn't run the export, the server still ran into the same issues. The export has since been run at other times of the day while monitoring memory usage, to see if it spikes. The conclusion is that the export is fine and barely touches the sides memory-wise; no noticeable change in memory usage. When the issue happens, its effect is to kill MySQL and require us to restart the process. We think it might be a MySQL memory issue, but it might just be that MySQL is the first to feel it. Looking at the logs, there is no particular query run before the memory usage hits 90%. When it strikes, at about 9:20am, memory usage spikes from a near-constant 25% to 98% and very quickly kills MySQL to save itself. It usually takes about 3-4 minutes to die. There are no cron jobs running at that time of day, and we haven't noticed a spike in traffic over the period of the issues. Any help would be massively appreciated! Thanks.
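
    To catch the culprit in the act, a cron sketch that snapshots the top memory consumers every minute through the 9 o'clock hour (log path arbitrary; note the escaped % -- cron treats a bare % as a newline):

        # /etc/cron.d/memwatch
        * 9 * * * root ( date; free -m; ps aux --sort=-\%mem | head -15 ) >> /var/log/memwatch.log 2>&1

    Comparing the snapshots just before and during the spike usually names the process outright, and shows whether MySQL is the consumer or merely the OOM killer's largest target.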

    Read the article

  • Using multiple computers effectively

    - by Benjamin Oakes
    I have some extra (old) Macs and PCs around the house, and a MacBook that's sometimes overworked. I'm looking for tips on using multiple computers effectively; basically, I'd like to add to the following list. Here's what I'm using so far:
    - Teleport: lets you use a single mouse and keyboard to control several Macs, like Synergy
    - Built-in file sharing: lets me run programs on another Mac, but only maintain one copy of the data
    - Bazaar: distributed version control
    - Mail.app, Thunderbird, etc.: IMAP for my mail accounts
    - TuneConnect: control iTunes on another Mac with a nice interface, using the library on my MacBook (if I choose it by pressing Option at startup) over file sharing
    - OmniFocus: syncs across computers pretty seamlessly
    - Web browsing across computers
    - VNC/Remote Desktop
    - Running X-windows programs using ssh -Y hostname for headless operation (but they die when I sleep the connecting computer -- something like GNU screen would be ideal, as sketched below)
    - Plain-old ssh with GNU screen
    Really, a better idea of what I do might be necessary. Generally, though, I'd like to distribute tasks across more than one computer when possible, but without much overhead in doing so. The perfect solution? An Xgrid-like program that pushes processing across multiple computers automatically and seamlessly (although that seems unlikely). Here's what I have, in case it makes a difference:
    - MacBook (Dual 2.16 GHz, OS X 10.6.3)
    - eMac (1.25 GHz, OS X 10.4.11, soon to be 10.5)
    - Dell Dimension (800 MHz, some version of Ubuntu) -- no dedicated monitor
    - PowerMac G3 (400 MHz, OS X 10.4.11) -- no dedicated monitor
    - iMac G3 DV (400 MHz, OS X 10.4.11) -- currently in the kitchen for recipes, email, web browsing, music, movies (DVDs), etc.
    (Total, they cost me around $650, mostly for the MacBook. Freecycle is wonderful, just in case you haven't heard of it.)
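
    On the GNU screen wish mentioned above: it does exactly the survive-a-sleeping-laptop part for terminal work -- a minimal sketch (hostname assumed):

        ssh emac.local
        screen -S xjobs        # start a named session, launch the long-running job
        # detach with Ctrl-a d; the job keeps running on the remote machine
        screen -r xjobs        # reattach later, even after the MacBook slept

    This only covers terminal programs, though; X11 windows forwarded over ssh -Y still die with the connection.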

    Read the article

  • MacPorts error installing gsoap

    - by Kevin
    Hi all. I have installed MacPorts 1.8.1 with no worries. I ran sudo port -v selfupdate with no worries. Then I ran:
        sudo port install gsoap
    and get the following error message:
        --->  Computing dependencies for gsoap
        --->  Fetching gsoap
        --->  Attempting to fetch gsoap_2.7.13.tar.gz from http://optusnet.dl.sourceforge.net/gsoap2
        --->  Verifying checksum(s) for gsoap
        --->  Extracting gsoap
        --->  Applying patches to gsoap
        --->  Configuring gsoap
        Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_gsoap/work/gsoap-2.7" && ./configure --prefix=/opt/local --enable-samples " returned error 77
        Command output:
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for gawk... no
        checking for mawk... no
        checking for nawk... no
        checking for awk... awk
        checking whether make sets $(MAKE)... no
        checking build system type... i386-apple-darwin10.2.0
        checking host system type... i386-apple-darwin10.2.0
        checking whether make sets $(MAKE)... (cached) no
        checking for C++ compiler default output file name...
        configure: error: C++ compiler cannot create executables
        See `config.log' for more details.
        Error: Status 1 encountered during processing.
    Any ideas as to why it is failing? Regards, Kevin
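
    "C++ compiler cannot create executables" usually means the build toolchain itself is broken (typically Xcode missing or stale after an OS upgrade) rather than anything gsoap-specific. A quick sanity-check sketch:

        g++ --version            # should print Apple's gcc; if not, (re)install Xcode
        echo 'int main(){return 0;}' > t.cpp && g++ t.cpp -o t && ./t && echo OK
        # the exact failing command is recorded in config.log under the
        # gsoap work directory named in the error above

    The "checking whether make sets $(MAKE)... no" lines in the output point the same way: on Snow Leopard, reinstalling the Xcode developer tools for 10.6 is the usual fix.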

    Read the article

  • F5 Networks iRule/Tcl - Escaping UNICODE 6-character escape sequences so they are processed as and r

    - by openid.malcolmgin.com
    We are trying to get an F5 BIG-IP LTM iRule working properly with SharePoint 2007 in an SSL-termination role. This architecture offloads all of the SSL processing to the F5, and the F5 forwards interactive requests/responses to the SharePoint front-end servers via HTTP only (over a secure network). For the purposes of this discussion, iRules are parsed by a Tcl interpretation engine on the F5 Networks BIG-IP device. As such, the F5 does two things to traffic passing through it:
    1. Redirects any request to port 80 (HTTP) to port 443 (HTTPS) through HTTP 302 redirects and URL rewriting.
    2. Rewrites any response to the browser, selectively rewriting URLs embedded within the HTML so that they go to port 443 (HTTPS). This prevents the 302 redirects from breaking DHTML generated by SharePoint.
    We've got part 1 working fine. The main problem with part 2 is that in the response rewrite, because of XML namespaces and other similar issues, not ALL matches for "http:" can be changed to "https:" -- some have to remain "http:". Additionally, some of the "http:" URLs are difficult in that they live in SharePoint-generated JavaScript and their slashes (i.e. "/") are actually represented in the HTML by the UNICODE 6-character string "\u002f". For example, in the case of these tricky ones, the literal string in the outgoing HTML is:
        http:\u002f\u002fservername.company.com\u002f
    and should be changed to:
        https:\u002f\u002fservername.company.com\u002f
    Currently we can't even figure out how to get a match in a search/replace expression on these UNICODE sequence string literals. It seems that no matter how we slice it, the Tcl interpreter is translating the "\u002f" string into "/" before it does anything else. We've tried various combinations of the Tcl escaping methods we know about (mainly double quotes and using an extra "\" to escape the "\" in the UNICODE string), but are looking for more methods, preferably ones that work. Does anyone have any ideas, or any pointers to where we can effectively self-educate about this? Thanks very much in advance.
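
    On the escaping itself: brace-quoting disables backslash substitution in Tcl, so inside braces \u002f stays the literal six characters instead of becoming "/" (inside double quotes, \\u002f achieves the same). An iRule-flavoured sketch (event wiring assumed -- HTTP_RESPONSE_DATA requires an HTTP::collect in HTTP_RESPONSE):

        when HTTP_RESPONSE_DATA {
            # inside braces the parser does NOT substitute \u002f, so these
            # keys match the literal escape sequences in the page source
            set fixed [string map {
                {http:\u002f\u002f} {https:\u002f\u002f}
            } [HTTP::payload]]
            HTTP::payload replace 0 [HTTP::payload length] $fixed
        }

    Because string map takes literal keys, it also sidesteps the separate problem of regexp metacharacters in the pattern.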

    Read the article
