Search Results

Search found 6776 results on 272 pages for 'weblogic jee upgrade'.


  • Setting up a Web server so it is easy to migrate

    - by Nyxynyx
    Hi, I am about to move my site from a VPS to another host's dedicated server. One of my concerns is scaling the site in the future, which will involve another change of server. Since I am starting the dedicated server from scratch with only the OS, I need to install the whole web server stack: Apache and its mods, PHP, MySQL, PostgreSQL, Tomcat, Solr, and a few other pieces of software such as ImageMagick and git.

    Question: is there a way to set up this new dedicated server such that I can easily migrate the entire site, both the technology stack and the code, to a newer server later (an upgrade from this new dedicated server) without reinstalling and reconfiguring everything? The code for the website is handled by git and GitHub, so that's not a problem; I'm more concerned about the rest of the software required.

    Side question: the current VPS runs CentOS with cPanel, and it seems that many packages on yum are outdated and that cPanel interferes with the installation of many packages. Which OS should I go with for the new server? Ubuntu?
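
    One common answer to this kind of question is to script the build, so any replacement box can be rebuilt from version control instead of by hand. A minimal sketch, assuming an Ubuntu target; the package names are assumptions for a release of that era, and the etc-overrides path is purely illustrative:

      #!/bin/bash
      # provision.sh - hypothetical sketch: rebuild the stack on a fresh box
      set -e
      apt-get update
      # package names would need checking against the chosen release
      apt-get install -y apache2 libapache2-mod-php5 php5 php5-mysql php5-pgsql \
          mysql-server postgresql tomcat6 imagemagick git-core
      # config files tracked in the same repository as this script
      cp -r ./etc-overrides/* /etc/

    Kept in git next to the site code, migrating then reduces to cloning the repository, running the script, and restoring the data dumps.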

    Read the article

  • LVS / IPVS difference in ActiveConn since upgrading

    - by Hans
    I've recently migrated from an old version of LVS / ldirectord (Ultra Monkey) to a new Debian install with ldirectord. Now the number of active connections is usually higher than the number of inactive connections; it used to be the other way around. On the old load balancer the connections looked something like:

      -> RemoteAddress:Port    Forward  Weight  ActiveConn  InActConn
      -> 10.84.32.21:0         Masq     1       12          252
      -> 10.84.32.22:0         Masq     1       18          368

    However, since migrating to the new load balancer it looks more like:

      -> RemoteAddress:Port    Forward  Weight  ActiveConn  InActConn
      -> 10.84.32.21:0         Masq     1       313         141
      -> 10.84.32.22:0         Masq     1       276         183

    Old load balancer: Debian 3.1, ipvsadm 1.24, ldirectord 1.2.3. New load balancer: Debian 6.0.5, ipvsadm 1.25, ldirectord 1.0.3 (I guess the versioning scheme changed). Is it because the old load balancer was running a kernel from 2005 and ldirectord from 2004, and things have simply changed in the past 7-8 years? Did I miss some sysctl settings that I should be enforcing for it to behave in the same way? Everything appears to be working fine, but can anyone see an issue with this behaviour? Thanks in advance!

    Additional info: I'm using LVS in masquerading mode, and the real servers have the load balancer as their gateway. The real servers are running Apache, which hasn't changed during the upgrade. The boxes themselves show roughly the same number of inactive connections as ipvsadm does.
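
    One comparison worth making before digging deeper: the LVS connection-table timeouts on both balancers, since a longer TCP timeout keeps entries counted in ActiveConn for longer. A sketch using standard ipvsadm options; the values shown are the usual defaults, not ones taken from these systems:

      # show current LVS connection timeouts (tcp tcpfin udp)
      ipvsadm -L --timeout
      # if the old box used different values, they can be restored, e.g.:
      ipvsadm --set 900 120 300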

    Read the article

  • Servers - Buying New vs Buying Second-hand

    - by Django Reinhardt
    We're currently in the process of adding additional servers to our website. We have a pretty simple topology planned: a firewall/router server in front of a web application server and a database server. Here's a simple (and technically incorrect) diagram that I used in a previous question to illustrate what I mean. We're now wondering about the specs of our two new machines (the web app and firewall servers) and whether we can get away with buying a couple of old servers. (Note: both machines will be running Windows Server 2008 R2.) We're not too concerned about our firewall/router server, as we're pretty sure it won't be taxed too heavily, but we are interested in our web app server.

    I realise that answering this type of question is really difficult without a ton of specifics on users, bandwidth, concurrent sessions, etc., so I just want to focus on the general wisdom of buying old versus new. I had originally specced a new Dell PowerEdge R300 (1U rack) for our company. In short, because we're going to be caching as much data as possible, I focussed on processor speed and memory:

      Quad-Core Intel Xeon X3323 2.5GHz (2x3MB Cache), 1333MHz FSB
      16GB DDR2 667MHz

    But when I was looking for a cheap second-hand machine for our firewall/router, I came across several machines that made our engineer ask a very reasonable question: if we stuck a boatload of RAM in this thing, wouldn't it do for the web app server and save us a ton of money in the process? For example, what about a second-hand machine with the following specs?

      2x Dual-Core AMD Opteron 2218 2.6GHz (2MB Cache), 1000MHz HT
      16GB DDR2 667MHz

    Would it really be comparable with the more expensive (new) server above? Our engineer postulated that the reason companies upgrade their servers to newer processors is often that they want to reduce their power costs, and that a 2.6GHz processor is still a 2.6GHz processor, no matter when it was made. Benchmarks on various sites don't really support this theory, but I was wondering what server admins thought. Thanks for any advice.

    Read the article

  • RAM configuration: 4x4 or 2x8?

    - by Carl B
    I am looking to upgrade my RAM to 16 gigs, and I am wondering if there is any distinct advantage to the way I do it: a 4x4 or a 2x8 setup. In all my searching there have been a number of pros for each profile, but I can find no benchmark results comparing the two setups. So, if there are two profiles of the same speed, same voltage, same timings and same CAS latency, which would perform better or have the better overall benefit? A few examples from my search:

    - A 4x4 set has the benefit that if one stick fails, you only lose 25% of your RAM, versus 50% in a 2x8 setup.
    - A 2x8 setup puts less strain on the memory controller and motherboard.
    - A 2x8 setup generates less heat.
    - A 2x8 setup is easier to overclock (not part of my need, but a lot of the comparisons circled around the ease of overclocking a two-stick setup).

    There is one outstanding benefit that I have found, at least at the company I have looked at, and that is price: the 2x8 is nearly half the cost. My motherboard supports a max of 16GB and I have a 64-bit OS. Has anyone seen any performance comparisons, or is 16GB just 16GB no matter how you slice it? And is there any merit to the above pros?

    Edit: as per the mobo specs - Main Memory: "Supports four unbuffered DIMM of 1.5 Volt DDR3 800/1066/1333/1600*/1800*/2133* (OC) DRAM, 16GB Max"

    Read the article

  • Apache memory allocation error message

    - by la_f0ka
    I'm trying to set up a medium-sized Drupal 7 website on my miniserver, but I keep getting a 500 error message. This is what I found in Apache's error log:

      [Wed Sep 12 15:02:04 2012] [notice] SSL FIPS mode disabled
      [Wed Sep 12 15:02:04 2012] [warn] No JkShmFile defined in httpd.conf. Using default /usr/local/apache/logs/jk-runtime-status
      [Wed Sep 12 15:02:04 2012] [notice] Apache/2.2.22 (Unix) mod_ssl/2.2.22 OpenSSL/1.0.0-fips mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_jk/1.2.35 configured -- resuming normal operations
      [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] /usr/bin/php: error while loading shared libraries: libkrb5support.so.0: failed to map segment from shared object: Cannot allocate memory
      [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] Premature end of script headers: index.php
      [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] /usr/bin/php: error while loading shared libraries: libkrb5support.so.0: failed to map segment from shared object: Cannot allocate memory
      [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] Premature end of script headers: index.php
      [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] File does not exist: /home/brighton/public_html/favicon.ico
      [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] /usr/bin/php: error while loading shared libraries: libkrb5support.so.0: failed to map segment from shared object: Cannot allocate memory
      [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] Premature end of script headers: index.php

    I contacted support and they just told me I should upgrade my package (right now I have a 512MB account), but I am not sure I'm buying it... even if I access a file which only contains phpinfo(); I still get the 500. Any help would be much appreciated; if any other information is needed, please let me know and I'll update the question. I compiled Apache with Tomcat because I intend to use Solr - not sure if this is relevant or not.
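
    The "Cannot allocate memory" while mapping shared libraries usually means the PHP process hit a memory cap rather than a PHP bug. If the miniserver is an OpenVZ/Virtuozzo container (an assumption; a 512MB plan often is), two quick checks from a shell, using standard tools:

      # overall memory picture at the moment PHP fails to start
      free -m
      # on OpenVZ/Virtuozzo containers, a non-zero failcnt column means a limit was hit
      cat /proc/user_beancounters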

    Read the article

  • Moving files and directories between two machines, via a third, preserving permissions and usernames

    - by Jarmund
    The situation is as follows:

    - Machine A has a file repository accessible via rsync.
    - Machine B needs the above-mentioned files with all permissions and ownerships intact (including groups etc.).
    - Machine C has access to both A and B, but has a completely different set of users.

    Normally I would just rsync everything over directly between A and B, but due to severely limited bandwidth at the moment I need something different, as rsync times out after building the list of the 430 files (49MB uncompressed; it can be compressed down to ~7MB). What I've tried so far: rsync everything over from A to C, tar it, copy the tarball over, and then untar it; however, this messes up the ownership and/or the permissions. To rsync from A to C, I run this command:

      rsync --numeric-ids --password-file=/root/rsync_pwd_file -oaPvu rsync://[email protected]/portal_2/ ./portal_2/

    From the looks of things, the files do end up on C with the correct ownerships/permissions/flags/everything (not 100% sure, though; are there any more switches I can throw in there? Did I miss something?). Copying the tarball over is simple enough (slow as a one-legged turtle due to the bandwidth, but it checksums out all right). What I'm unsure of is the flags and switches for creating and extracting the tarball, so could someone please provide the full commands for creating a tarball from /root/portal_2 on machine C (with everything intact) and extracting it into /var/ex/portal_2 on machine B? Also, are there any other approaches worth mentioning that would let me do this? I have root access to A and C, whereas I only have rsync access to B.

    PS: I'm running rsync v2.6.9 on machine B, and unfortunately I do not have the opportunity to upgrade to v3.
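
    For the tarball half of the question, the usual GNU tar invocation looks like the sketch below (assuming GNU tar, and a root shell wherever the extraction runs; --numeric-owner plays the same role as rsync's --numeric-ids, which matters because the machines do not share users):

      # on machine C: pack /root/portal_2, storing modes and numeric uid/gid
      tar -C /root/portal_2 --numeric-owner -czpf /root/portal_2.tar.gz .
      # on machine B, as root: unpack with ownership and permissions intact
      mkdir -p /var/ex/portal_2
      tar -C /var/ex/portal_2 --numeric-owner -xzpf /root/portal_2.tar.gz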

    Read the article

  • Updating ASUS BIOS on Windows 7 64bit

    - by joesavage
    I've recently decided that I want to try to utilize my two graphics cards so that I can have a dual-monitor setup. Unfortunately, Windows only seems to notice my most recent graphics card installation, and so I've been told that I should look into my BIOS and try to enable both graphics cards. I could not find this setting anywhere in my M2N68-AM Plus v0210 BIOS. After some further research I figured that I should perhaps upgrade my BIOS, so I searched and managed to download the latest version (v1804) as a ROM file. However, I am having difficulty figuring out how to install it. I've tried using the ASUS EZ Flash feature built into my BIOS, but when trying to load any of several ROMs meant for my motherboard/BIOS I get the error:

      Boot block in file is not valid!

    I'm not totally sure what I should do to fix this, so I'm looking into other methods of upgrading my BIOS; however, I can't really find any solutions that seem to work. ASUS Update is 32-bit only, and AFUDOS doesn't appear to work on my Windows 7 64-bit system (I think it's supposed to run in DOS or something, but that just sounds confusing since I know nothing about DOS). Could anybody help me with this?

    Read the article

  • Limited user requires admin rights for plug and play printer?

    - by Kalamane
    I have a small fleet of laptops that aren't part of a domain running Windows XP Pro SP3 as limited users. They are used for printing different shipping documents. I have a script that runs when they start up that uses devcon and prntmngr to detect and install/configure the currently connected usb printers. This lets us deploy the laptops to any printing station with a USB printer and have the printer 'just work' for the end user employee. I've taken the original clone image and have added functionality to it. Since then I've discovered a bit of an issue with using HP LaserJet P1606dn printers. They have started asking for admin rights on setup. This is with and without the script running. Previously they would automatically install because I had installed WHQL plug and play drivers for them. I thought it might have to do with the HP Smart Install Utility but it happens when that is disabled. I don't have a good point to roll back to before this started happening because this was an issue on the image I took initially to start this upgrade. What could be causing this?

    Read the article

  • Is Ubuntu a bad distro for a standalone mysql database server?

    - by DhruvPathak
    I read an article here: http://www.mysqlperformanceblog.com/2011/12/08/which-linux-distribution-for-mysql-server/ It says:

      "On the other end there are Debian and Ubuntu. Both use a tool called dpkg for package management. There isn't a month that I log in to a system based on either distribution where there are no issues with package consistency. Unfinished installations and unresolved conflicts are so common that it's just beyond simple negligence. The packaging system is just not robust enough. Another problem is that one broken package may block you from installing or uninstalling anything else. Imagine that someone left the system in such a shape, you prepared for downtime, stopped MySQL and... error - the text editor has not been properly installed, so you cannot upgrade MySQL either until the problem is fixed. In a stressful situation when the downtime clock ticks - annoying at best."

    We prefer Ubuntu Server because of familiarity, and Ubuntu is also our development environment. Questions:

    - Is Ubuntu commonly used in production for a MySQL database server?
    - Is it ever worth the trouble to have one distro, e.g. Ubuntu, on the web server and another, say Red Hat, on the database server? Or is a homogeneous server pool a better choice?
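
    For what it's worth, the "one broken package blocks everything" state the quoted article describes is usually recoverable with two standard commands (a sketch, not a guarantee for every kind of breakage):

      # finish any half-configured packages
      dpkg --configure -a
      # then let apt repair broken or missing dependencies
      apt-get -f install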

    Read the article

  • Safe to remove Python2.6 files?

    - by darkfeline
    I'm using Linux Mint 11 (will upgrade soon), and I've noticed that even though I don't have any python2.6 packages installed with apt, there's a bunch of residual python2.6 files scattered around my drive, including, but not limited to, dist-packages in /usr/lib/python2.6 and various /usr/share stuff. Is there any way to test whether these files are still being used? I'm tempted to sudo rm -rf the lot of them, but I'm scared it'll break stuff. Also, does anyone have any idea where these files could have come from? I believe I had python2.6 installed once upon a time, but I made sure to --purge it, so there shouldn't be any trace left, right?

    EDIT: after using a quick script to check all of the files, it appears most of them belong to important packages, so I won't try weeding out the few which I know are probably useless. Although I am curious why so many packages have python2.6 files when I don't even have it installed. These files are not associated with any package and I'm not sure whether they are safe to remove:

      /usr/bin/ipython2.6
      /usr/lib/python2.6/dist-packages/distribute-0.6.15.egg-info
      /usr/lib/python2.6/dist-packages/easy_install.py
      /usr/lib/python2.6/dist-packages/IPython
      /usr/lib/python2.6/dist-packages/ipython-0.10.1.egg-info
      /usr/lib/python2.6/dist-packages/setuptools
      /usr/lib/python2.6/dist-packages/setuptools.egg-info
      /usr/lib/python2.6/dist-packages/setuptools.pth
      /usr/lib/python2.6/dist-packages/site.py
      /usr/lib/python2.6/dist-packages/wx.pth
      /usr/local/lib/python2.6
      /usr/local/lib/python2.6/dist-packages
      /usr/local/lib/python2.6/site-packages
      /usr/share/man/man1/ipython2.6.1.gz
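
    The "quick script" mentioned in the edit was presumably something along these lines (a sketch; dpkg -S reports which installed package, if any, owns a given file):

      # flag files from the suspect locations that no installed package claims
      for f in /usr/bin/ipython2.6 /usr/lib/python2.6/dist-packages/*; do
          dpkg -S "$f" >/dev/null 2>&1 || echo "orphan: $f"
      done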

    Read the article

  • Clone a Windows Installation to a 3TB Hard Drive; MBR to GPT

    - by DanBlakemore
    I have Windows 7 Professional 64-bit installed on my desktop. Unfortunately for me and my wallet, my hard drive is failing. I have purchased a 3TB hard drive as a replacement for my current 2TB drive. I would like to avoid as much hassle as possible in moving to this new drive, so I would like to copy my current partition to the new drive using GParted. The problem is that I suspect my current partition table is MBR, and I need GPT on my new drive since it is 3TB. Can I simply copy the MBR partition onto the new disk and then convert the disk to GPT after the fact (can you even convert the partition table type)? Or would I need to somehow copy the contents of the partition into a GPT partition on the new drive? How do I go about making this transition? Also, are there any issues I should be wary of when booting from a GPT partition? If it matters, my motherboard is 1 year old as of May 2012.

    Edit: my motherboard is now 1 day old. My old one did not have UEFI support, so I decided to upgrade to an Intel board today, given that I would need a UEFI motherboard to use my new HDD. How much can I use a dying hard drive (bad sectors according to the Hitachi Drive Fitness Test)? I have assumed not at all, to be safe.
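
    On the "can you even convert" sub-question: in-place MBR-to-GPT conversion is exactly what the gdisk tool does when pointed at an MBR disk. This is offered as a sketch to research rather than tested advice; take a full backup first, and note that /dev/sdX is a placeholder:

      # from a Linux live environment; gdisk reads the MBR table, converts it
      # to GPT in memory, and commits nothing until you write
      gdisk /dev/sdX
      # at the prompt: p to review the converted table, w to write it out

    Also worth noting: Windows 7 x64 only boots from a GPT disk via UEFI, which fits the UEFI motherboard mentioned in the edit.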

    Read the article

  • Centralized backup method recommendation for SMEs with various OSes

    - by Akinator
    Hi, I was wondering what in your opinion is the "best" method for having "everything" backed up in the following situation. We are an SME with 10 computers in total: three of them are Macs, and the rest are Windows (1 Vista, 4 Win7 and 2 XP). I'm very open to what the method should be, but you should also consider the following: very limited resources, and quite "small" bandwidth (4MB for all of us on the download side and 0.4MB upload; yep, that's it, though this might get a little better). One of the main things to back up would be the mail. Considerations: all the Windows computers use Outlook, mainly 2003, and one Mac uses Outlook too (for Mac, of course; not 2011 yet). We also have to back up the files: not a huge amount, very few very big files, and very organized (by machine). What I would like is to hear your opinions as to which would be the best method (or combination of methods; preferably one, of course). We are not sure what we need, and I'm open to suggestions, though an online (cloud-based) application would be great; remember that the bandwidth is unbearable. The last thing to consider is that we would like to do weekly backups (unless the method is very easy, of course). Thanks in advance!! I tried to be as specific as possible, but if anything is needed I'll gladly update; please ask for any clarification needed! Please avoid answers like "upgrade all to Windows 7 and throw away your Macs" :) ours may not be an ideal situation, but it is what it is, and right now it would be impossible for us to change it, for a lot of reasons.

    Read the article

  • How to install (old) packages for Ubuntu 9.04?

    - by wchrisjohnson
    Based on some excellent feedback by Mark here (http://serverfault.com/questions/285598/should-i-clone-a-physical-server-to-create-a-vm-for-a-staging-server), today I was able to use VMware Converter to clone my production server for a staging server. However, the NIC won't come up no matter what I do. I attempted to install VMware Tools, as I suspect that the fact that it is not installed might be preventing the NIC from working (I have the NIC set as a vmxnet3 card in the VM settings). The install failed because several dependencies were missing, as well as the Linux headers. Given that Ubuntu 9.04 has been EOL'd, the packages I need to get VMware Tools to install are no longer available, and I doubt the Ubuntu 9.04 install CD has them on it. What are my options? I'd rather not upgrade the version of Ubuntu yet, as the point of the VM right now is to maintain parity with the production server. Might I have better luck resetting the driver to use vmxnet2 instead of vmxnet3? Thanks in advance! Chris
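
    One avenue worth noting: when an Ubuntu release reaches EOL, its packages move to the old-releases archive, so pointing apt there may make the missing dependencies installable again. A sketch of /etc/apt/sources.list for 9.04 (jaunty); verify the URLs still resolve before relying on them:

      deb http://old-releases.ubuntu.com/ubuntu/ jaunty main restricted universe multiverse
      deb http://old-releases.ubuntu.com/ubuntu/ jaunty-updates main restricted universe multiverse
      deb http://old-releases.ubuntu.com/ubuntu/ jaunty-security main restricted universe multiverse

    followed by an apt-get update.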

    Read the article

  • iptables logging not working?

    - by vps_newcomer
    OS: Ubuntu 10.04. Logging daemon: rsyslog. For some reason I'm not getting any iptables logs; even though I don't look through them very often, I'd still like to get this working for the sake of it working. Here is my /etc/rsyslog.d/iptables.conf:

      :msg, contains, "[IPTABLES]" -/var/log/iptables.log
      & ~

    My iptables logging prefix is "[IPTABLES]" followed by whatever else (for example, "[IPTABLES] Denied xyz"). The /var/log/iptables.log file is being created, but it's not getting any entries. I can see the logging entries in dmesg but not in syslog or messages. What's going on?

    EDIT: My iptables logging rules:

      # logging limit
      LoggingLimit=5/min
      LoggingPrefix=IPTABLES
      # Logging chain
      iptables -N LOG_REJECT
      iptables -A LOG_REJECT -j LOG
      # join INPUT to LOG_REJECT
      iptables -A INPUT -j LOG_REJECT
      # logging
      iptables -A LOG_REJECT -p tcp -m limit --limit $LoggingLimit -j LOG --log-prefix "$LoggingPrefix Denied TCP: " #--log-level 7
      iptables -A LOG_REJECT -p udp -m limit --limit $LoggingLimit -j LOG --log-prefix "$LoggingPrefix Denied UDP: " #--log-level 7
      iptables -A LOG_REJECT -p icmp -m limit --limit $LoggingLimit -j LOG --log-prefix "$LoggingPrefix Denied ICMP: " #--log-level 7

    Update: I found a thread that has the same symptoms as I do; apparently it's a kernel bug. I am using a VPS, so could anyone point me to how to upgrade my kernel or apply a workaround? I couldn't find a 2.6.34 kernel listed in apt-cache. Thread: http://www.linode.com/forums/viewtopic.php?t=5533
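
    Besides the kernel bug, one thing worth confirming on a setup like this: rsyslog only sees iptables LOG output (which arrives as kernel messages) if its kernel-log input module is loaded. Ubuntu 10.04 normally ships with this enabled, but it costs nothing to check /etc/rsyslog.conf for the legacy directive:

      # kernel messages only reach rsyslog filters if imklog is loaded
      $ModLoad imklog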

    Read the article

  • nVidia performance with newer X and newer driver abysmal with Compiz

    - by Nakedible
    I recently upgraded Debian to Xorg 2.9.4 and installed nvidia-glx from experimental, version 260.19.21. This was somewhat of an uphill battle, as the dependencies for the experimental nvidia-glx package are still somewhat broken; I got it to work without forcing the installation of any packages and without modifying them. However, after the upgrade, compiz performance has been abysmal. I am using the desktop wall plugin, and switching viewports is really slow: each switch takes a few seconds. In addition, every effect that compiz does, such as zoom animations for icons when launching applications, takes seconds. The viewport switching speed changes relative to the number of windows on that virtual screen: empty screens switch at almost normal speed, single browser windows work almost decently, but just 4 rxvt terminals slow the switches down to a crawl.

    My compiz configuration should be pretty basic, and Xorg is likewise configured without anything special; the only "custom" configuration is forcing the driver name to be "nvidia". I've fiddled around with nvidia-settings and compizconfig, trying different VSync settings, but none of that helped. My graphics card is: NVIDIA GPU NVS 3100M (GT218) at PCI:1:0:0 (GPU-0). This is a laptop GPU from the GeForce GTX 200 series; graphics card performance should naturally be no problem.

    EDIT: In the end nothing really worked, and I got really annoyed with the state of compiz and its support in Debian. Many nVidia driver revisions have passed and I am using Gnome 3 now, so I am accepting the best answer to this question even though the issue was not resolved.

    Read the article

  • Batch-Remove Specific Text from Photo Descriptions

    - by David
    I've recently upgraded to iPhoto '11 (couldn't resist the pricing on the new app store) and as I'm adding more meta-data to my library and generally organizing things (places, faces, etc... I hadn't upgraded since '08) I've noticed something odd in my photos. Every photo in my library has a description (though many are short), but it would appear that somehow the description of one of the photos has been appended to many. I don't know if maybe I accidentally screwed up a batch change at some point in the past, or if the library upgrade somehow messed up, or what else may have happened. But what I need to do is fix these somehow. Now, manually editing is something of a daunting task. Within a library of 21,248 photos, 18,858 of them have this additional text. The one thing I do have going for me is that it's a specific string. If there's a way to "remove this string from everywhere in the library without removing the rest of any given description" then that would be perfect. Is there anything I can do like this? Maybe even manually editing a library file in a text editor? (Would that break anything else in iPhoto if its library was edited outside of the application, even while it's not running?) Does anybody have any ideas?

    Read the article

  • sudo or acl or setuid/setgid?

    - by Xavier Maillard
    For a reason I do not really understand, everyone wants sudo for everyone and everything. At work we even have as many sudoers entries as there are ways to read a logfile (head/tail/cat/more, ...). I think sudo is self-defeating here. I'd rather use a mix of setgid/setuid directories and add ACLs here and there, but I really need to know the best practices before starting. Our servers have %admin, %production, %dba, %users - i.e. many groups and many users. Each service (mysql, apache, ...) has its own way of installing privileges, but members of the %production group must be able to consult configuration files and even log files. There is still the option of adding them to the right groups (mysql, ...) and setting the right permissions, but I do not want to usermod all the users, and I do not want to modify the standard permissions, since they could change after each upgrade. On the other hand, setting ACLs and/or mixing setuid/setgid on directories is something I could easily do without "defacing" the standard distribution. What do you think about this? Taking the mysql example, that would look like this:

      setfacl -m d:g:production:rx,d:other::---,g:production:rx,other::--- /var/log/mysql /etc/mysql

    Do you think this is good practice, or should I definitely usermod -G mysql and play with the standard permission system? Thank you.
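
    As a quick sanity check after applying ACLs like the above (assuming the acl utilities are installed and the filesystem is mounted with ACL support), getfacl shows what actually took effect:

      # inspect the effective ACLs, including the default entries for new files
      getfacl /var/log/mysql /etc/mysql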

    Read the article

  • Recovery disk Windows 8 HP Pavilion g6

    - by fpghost
    I recently purchased an HP Pavilion g6 laptop running Windows 8. I want to either obtain the Windows 8 ISO or make some kind of recovery disk that would allow me to restore the system if things go wrong. The HP Pavilion comes with 'HP Recovery Manager', which I thought might do the job, but on running it and putting in a DVD-R as requested, it seems to just hang for a number of hours without doing a thing (the disc sounds like it's spinning for a few minutes, but then goes silent). I then tried recdisc.exe, but I get the error:

      System Repair could not be created
      The device reported unexpected or invalid data for a command. (0xC0AA02FF)

    Next I obtained my Windows 8 product key using the software ProduKey, thinking this would allow me to go to the Microsoft website and download the Windows 8 ISO, but as far as I can tell, all that is available is the upgrade, which can be used if one is running something like Windows 7. Can anyone advise?

    EDIT: after a reboot, recdisc.exe did work; I think the problem was due to some Windows updates needing a reboot, but nevertheless I would like a full Windows 8 ISO if possible.

    Read the article

  • Installing a new Motherboard in a HP xw6200 case

    - by thing2k
    I have an HP xw6200 workstation that is rather long in the tooth, and with 2 physical CPUs it is quite inefficient. So the plan was to upgrade the internals. Nothing special:

    - AMD Athlon II X4 640
    - ASUS M4A78KT-M LE (mATX)
    - 2x 2GB DDR3 1333MHz RAM

    3p under my £150 budget. The issues:

    - The pin connector for the front panel isn't a good fit, but I can trim it to size.
    - The PSU has an 8-pin power connector; unsurprisingly, the new board has a 4-pin socket. The pins do line up, but I would have to cut the connector in half to make it fit.
    - Finally, due to the weight of the heatsinks, they are screwed directly into the case, and it turns out that these screws also lock the motherboard in place: to remove it, you remove the heatsinks, slide the motherboard across, and lift it out. I tested the new board for fit, and while it slots in fine, it's not secure; there is nowhere to screw the board down, and it is just held in place by the plastic standoffs.

    The only idea I had was to wedge something between the side of the motherboard and part of the case. Any suggestions?

    Read the article

  • PHP potential issues with compiling 5.3.8 extensions against RHEL 6 / CentOS 6 PHP 5.3.3 package

    - by user101203
    I'm working on getting a Red Hat 6 LAMP server going, and while the PHP that comes with it has many of the extensions we use, it doesn't have all of them. To solve this, I was considering one of the following:

    1. Compiling the PHP extensions which come in the ext folder of the downloadable source code of PHP 5.3.3 from php.net.
    2. The same as #1, but using the extensions from the latest PHP version (currently 5.3.8).
    3. Doing #1, but manually deciding which updates to backport from the latest version of the PHP extensions into the older version, and then compiling the backported result.

    A drawback to #1 is that security and bug fixes come out which we wouldn't be able to take advantage of. A drawback to #3 is that it might be a lot of work. Does anyone know what the drawbacks to #2 are? I don't want to go down that route if it might result in some unexpected negative outcomes. Also, are there any other drawbacks to these options, or a better way to go altogether? I want to use the PHP 5.3.3 which comes with the Linux distro because I don't want us to end up again where we are forced to upgrade to a new version of PHP to stay on top of security updates, like from PHP 5.2.x to 5.3.x, complete with backwards-incompatible changes (this is the situation we're in now, with PHP 5.2.x no longer being supported).
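
    For reference, building a single extension out of a PHP source tree's ext folder follows a standard recipe. A sketch, assuming php-devel is installed so that phpize and php-config are on the path; mbstring is just an example name:

      cd php-5.3.3/ext/mbstring   # any extension directory works the same way
      phpize                      # must be the phpize matching the target PHP
      ./configure && make
      make install                # installs the .so into the extension directory
      echo "extension=mbstring.so" > /etc/php.d/mbstring.ini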

    Read the article

  • Euro character messed up during FTP transfer

    - by djechelon
    My customer is using a very outdated ecommerce management system on my hosting service. For that product, no support is provided anymore by the vendor. Brief explanation: the shop website, which claims to run on a LAMP stack, is built by an old Visual Basic Windows application (eDisplay) running on MS Access. The user constructs the shop, defines the HTML template, adds products and categories, etc.; then the VB exe builds the PHP pages (one for each template page) and the SQL script to run on MySQL. It also uploads everything via FTP and runs the installation/upgrade script on its own.

    The problem: browsing the website, many product descriptions are cut off before the euro sign. For example, what was supposed to be "Product price €1000" becomes "Product price".

    The analysis:

    - MySQL contains the description cut off before the € sign, so it's not PHP's fault.
    - The Access databases contain the full descriptions with the € sign, so it's not the fault of the webmaster writing bad descriptions, nor of eDisplay cutting them.
    - The SQL script that will run once the site gets uploaded, as stored on my local machine before upload, contains the € sign.
    - The same script, after being FTPed by eDisplay and opened with nano over SSH, shows the € sign messed up like this: ^À
    - The vsftpd log reports (obfuscated for privacy):

      Sat Dec 15 11:16:57 2012 22 xxx.xxx.128.13 1112727 /srv/www/domains/xxxxxx.it/htdocs/db.sql b _ i r xxxxxxx ftp 0 * c

      which seems to be a binary transfer (and also a huge security vulnerability, because the whole database can be downloaded over unauthenticated HTTP).
    - The eDisplay internal FTP client provides no option for ASCII/binary transfer modes.
    - [Add] Manually uploading the SQL file via SFTP shows the same messed-up euro.
    - [Add2] Manually uploading with the Xftp client in explicit ASCII mode doesn't fix it either.

    It looks like the file gets uploaded as binary. Perhaps on the customer's previous host it all worked fine because that was a Windows host.

    The server is an Azure virtual machine running openSUSE 12.2 with both vsftpd and openSSH.

    The question: without asking the customer to manually upload files using FileZilla, or to replace € with &euro; (he refuses), what can I do on the server side to stop vsftpd from screwing up the euro sign?
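
    One check that may narrow this down, offered as a sketch with standard tools. The hypothesis (an assumption, not a confirmed diagnosis) is that the VB app writes the dump in Windows-1252, where € is the single byte 0x80; a MySQL connection expecting UTF-8 would then truncate the string at that invalid byte, which matches descriptions being cut exactly at the euro:

      # report the apparent encoding of the dump as it sits on the server
      file db.sql
      # produce a UTF-8 copy for comparison, assuming a Windows-1252 source
      iconv -f WINDOWS-1252 -t UTF-8 db.sql > db.utf8.sql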

    Read the article

  • Fixing Windows install by connecting its hard drive via USB to a different laptop

    - by Jason
    I tried to upgrade a laptop to SP3, which broke it. I later found out SP3 doesn't work on that 2002 laptop. I can't uninstall SP3, or fix SP2, because the hard drive is now not detected during setup (I've read that's the problem you get). I put the hard drive in a USB drive case and plugged it into my other laptop, and I can read (& write to) the disk okay. (The hard drive won't fit in my other laptop, so I'm using USB.) I need to get that disk back to SP2, or fix whatever files got screwed up causing the disk to not be recognized. I don't want to do a re-install as there are 80GB of files on it I need, and they won't fit on the HD of my other laptop, and also because I no longer have some of the install CDs for software on it. What do I need to do to fix that drive from my other laptop? (I don't want my working laptop (XP SP3) to get screwed with by putting an SP2 disk in the CD drive, or the non-o/s data on the other hard drive screwed with.)

    Read the article

  • Upgrading to Java 7u65 breaks my Deployment Rule Set for Oracle applications

    - by Don Atreides
    My company uses an older version of an Oracle application that requires Java 6u45. Naturally we want to be secure, so we use a Deployment Rule Set to specify 6u45 for that internal application and let other applications use 7u60. Now that we're ready to upgrade the Java 7 half to 7u67, the Oracle application breaks with "Deployment Rule Set required version 1.6.0_45 not available." Of course it is available, it just can't find it for some reason. As a test, I specified that JavaTester.org should use 6u45 also and it works fine with no issues. But when I try to use the same configuration (7u67 and 6u45) against the Oracle application it fails every time. If I downgrade to 7u60, it works. 7u65 or higher, it breaks. The Oracle application hasn't changed so it must be something different in how 7u65+ is handling Deployment Rule Sets or pathing or something. I'm at a complete loss. ruleset.xml:

      <?xml version="1.0"?>
      <ruleset version="1.0+">
        <rule>
          <id location="*.mycorp.com"/>
          <action version="1.6.0_45" permission="run"/>
        </rule>
        <rule>
          <id location="http://javatester.org"/>
          <action version="1.6.0_45" permission="run"/>
        </rule>
      </ruleset>

    Read the article

  • Viability of Mac OS X 10.9 Time Machine Server in office environment

    - by user197609
    Currently we have about 20 Mac OS X 10.9 MacBook Pros (almost all with SSDs) backing up to individual USB drives. I'd like to consolidate these to one Drobo Thunderbolt drive array attached to a Mac mini server (running 10.9 Server) using Time Machine Server. My question is: will this scale to 20 users? Examples I have seen seem to be 5 or 6 users at most, and this isn't easy for me to test (I'd rather not ask everyone to back up to the array and then switch back to USB drives if it brings our network to its knees). My primary concern is saturating our gigabit network, as Time Machine backs up every hour for every machine, so there would usually be a couple of people backing up at any given time. We also have some people occasionally on our 802.11ac network rather than on Ethernet (usually connected via 802.11n until they upgrade to newer machines), but most of the time people are connected to our Thunderbolt displays, which have a gigabit Ethernet connection on them. Our network topology is one 32-port gigabit switch with 5 smaller gigabit switches at each desk cluster; the Mac mini server is connected directly to the top-level switch.

    Update: failing information from someone who has done this in practice, I suppose my question is really about how switches work: if three or four people are backing up simultaneously, and two other (different) users transfer a file between each other, will they be able to transfer the file at gigabit speeds?

    Read the article
