Search Results

Search found 7787 results on 312 pages for 'apt upgrade'.


  • How do I best handle a needed patch for Perl/Tk?

    - by Streamline
    I am making a change to Perl/Tk for an application that has its own resident Perl and modules installation (so we can drop the app in and go). I've found a problem I am experiencing, and I just stumbled on what seems to be the patch I need here: http://osdir.com/ml/lang.perl.tk/2004-10/msg00030.html

        Bug confirmed. Here's the patch against Tk804.027:

        --- Tk-804.027/pTk/mTk/generic/tkEntry.c        Sat Mar 20 19:54:48 2004
        +++ Tk-804.027-perl5.8.3d/pTk/mTk/generic/tkEntry.c     Tue Oct 19 22:50:31 2004
        @@ -3478,6 +3478,18 @@
                 Tcl_DStringFree(&script);
         #else
        +    switch (type) {
        +    case VALIDATE_INSERT:
        +        type = 1;
        +        break;
        +    case VALIDATE_DELETE:
        +        type = 0;
        +        break;
        +    default:
        +        type = -1;
        +        break;
        +    }
        +
             code = LangDoCallback(entryPtr->interp, entryPtr->validateCmd, 1, 5,
                                   "%s %s %s %d %d", new, change, entryPtr->string, index, type);
             if (code != TCL_OK && code != TCL_RETURN) {

        Regards, Slaven

    I'd like to apply this patch, or, if there is a newer version of the Perl/Tk module that already includes it and doesn't require changing the version of Perl, upgrade to that instead. Here is what I can find from the installation for this app:

        perl -v          = 5.8.4
        $Tk::version     = '8.4'
        $Tk::patchlevel  = '8.4'
        $Tk::VERSION     = '804.027'

    So..
    1a) if there is a newer Tk VERSION that includes the patch in the link above, how do I upgrade just that module in the specific Perl installation location for this app?
    1b) how do I know if that upgrade is compatible with 5.8.4 of Perl (I don't want to upgrade perl at this point)?
    2) if not, how do I apply the patch defined in that link?
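
    For question 2), a rough sketch of applying the patch against the application's bundled Tk sources and rebuilding the module with the app's own resident perl (every path below is an assumption for illustration):

        cd /path/to/app/perl-src/Tk-804.027          # wherever the bundled Tk sources live (assumed)
        patch -p1 < tkEntry-validate.patch           # the diff from the mailing-list post, saved to a file
        /path/to/app/perl/bin/perl Makefile.PL       # use the app's resident perl so it builds against 5.8.4
        make && make test
        make install                                 # installs into the resident Perl tree, not the system one

    Building and running make test with the application's own perl binary also goes a long way toward answering 1b): if a newer Tk passes its test suite under that perl, that is a reasonable sign it is compatible with 5.8.4.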

    Read the article

  • Documentation for java.util.concurrent.locks.ReentrantReadWriteLock

    - by Andrei Taptunov
    Disclaimer: I'm not very good at Java and am just comparing reader/writer locks between C# and Java to understand this topic better and the decisions behind both implementations. The JavaDoc for ReentrantReadWriteLock states the following about upgrading/downgrading locks: "Lock downgrading ... However, upgrading from a read lock to the write lock is not possible." It also has the following example that shows a manual upgrade from a read lock to a write lock:

        // Here is a code sketch showing how to exploit reentrancy
        // to perform lock downgrading after updating a cache
        void processCachedData() {
            rwl.readLock().lock();
            if (!cacheValid) {
                // upgrade lock manually
                // #1: must unlock first to obtain writelock
                rwl.readLock().unlock();
                // #2:
                rwl.writeLock().lock();
                if (!cacheValid) { // recheck
                    ...
                }
                ...
            }
            use(data);
            rwl.readLock().unlock();
        }

    Does this mean that the sample above may not behave correctly in some cases? There is no lock held between lines #1 and #2, so the underlying structure is exposed to changes from other threads. So it cannot be considered a correct way to upgrade the lock - or am I missing something here?

    Read the article

  • How to detect in the Windows Registry if a user has .NET Framework 4.5 installed?

    - by Sarah Weinberger
    How do I detect in the Windows Registry whether a user has the .NET Framework 4.5 installed? I am not looking for a .NET-based solution, as the query is from InnoSetup. I know from reading another post here on Stack Overflow that .NET Framework 4.5 is an in-place upgrade to 4.0. I already know how to check whether a user has version 4.0 installed on the system, namely by checking the following:

        function FindFramework(): Boolean;
        var
          bVer4x0: Boolean;
          bVer4x0Client: Boolean;
          bVer4x0Full: Boolean;
          bSuccess: Boolean;
          iInstalled: Cardinal;
        begin
          Result := False;
          bVer4x0Client := False;
          bVer4x0Full := False;
          bVer4x0 := RegKeyExists(HKLM, 'SOFTWARE\Microsoft\.NETFramework\policy\v4.0');
          bSuccess := RegQueryDWordValue(HKLM, 'Software\Microsoft\NET Framework Setup\NDP\v4\Client', 'Install', iInstalled);
          if (1 = iInstalled) AND (True = bSuccess) then bVer4x0Client := True;
          bSuccess := RegQueryDWordValue(HKLM, 'Software\Microsoft\NET Framework Setup\NDP\v4\Full', 'Install', iInstalled);
          if (1 = iInstalled) AND (True = bSuccess) then bVer4x0Full := True;
          if (True = bVer4x0Full) then
          begin
            Result := True;
          end;
        end;

    I checked the registry and there is no v4.5 folder, which makes sense if .NET Framework 4.5 is an in-place upgrade. Still, Control Panel's Programs and Features includes the listing. I know that issuing dotNetFx45_Full_setup.exe /q will probably have no bad effect when installing on a system that already has version 4.5, but I would still prefer not to install the upgrade if it already exists - faster and fewer problems.

    Read the article

  • Now Available – Windows Azure SDK 1.6

    - by Shaun
    Microsoft has just announced the Windows Azure SDK 1.6 and the Windows Azure Tools for Visual Studio 1.6, and you can now download the latest product through the Web Platform Installer (WebPI). After you download and install the SDK you will find that SDK 1.6 can sit side by side with SDK 1.5, which means you can still use the 1.5 assemblies; the Visual Studio Tools, however, are upgraded to 1.6. Unlike the previous SDK, this version includes four components: Windows Azure Authoring Tools, Windows Azure Emulators, Windows Azure Libraries for .NET 1.6, and the Windows Azure Tools for Microsoft Visual Studio 2010. There are some significant upgrades in this version:

        Publishing enhancement: connect to Windows Azure more easily when publishing your application by retrieving a publish settings file, which lets you configure some deployment settings without going back to the developer portal.
        Multiple profiles: the publish settings, cloud configuration files, etc. are stored in one or more MSBuild files, making it much easier to switch settings between different build environments.
        MSBuild command-line build support.
        In-place upgrade support.

    Publishing Enhancement

    So let's have a look at the new publishing features. Create a new Windows Azure project in Visual Studio 2010 with an MVC 3 web role, right-click the Windows Azure project node in Solution Explorer, and select Publish; we see the new publish dialog. In this version the first thing we need to do is connect to our Windows Azure subscription. Click the "Sign in to download credentials" link and we are navigated to the login page to provide our Live ID. The Windows Azure tool generates a certificate file, uploads it to the subscriptions that belong to us as the management certificate, and we then download a PUBLISHSETTINGS file, which contains the credentials and subscription information. The VS tool uses this certificate to connect to the subscription in the next step.

    In the next step, go back to Visual Studio (the publish dialog should still be open), click the Import button, and select the PUBLISHSETTINGS file just downloaded. All of our subscriptions are then shown in the dropdown list. Select the subscription the application should be published to and press the Next button; we can then select the hosted service, environment, build configuration, and service configuration in the dialog. In this version we can create a new hosted service directly here rather than going back to the developer portal: just select the <Create New …> item in the hosted service list, provide the hosted service name and the location, and click OK; after several seconds the hosted service is established, and if we go to the developer portal we will find the new hosted service in the subscription. Two caveats: a) currently we cannot select an affinity group when creating a new hosted service through the Visual Studio publish dialog; and b) although we can specify the hosted service name and the DNS prefix separately through the developer portal, we cannot do so from the VS tool, which means the DNS prefix will be the same as the hosted service name. For example, if we specify our hosted service name as "Sdk16Demo", the public URL will be http://sdk16demo.cloudapp.net/.
    After creating a new hosted service we can select the cloud environment (production or staging), the build configuration (release or debug), and the service configuration (cloud or local), and we can enable Remote Desktop by checking the related checkbox. One thing to note is that in this version we don't need to specify a certificate for the Remote Desktop settings by default, because Visual Studio generates a new certificate for us; we can still specify an existing certificate for RDC by clicking the "More Options" button. The Visual Studio tool creates a separate certificate for the Remote Desktop connection - it does NOT reuse the certificate that manages the subscription. We can also open the "Advanced Settings" page to specify the deployment label, storage account, IntelliTrace and .NET profiling information, and so on. Press Next and the dialog displays all the settings just specified and saves them as a new profile. The last step is to click the Publish button. Since we enabled the Remote Desktop feature, the first step of publishing is uploading the certificate; the tool then verifies the storage account we specified, uploads the package, and finally creates the website in Windows Azure.

    Multi-Profiles

    After publishing, back in Visual Studio we can find an AZUREPUBXML file under the Profiles folder in the Azure project. It includes all the settings we specified before. If we publish this project again, we can reuse the current settings (hosted service, environment, RDC, etc.) from this profile without entering them again, which is very useful when we have more than one set of deployment settings. For example, we could have one AZUREPUBXML profile for deploying to a testing environment (debug build, fewer roles, with RDC and IntelliTrace) and one for production (release build, more roles, but without IntelliTrace).

    In-Place Upgrade Support

    Let's change some code in the MVC pages and click the Publish menu on the Azure project node. There is no need to specify any settings; we can reuse the previous ones by loading the Azure profile file (AZUREPUBXML). After clicking the Publish button, the VS tool shows a dialog indicating that a deployment already exists in the hosted service environment and asks whether to REPLACE it. Notice that in this version the dialog says "replace" rather than "delete", which means that by default the VS tool performs an in-place upgrade when we deploy to a hosted service that already has a deployment. After clicking Yes, the VS tool uploads the package and performs the in-place upgrade; back in the developer portal we can see the status of the hosted service turn to "Updating…". In the previous SDK, by contrast, it would delete the whole deployment and publish a new one.

    Summary

    When Microsoft announced the feature that allows changing the VM size via in-place upgrade, they also mentioned that over the next few versions the user experience of publishing Azure applications would be improved, the goal being to accomplish the whole publish experience in Visual Studio without needing to touch the developer portal at all. With the new publish dialog in SDK 1.6, a developer can carry out the whole process - creating the hosted service, specifying the environment, configuration, remote desktop, and so on - without going back to the developer portal.
    Hope this helps, Shaun. All documents, related graphics, and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • PeopleSoft Upgrades, Fusion, & BI for Leading European PeopleSoft Applications Customers

    - by Mark Rosenberg
    With so many industry-leading services firms around the globe managing their businesses with PeopleSoft, it’s always an adventure setting up times and meetings for us to keep in touch with them, especially those outside of North America who often do not get to join us at Oracle OpenWorld. Fortunately, during the first two weeks of May, Nigel Woodland (Oracle’s Service Industries Director for the EMEA region) and I successfully blocked off our calendars to visit seven different customers spanning four countries in Western Europe. We met executives and leaders at four Staffing industry firms, two Professional Services firms that engage in consulting and auditing, and a Financial Services firm. As we shared the latest information regarding product capabilities and plans, we also gained valuable insight into the hot technology topics facing these businesses. What we heard was both informative and inspiring, and I suspect other Oracle PeopleSoft applications customers can benefit from one or more of the following observations from our trip. Great IT Plans Get Executed When You Respect the Users Each of our visits followed roughly the same pattern. After introductions, Nigel outlined Oracle’s product and technology strategy, including a discussion of how we at Oracle invest in each layer of the “technology stack” to provide customers with unprecedented business management capabilities and choice. Then, I provided the specifics of the PeopleSoft product line’s investment strategy, detailing the dramatic number of rich usability and functionality enhancements added to release 9.1 since its general availability in 2009 and the game-changing capabilities slated for 9.2. What was most exciting about each of these discussions was that shortly after my talking about what customers can do with release 9.1 right now to drive up user productivity and satisfaction, I saw the wheels turning in the minds of our audiences. Business analyst and end user-configurable tools and technologies, such as WorkCenters and the Related Action Framework, that provide the ability to tailor a “central command center” to the exact needs of each recruiter, biller, and every other role in the organization were exactly what each of our customers had been looking for. Every one of our audiences agreed that these tools which demonstrate a respect for the user would finally help IT pole vault over the wall of resistance that users had often raised in the past. With these new user-focused capabilities, IT is positioned to definitively partner with the business, instead of drag the business along, to unlock the value of their investment in PeopleSoft. This topic of respecting the user emerged during our very first visit, which was at Vital Services Group at their Head Office “The Mill” in Manchester, England. (If you are a student of architecture and are ever in Manchester, you should stop in to see this amazingly renovated old mill building.) I had just finished explaining our PeopleSoft 9.2 roadmap, and Mike Code, PeopleSoft Systems Manager for this innovative staffing company, said, “Mark, the new features you’ve shown us in 9.1/9.2 are very relevant to our business. As we forge ahead with the 9.1 upgrade, the ability to configure a targeted user interface with WorkCenters, Related Actions, Pivot Grids, and Alerts will enable us to satisfy the business that this upgrade is for them and will deliver tangible benefits. 
In fact, you’ve highlighted that we need to start talking to the business to keep up the momentum to start reviewing the 9.2 upgrade after we get to 9.1, because as much as 9.1 and PeopleTools 8.52 offers, what you’ve shown us for 9.2 is what we’ve envisioned was ultimately possible with our investment in PeopleSoft applications.” We also received valuable feedback about our investment for the Staffing industry when we visited with Hans Wanders, CIO of Randstad (the second largest Staffing company in the world) in the Netherlands. After our visit, Hans noted, “It was very interesting to see how the PeopleSoft applications have developed. I was truly impressed by many of the new developments.” Hans and Mike, sincere thanks for the validation that our team’s hard work and dedication to “respecting the users” is worth the effort! Co-existence of PeopleSoft and Fusion Applications Just Makes Sense As a “product person,” one of the most rewarding things about visiting customers is that they actually want to talk to me. Sometimes, they want to discuss a product area that we need to enhance; other times, they are interested in learning how to extract more value from their applications; and still others, they want to tell me how they are using the applications to drive real value for the business. During this trip, I was very pleased to hear that several of our customers not only thought the co-existence of Fusion applications alongside PeopleSoft applications made sense in theory, but also that they were aggressively looking at how to deploy one or more Fusion applications alongside their PeopleSoft HCM and FSCM applications. The most common deployment plan in the works by three of the organizations is to upgrade to PeopleSoft 9.1 or 9.2, and then adopt one of the new Fusion HCM applications, such as Fusion Performance Management or the full suite of  Fusion Talent Management. For example, during an applications upgrade planning discussion with the staffing company Hays plc., Mark Thomas, who is Hays’ UK IT Director, commented, “We are very excited about where we can go with the latest versions of the PeopleSoft applications in conjunction with Fusion Talent Management.” Needless to say, this news was very encouraging, because it reiterated that our applications investment strategy makes good business sense for our customers. Next Generation Business Intelligence Is the Key to the Future The third, and perhaps most exciting, lesson I learned during this journey is that our audiences already know that the latest generation of Business Intelligence technologies will be the “secret sauce” for organizations to transform business in radical ways. While a number of the organizations we visited on the trip have deployed or are deploying Oracle Business Intelligence Enterprise Edition and the associated analytics applications to provide dashboards of easy-to-understand, user-configurable metrics that help optimize business performance according to current operating procedures, what’s most exciting to them is being able to use Business Intelligence to change the way an organization does business, grows revenue, and makes a profit. In particular, several executives we met asked whether we can help them minimize the need to have perfectly structured data and at the same time generate analytics that improve order fulfillment decision-making. 
To them, the path to future growth lies in having the ability to analyze unstructured data rapidly and intuitively and leveraging technology’s ability to detect patterns that a human cannot reasonably be expected to see. For illustrative purposes, here is a good example of a business problem where analyzing a combination of structured and unstructured data can produce better results. If you have a resource manager trying to decide which person would be the best fit for an assignment in terms of ensuring (a) client satisfaction, (b) the individual’s satisfaction with the work, (c) least travel distance, and (d) highest margin, you traditionally compare resource qualifications to assignment needs, calculate margins on past work with the client, and measure distances. To perform these comparisons, you are likely to need the organization to have profiles setup, people ranked against profiles, margin targets setup, margins measured, distances setup, distances measured, and more. As you can imagine, this requires organizations to plan and implement data setup, capture, and quality management initiatives to ensure that dependable information is available to support resourcing analysis and decisions. In the fast-paced, tight-budget world in which most organizations operate today, the effort and discipline required to maintain high-quality, structured data like those described in the above example are certainly not desirable and in some cases are not feasible. You can imagine how intrigued our audiences were when I informed them that we are ready to help them analyze volumes of unstructured data, detect trends, and produce recommendations. Our discussions delved into examples of how the firms could leverage Oracle’s Secure Enterprise Search and Endeca technologies to keyword search against, compare, and learn from unstructured resource and assignment data. We also considered examples of how they could employ Oracle Real-Time Decisions to generate statistically significant recommendations based on similar resourcing scenarios that have produced the desired satisfaction and profit margin results. --- Although I had almost no time for sight-seeing during this trip to Europe, I have to say that it may have been one of the most energizing and engaging trips of my career. Showing these dedicated customers how they can give every user a uniquely tailored set of tools and address business problems in ways that have to date been impossible made the journey across the Atlantic more than worth it. If any of these three topics intrigue you, I’d recommend you contact your Oracle applications representative to arrange for more detailed discussions with the appropriate members of our organization.

    Read the article

  • VirtualBox: installing ISO gives error "could not find a valid v7 on sda"

    - by MountainX
    I decided to try this suggestion to install Android x86 on Virtual Box. I'm following those steps and I'm at #4. I used the android-x86-4.0-RC1-amd_brazos.iso (because I have a ThinkPad that this might be compatible with and it seemed as good as any of the other choices...) I'm getting an endless series of errors: .[numbers] VFS: could not find a valid v7 on sda. .[numbers] VFS: could not find a valid v7 on sda. repeating... For storage, I made a VDI image sized at 4.0 GB (dynamically allocated) attached on SATA Port 0. Background and more details: I'm running Kubuntu 12.04. I installed VirtualBox 4.1.12 after adding deb http://download.virtualbox.org/virtualbox/debian precise contrib to my sources. EDIT: Now I'm wondering if I installed the right package. I verified that I'm running 4.1.12. But I installed it with apt-get install virtualbox instead of the recommended apt-get install virtualbox-4.1. I checked just now and see this: apt-cache search virtualbox virtualbox - x86 virtualization solution - base binaries virtualbox-4.1 - Oracle VM VirtualBox But when I run VirtualBox, I get the Oracle VM VirtualBox Manager version 4.1.12, so I think I'm OK. I did see one minor issue (possibly related to this question) when installing VB, but in my case I don't think it is actually an error at all: * No suitable module for running kernel found [fail] invoke-rc.d: initscript virtualbox, action "restart" failed. VirtualBox installed and seems to be running fine. I just can't get the ISO image to install. The error is as shown above.

    Read the article

  • Ubuntu boot hangs after message "Running /scripts/init-bottom ... done"

    - by Douglas B. Staple
    I've been trying to copy a Proxmox container based on the Ubuntu Precise Standard template to a VirtualBox VM. I am now stuck at a point where my new Ubuntu/VirtualBox VM hangs after the message "Running /scripts/init-bottom ... done" during boot. I started by installing Ubuntu Server 12.04.4 LTS on a VirtualBox VM; Ubuntu Server 12.04.4 LTS was the closest "official" Ubuntu ISO to the Proxmox container OS I could find. I installed all updates on both the Proxmox container and on the VirtualBox VM, the idea being to get the same kernel version running on the Proxmox container and the VirtualBox VM:

        sudo apt-get update ; sudo apt-get upgrade ; sudo apt-get dist-upgrade
        sudo reboot

    Then I rsync'd the entire Proxmox container to a temporary directory in the VirtualBox VM:

        cd /
        mkdir /tmp/backup
        rsync -e ssh -av --exclude={/dev,/proc,/sys,/tmp,/run,/mnt,/media,/lost+found,/boot,/selinux} root@my_proxmox_container_hostname:/ /tmp/backup

    I shut down the virtual machine and booted the VM with a bootable Linux image (the desktop image of Ubuntu 12.04 LTS, ubuntu-12.04.4-desktop-i386.iso), dropped to a root prompt, and mounted the VM root filesystem:

        sudo mount /dev/sda1 /mnt

    I removed most of the files from /mnt:

        cd /mnt
        sudo rm -rf bin etc home lib opt sbin root usr var

    Then I moved all of the files from /mnt/tmp/backup into /mnt:

        sudo mv /mnt/tmp/backup/* /mnt

    Finally I rebooted the system. For me, at this point the system freezes after starting, right after the message:

        Running /scripts/init-bottom ... done

    I've tried reinstalling GRUB and all manner of other things. I am almost ready to give up.
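
    One step missing from the sequence above that is commonly needed after cloning a root filesystem by hand is to regenerate the initramfs and GRUB configuration from inside the copied system, so they match the new /etc and the VM's disk; a sketch, run from the live CD, with the device names as assumptions:

        sudo mount /dev/sda1 /mnt
        for fs in dev dev/pts proc sys; do sudo mount --bind /$fs /mnt/$fs; done
        sudo chroot /mnt
        update-initramfs -u -k all     # rebuild the initramfs against the copied /etc
        grub-install /dev/sda          # reinstall the boot loader on the VM's disk
        update-grub                    # regenerate grub.cfg with the correct root device
        exit

    It is also worth checking that the copied /etc/fstab refers to the VM's real root device or UUID (blkid shows it) and that the container image actually ships upstart tty jobs, since OpenVZ-style templates often leave out pieces a real VM needs before it will show a login prompt; either gap can look like a hang right after init-bottom.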

    Read the article

  • Installing PHP4 (from lenny snapshots) on a Debian 7 32-bit box

    - by Asim
    I am trying to install PHP4 on a Debian 7 32bit box but I ran into the following root@php4:~# apt-get update Get:1 http://snapshot.debian.org lenny Release.gpg [189 B] Hit http://snapshot.debian.org lenny Release Ign http://snapshot.debian.org lenny Release Hit http://snapshot.debian.org lenny/main Sources/DiffIndex Hit http://snapshot.debian.org lenny/main i386 Packages/DiffIndex Hit http://ftp.us.debian.org wheezy Release.gpg Hit http://security.debian.org wheezy/updates Release.gpg Ign http://snapshot.debian.org lenny/main Translation-en_US Ign http://snapshot.debian.org lenny/main Translation-en Hit http://ftp.us.debian.org wheezy Release Hit http://security.debian.org wheezy/updates Release Hit http://ftp.us.debian.org wheezy/main i386 Packages Hit http://ftp.us.debian.org wheezy/main Translation-en Hit http://security.debian.org wheezy/updates/main i386 Packages Hit http://security.debian.org wheezy/updates/main Translation-en Fetched 189 B in 0s (229 B/s) Reading package lists... Done W: GPG error: http://snapshot.debian.org lenny Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY A70DAF536070D3A1 I did the following to fix it gpg --keyserver hkp://subkeys.pgp.net --recv-keys A70DAF536070D3A1 gpg --export --armor A70DAF536070D3A1 | sudo apt-key add - Now I get the following KEYEXPIRED error and unsure how to fix. Even Google does not help root@php4:~# apt-get update Get:1 http://snapshot.debian.org lenny Release.gpg [189 B] Hit http://snapshot.debian.org lenny Release Ign http://snapshot.debian.org lenny Release Hit http://snapshot.debian.org lenny/main Sources/DiffIndex Hit http://security.debian.org wheezy/updates Release.gpg Hit http://snapshot.debian.org lenny/main i386 Packages/DiffIndex Hit http://ftp.us.debian.org wheezy Release.gpg Hit http://security.debian.org wheezy/updates Release Ign http://snapshot.debian.org lenny/main Translation-en_US Hit http://ftp.us.debian.org wheezy Release Hit http://security.debian.org wheezy/updates/main i386 Packages Ign http://snapshot.debian.org lenny/main Translation-en Hit http://ftp.us.debian.org wheezy/main i386 Packages Hit http://security.debian.org wheezy/updates/main Translation-en Hit http://ftp.us.debian.org wheezy/main Translation-en Fetched 189 B in 0s (275 B/s) Reading package lists... Done W: GPG error: http://snapshot.debian.org lenny Release: The following signatures were invalid: KEYEXPIRED 1246455239 Any help?
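
    The key fetched above has genuinely expired (KEYEXPIRED is not a missing-key problem), so re-importing it does not help. A hedged workaround for one-off installs from snapshot.debian.org - it skips signature and freshness checks for that archive, so only use it if you accept that - looks roughly like this:

        # refresh the lists while ignoring expired Valid-Until data, which snapshot archives often have
        apt-get -o Acquire::Check-Valid-Until=false update

        # install from the unauthenticated lenny snapshot (the package name below is only a placeholder)
        apt-get --allow-unauthenticated install some-php4-package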

    Read the article

  • Remote Scripted Installation of Sun/Oracle JRE

    - by chrisbunney
    I'm attempting to automate the installation of a Debian server (debian 6.0 squeeze 64bit). Part of the installation requires the Sun JRE package to be installed. This package has a licence agreement, which has to be accepted. I have a script which uses the following lines to accept and install the JRE: echo "sun-java6-bin shared/accepted-sun-dlj-v1-1 boolean true" | debconf-set-selections apt-get install -y sun-java6-jre This works fine when executing the script locally. However, I need to execute the script remotely using the ssh command, e.g.: ssh -i keyFile root@hostname './myScript' This doesn't work. In particular, it fails on apt-get install -y sun-java6-jre. It would seem that in spite of me setting the licence agreement to accepted, when run remotely in this manner it is ignored. Despite setting the value to true, I still get prompted to manually accept the agreement when I run this command: ssh -i keyFile root@hostname 'apt-get install -y sun-java6-jre' I suspect it is something to do with environment that is taken care of when running a proper terminal session, but have no idea what to try next to fix it. So, what do I have to do to get this command (and hence my deployment script) to run correctly when executing it remotely? Or is there an alternative way that allows me to install the JRE remotely by another means? Edit 0: I have compared the output of env when executed remotely via ssh and when executed via a local terminal session. The only difference between the outputs is that the local terminal session has the additional value TERM=xterm.
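
    One likely culprit is that debconf behaves differently under a bare ssh command: with no terminal and a minimal environment it can fall back to prompting instead of honouring the preseeded answer. A sketch of forcing the noninteractive frontend and preseeding in the same remote invocation (the second preseed line is an assumption, in case the licence question is owned by a package other than sun-java6-bin):

        ssh -i keyFile root@hostname '
          export DEBIAN_FRONTEND=noninteractive
          echo "sun-java6-bin shared/accepted-sun-dlj-v1-1 boolean true" | debconf-set-selections
          echo "sun-java6-jre shared/accepted-sun-dlj-v1-1 boolean true" | debconf-set-selections
          apt-get install -y sun-java6-jre
        '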

    Read the article

  • How can I manually install pecl_http on Ubuntu 9.10?

    - by Richard
    This is essentially a repost of http://stackoverflow.com/questions/4159369/ubuntu-9-04-pecl-extension-downloads-but-does-not-install. Hoping maybe someone can help me here. I've done this:

        sudo apt-get install php-pear
        sudo apt-get install php5-dev
        sudo apt-get install libcurl3-openssl-dev

    which installs fine. However, the next step:

        sudo pecl install pecl_http

    doesn't install the extension, but merely downloads it. There are no error messages. So I have unpacked it and built it myself per http://php.net/manual/en/install.pecl.phpize.php, essentially:

        cd pecl_http
        phpize
        ./configure
        make
        make install

    I also ran make test to check that all was OK - and it failed one test: HttpRequest, which is kind of fundamental to this package. And indeed this doesn't work:

        $r = new HttpRequest('http://www.google.com');
        $r->send;
        echo $r->getResponseCode();

    No request is sent, the response code is zero, but no errors either. How can I get this damn thing installed? Is this a bug? Am I doing something wrong? Any alternatives, workarounds? Help appreciated. Thanks
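
    Worth ruling out after a manual phpize build: make install only copies http.so into PHP's extension directory, it does not enable it, so neither the CLI nor the web SAPI will load it on its own. A sketch, with the conf.d path an assumption for 9.10's PHP packaging:

        # confirm where make install put the extension
        ls "$(php -r 'echo ini_get("extension_dir");')"/http.so

        # load it (path is an assumption; adjust for your SAPI)
        echo "extension=http.so" | sudo tee /etc/php5/conf.d/http.ini
        sudo /etc/init.d/apache2 restart     # or restart whatever serves your PHP
        php -m | grep -i http                # should now list the extension

    Separately, in the test snippet $r->send; reads send as a property and never calls the method; $r->send(); is what actually issues the request, which by itself would explain the zero response code.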

    Read the article

  • ubuntu: Installed php-mcrypt but it doesn't show up in phpinfo()

    - by jules
    A web app I'm trying to install on my ubuntu 10.04 LTS requires mcrypt, and is generating this error: Fatal error: Call to undefined function mcrypt_module_open(). I know this is the same question as this one: Installed php-mcrypt but it doesn't show up in phpinfo(), but I tried several things, none of which worked, and have additional questions. I would comment on the original thread but don't have enough reputation to do so; forgive me for the duplicate question. My versions of php and mcrypt are (both installed via apt-get): php: 5.3.2-1ubuntu4.10 mcrypt: 5.3.2-0ubuntu Doing a php -m shows that the mcrypt module is installed. I installed mcrypt and php5-mcrypt via apt-get. Also, I'm using nginx as my web server. I have tried reinstalling mcrypt and restarting nginx, but still can't get mcrypt to show up on phpinfo() and calls to mcrypt are still broken. Here is some more info: $ php -i | grep "mcrypt" /etc/php5/cli/conf.d/mcrypt.ini, mcrypt mcrypt support => enabled mcrypt.algorithms_dir => no value => no value mcrypt.modes_dir => no value => no value I also checked that mcrypt is on in /etc/php5/cli/conf.d/mcrypt.ini and /etc/php5/cgi/conf.d/mcrypt.ini. Lastly, I'm using fastCGI with nginx. I googled around and saw suggestions to restart php5-fpm. I couldn't find php5-fpm in apt-get, I'm not sure if I still need php5-fpm since I already have fastCGI. Is there anything else I'm missing?
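
    Since php -m already lists mcrypt, the CLI is fine; the usual gap on an nginx setup is that the FastCGI PHP process serving the site was started before the module was installed and has never been recycled (restarting nginx alone does not restart it). A sketch of checking and recycling that process - the service names are assumptions, since 10.04 guides spawn PHP for nginx in several different ways:

        # which PHP process is actually answering FastCGI requests?
        ps aux | grep -E 'php5?-(cgi|fpm)' | grep -v grep

        # if php5-cgi is spawned by an init script (common in 10.04 nginx guides; the script name is an assumption)
        sudo /etc/init.d/php-fastcgi restart

        # then check the CGI SAPI rather than the CLI
        php-cgi -m | grep -i mcrypt

    As for php5-fpm, it is not in the stock 10.04 archive (it arrived in later releases), which would explain why apt-get cannot find it; with php5-cgi behind nginx you do not need it.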

    Read the article

  • Unable to get prosody running on Ubuntu 10.04 (lua issues)

    - by user90374
    All this is performed on Ubuntu 10.04.4 LTS Server. I installed Lua 5.1.4 following this procedure - http://ubuntuforums.org/showthread.php?t=1874860 - and installed prosody with the following command (after downloading the package):

        sudo dpkg -i prosody_0.8.2-1_i386.deb

    After installation, I get the error below. I have tried to use luarocks and sudo apt-get install to fix these, as suggested, but it still keeps showing me these errors.

        Selecting previously deselected package prosody.
        (Reading database ... 59416 files and directories currently installed.)
        Unpacking prosody (from prosody_0.8.2-1_i386.deb) ...
        Setting up prosody (0.8.2-1) ...
         * Starting Prosody XMPP Server prosody
        **************
        Prosody was unable to find luaexpat
        This package can be obtained in the following ways:
            Source: www.keplerproject.org/luaexpat/
            Debian/Ubuntu: sudo apt-get install liblua5.1-expat0
            luarocks: luarocks install luaexpat
        luaexpat is required for Prosody to run, so we will now exit.
        More help can be found on our website, at prosody.im/doc/depends
        ************
        Prosody was unable to find luasocket
        This package can be obtained in the following ways:
            Source: www.tecgraf.puc-rio.br/~diego/professional/luasocket/
            Debian/Ubuntu: sudo apt-get install liblua5.1-socket2
            luarocks: luarocks install luasocket
        luasocket is required for Prosody to run, so we will now exit.
        More help can be found on our website, at prosody.im/doc/depends
        ************
        Prosody was unable to find LuaSec
        This package can be obtained in the following ways:
            Source: www.inf.puc-rio.br/~brunoos/luasec/
            Debian/Ubuntu: prosody.im/download/start#debian_and_ubuntu
            luarocks: luarocks install luasec
        SSL/TLS support will not be available
        More help can be found on our website, at prosody.im/doc/depends
        [fail]
        invoke-rc.d: initscript prosody, action "start" failed.
        dpkg: error processing prosody (--install):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for man-db ...
        Processing triggers for ureadahead ...
        Errors were encountered while processing:
         prosody

    Thanks a lot for your patience and answers.
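
    The dpkg error is only the post-installation start script failing: prosody itself unpacked fine, but the Lua libraries named in the output are missing. A sketch of installing them from the archive and letting dpkg finish the interrupted configuration (the first two package names are taken verbatim from the error message; the LuaSec one is an assumption):

        sudo apt-get install liblua5.1-expat0 liblua5.1-socket2   # luaexpat and luasocket, as the error suggests
        sudo apt-get install liblua5.1-sec1                       # LuaSec for TLS support; package name is an assumption
        sudo dpkg --configure -a                                  # re-runs the failed prosody post-install step
        sudo /etc/init.d/prosody restart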

    Read the article

  • Nginx and automatic updates

    - by Desmond Hume
    I'm on Ubuntu 12.04.1 with unattended-upgrades configured for automatic security updates, and I installed Nginx by first adding deb http://nginx.org/packages/ubuntu/ lucid nginx deb-src http://nginx.org/packages/ubuntu/ lucid nginx to /etc/apt/sources.list file, just as was suggested by the official wiki, and then by sudo apt-get update sudo apt-get install nginx which installed Nginx with all the standard modules. But now I think I could make good use of one or two of the Nginx optional modules, like the gzip precompression module or some security-related one. So far, I see two ways of adding an optional module to Nginx, one is compiling and installing from the source code and the other is described in this article. So, which of the ways should I choose so that automatic updates still run for and apply to Nginx and its optional modules? Or should I create a cron job with a command/script specific for Nginx instead of using unattended-upgrades utility? Can I choose between volume updates and security-only updates to be automatically applied to the standard and optional modules? And finally, is there a possibility to automatically update Nginx's modules on the fly (without any connections having been dropped), like the documentation suggests it's possible with sudo kill -USR2 $( cat /run/nginx.pid ) P.S. Actually I'm not certain if unattended-upgrades utility would automatically update the standard modules in the first place, not enough time has passed since Nginx was installed to say for sure.
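
    On the unattended-upgrades side, whether a package gets updated automatically depends on the origin of the repository it came from, not on how it was first installed, so the nginx.org repository has to be allowed explicitly. A sketch of what that looks like; the origin string is an assumption and the exact directive syntax varies between unattended-upgrades versions, so check it against apt-cache policy and the commented examples already in the file:

        # find the origin ("o=") and suite ("a=") labels apt records for the nginx.org repo
        apt-cache policy nginx

        # /etc/apt/apt.conf.d/50unattended-upgrades (excerpt) -- allow that origin as well
        Unattended-Upgrade::Allowed-Origins {
                "Ubuntu precise-security";
                "nginx lucid";          // assumption: use whatever origin/suite apt-cache policy reports
        };

    Anything compiled from source falls outside dpkg and therefore outside unattended-upgrades entirely, which is worth weighing when choosing between the two ways of adding an optional module.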

    Read the article

  • 'which' isn't working on Linode servers (Ubuntu 10.04)

    - by chrisjlee
    Currently trying to configure a linode server running on ubuntu 10.04. I utilized a stackscript (Default drupal profile) which seemed to run successfully. The log indicate so as well. Then ssh'd into the server (as root) to try to configure php. When i run a which php, which php5 they both return nothing. A which python returns something though. I know where the default path to php but i usually just like to use it as confirmation that php exists. Do i have to modify some configurations to enable which to work? Also tab completion doesn't seem to work for when i apt-get install? Update: Thanks for the suggestions guys. I've ran a couple commands and no luck either: [ root@ ~ ] $ dpkg -l |grep php [ root@ ~ ] $ apt-get install php5-cli Reading package lists... Done Building dependency tree Reading state information... Done Package php5-cli is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package php5-cli has no installation candidate Then i tried installing php and php cli: [ root@ ~ ] $ sudo apt-get install php5 php5-cli sudo: unable to resolve host xxxxxxx Reading package lists... Done Building dependency tree Reading state information... Done Package php5 is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package php5 has no installation candidate
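
    Running which php and getting nothing simply means no php binary is on the PATH yet, and the "no installation candidate" messages usually mean the image's package lists are stale or a needed component is not enabled. A sketch of the usual first checks (assuming a stock 10.04 sources.list; a Drupal stackscript typically pulls PHP in as Apache's mod_php, which does not provide a /usr/bin/php):

        # refresh the package lists first; "no installation candidate" is typical before this has been run
        sudo apt-get update

        # php5 and php5-cli live in the main repository on 10.04, so they should then resolve
        sudo apt-get install php5 php5-cli

        # if they still cannot be found, check which repositories the image actually has enabled
        grep -v '^#' /etc/apt/sources.list

    Tab completion of package names is provided by the bash-completion package, which minimal images often leave out; sudo apt-get install bash-completion and a fresh shell usually brings it back.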

    Read the article

  • How can I upgrade from Ubuntu Intrepid Ibex 8.10 to Jaunty 9.04 when old-releases no longer has the necessary packages?

    - by tommy chheng
    I changed my sources.list to:

        deb http://old-releases.ubuntu.com/ubuntu/ intrepid main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ intrepid-updates main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ intrepid-security main restricted universe multiverse

    I tried installing update-manager-core with sudo apt-get install update-manager-core, but I get this error:

        1 upgraded, 3 newly installed, 0 to remove and 40 not upgraded.
        Need to get 2506kB/2555kB of archives.
        After this operation, 4346kB of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        Err http://old-releases.ubuntu.com intrepid-updates/main update-manager-core 1:0.93.34  404 Not Found
        Err http://old-releases.ubuntu.com intrepid-security/main dpkg 1.14.20ubuntu6.3  404 Not Found
        Failed to fetch http://old-releases.ubuntu.com/ubuntu/pool/main/d/dpkg/dpkg_1.14.20ubuntu6.3_amd64.deb 404 Not Found
        Failed to fetch http://old-releases.ubuntu.com/ubuntu/pool/main/u/update-manager/update-manager-core_0.93.34_amd64.deb 404 Not Found
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Running apt-get update or --fix-missing returns the same errors. How can I successfully upgrade from Intrepid Ibex to Jaunty 9.04?
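
    Those 404s suggest the archived intrepid indexes and pools are out of step for those particular package versions, so the update-manager-core route can dead-end. One workaround commonly suggested for end-of-life to end-of-life hops (a sketch, with the usual caveat to take a backup first) is to point the sources directly at jaunty on old-releases and let apt pull the whole release forward:

        # /etc/apt/sources.list -- replace every "intrepid" with "jaunty", still on old-releases
        sudo sed -i 's/intrepid/jaunty/g' /etc/apt/sources.list

        sudo apt-get update
        sudo apt-get dist-upgrade      # pulls the whole distribution forward to 9.04
        sudo reboot

    An apt-driven dist-upgrade skips the extra checks update-manager performs, so it is worth having console access handy in case the reboot needs attention.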

    Read the article

  • ubuntu package installation

    - by mohit
    What is the difference between "apt-get install apache2" and "aptitude install apache2" in Ubuntu, and where does each of them install the package? With apt-get install ..., I guess it installs in /etc/. What about aptitude install ...?
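
    Both commands drive the same underlying dpkg database and install the identical apache2 package from the same repositories, so the files end up in the same places (the Apache binaries under /usr/sbin and /usr/lib/apache2, configuration under /etc/apache2, and so on); aptitude is simply an alternative front end with its own dependency resolver and logging. One way to see exactly where a package's files went, whichever tool installed it:

        dpkg -L apache2                    # list every file the installed apache2 package placed on disk
        dpkg -L apache2 | grep '^/etc'     # just its configuration files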

    Read the article

  • How to update RubyGems in Ubuntu 10.04?

    - by fayer
    when i run "gem -v" i get 1.3.5 i want to upgrade that version cause the bundler gem has to run on a higher version. i have run "apt-get install rubygems" but the version when typing "gem -v" is still 1.3.5. i cant use "gem update --system" on ubuntu, it tells me to update gem using apt-get. please help

    Read the article

  • Installing twisted.mail.smtp

    - by user3506985
    I am using Ubuntu 14.04 and trying to install twisted.mail.smtp using the following commands:

        sudo add-apt-repository ppa:jesstess/twisted-12.1-testing
        sudo apt-get update

    There are no errors in the installation, but when I run the following import:

        from twisted.mail.smtp import ESMTPSenderFactory

    I get the following error:

        ImportError: No module named mail.smtp

    Please help me out.
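
    Twisted's sub-projects are packaged separately on Ubuntu, and twisted.mail is not pulled in automatically just by adding the PPA; a sketch of installing it, where the exact package name should be confirmed with apt-cache search first:

        sudo apt-get update
        apt-cache search twisted | grep -i mail       # confirm the exact package name
        sudo apt-get install python-twisted-mail      # provides twisted.mail, including twisted.mail.smtp
        python -c "from twisted.mail.smtp import ESMTPSenderFactory; print('ok')"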

    Read the article

  • NVIDIA graphics driver in Ubuntu 12.04

    - by user924501
    So my overall goal is that I want to be able to code with CUDA enabled applications. However, upon many days of searching, using installation walkthroughs, and reinstalling countless times after driver failure... I'm now here as a last resort. I cannot get Ubuntu 12.04 LTS to install the NVIDIA 295.59 driver for my GeForce GT 540M NVIDIA graphics card. My main system specs is as follows... (I believe having the Intel processor may be the problem) DELL Laptop XPS L502X Intel® Core™ i7-2620M CPU @ 2.70GHz × 4 Intel Integrated Graphics 64 bit NVIDIA GeForce GT 540M Ubuntu 12.04 LTS All other specs are irrelevant unless I forgot something? Methods Tried: aptitude install nvidia-current (all packages) Results: Nothing really happened. Nothing in the additional drivers menu appeared, nor was the NVIDIA X Server settings application allowing access because it thought there was no NVIDIA X Server installed. Downloaded driver from nvidia.com. Set nomodeset in the grub boot menu through /boot/grub/grub.cfg Went to console and turned off lightdm. Installed the driver, but it said the pre-install failed? (mean anything?) Started up lightdm again. Results: NVIDIA X Server settings still didn't notice it. Even tried to do nvidia-xconfig multiple times. I also went into the config file to make sure the driver setting said "nvidia". aptitude install nvidia-173 (all packages) Results: Couldn't find the xorg-video-abi-10 virtual package. It was nowhere to be found and the ubuntu forums everywhere had no answers. Lots of people were having this problem. This is easily done in windows, simply download the driver and debug in visual studio with no problems at all. I'd like clear step-by-step instructions on how I should go about this. I'm relatively new to linux but I can find my way around pretty well so you aren't talking to a straight-up beginner. Also, if you think another thread may have the answer please post because I was having a hard time looking for my specific type of problem. TL;DR I want to have access to my GPU so I can code with CUDA while in Ubuntu 12.04 on my 64 bit laptop that also has Intel integrated graphics on the processor. Solution: sudo apt-add-repository ppa:ubuntu-x-swat/x-updates && sudo apt-get update && sudo apt-get upgrade && sudo apt-get install nvidia-current

    Read the article

  • Proliant Support Pack and Ubuntu on older HP Proliants

    - by snl
    We have a couple of oldish HP Proliant servers -- one DL385 G1 and one DL360 G5, to be exact -- that we'd like to upgrade from CentOS 5 to Ubuntu LucidLynx. The problem is that HP doesn't offer Ubuntu Proliant Support Packs for these particular models. Would you upgrade regardless, skipping the PSPs altogether? Are there alternative hardware monitoring tools that would match the functionality of the PSPs? Is there a hack to install the PSP RPMs on an Ubuntu system?

    Read the article

  • Can an SATA hard drive image be restored onto an SSD?

    - by Bryan Parker
    I'm currently using an SATA hard drive on my primary dev machine, but planning to upgrade to an SSD at some point soon. I use TrueImage on a regular basis to make backups, and to upgrade my harddrive without reinstalling everything. Will I be able to restore and boot onto an SSD? Will there be a performance hit or other issues to watch out for?

    Read the article

  • How to resolve a "driver failure" error in the Cisco VPN client connecting from a Windows 7 client

    - by JosephStyons
    I have recently upgraded my laptop from Windows Vista SP1 to Windows 7 Professional. After the upgrade, if I try to use the Cisco VPN client to connect to a network, I get this message: Secure VPN Connection terminated locally by the Client. Reason 440: Driver Failure. Prior to the upgrade, I was able to connect with no problems. The version of the client I am using is 5.0.05.0290.

    Read the article

  • Performance comparison between LDAP servers

    - by pablo
    Has anyone ever compared different LDAP servers? I am currently planning to upgrade ADAM to another server, and I'd like to know how do they perform. Currently the options that I am researching are: Active Directory (LDS) OpenLDAP Red Hat Directory Server OpenDS OpenDJ edit: I am looking for any kind of measurement that has been done. Anything as basic as reads/writes per second. I am looking for any quantitative measures to support choosing any of the servers above for my upgrade.

    Read the article

  • PCI-e v2.0 video card in a PCI-e v1.0 slot? [closed]

    - by Kevin
    Possible Duplicate: Will a PCI-E V2.0 Graphics Card work with a PCI-E V1.0 Motherboard? Will a PCI-e 2.0 card work in a PCI-e 1.0 motherboard? Currently I've got an older ASUS M2N-E motherboard, and I would like get a newer video card (GTX 280 or similar) which will last me a while after I upgrade my motherboard at some point in the future. I currently have a 6600 GT which has served me well for a long time, but it's time to upgrade.

    Read the article
