Search Results



  • Shared Web Host Fails On RoundCube Install

    - by johny why
    Something in RoundCube's .htaccess causes a non-specific error on the RoundCube install. Here's my info.php: http://tr.im/yn5K I have no access to the Apache root, since it's shared hosting. If I clear the .htaccess the error goes away, but RoundCube needs the .htaccess directives to function.
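
    One way to narrow down which directive the host is rejecting, since the error is non-specific, is to re-enable the file a chunk at a time (a minimal sketch, assuming shell or SFTP access to the web root; the line counts are arbitrary):

        cp .htaccess .htaccess.full              # keep RoundCube's complete version
        head -n 10 .htaccess.full > .htaccess    # re-enable only the first 10 lines
        # Reload the site. If it still works, include more of the file and repeat:
        head -n 20 .htaccess.full > .htaccess
        # The first chunk that brings the error back contains the directive the
        # shared host disallows (often php_value/php_flag or Options overrides).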

    Read the article

  • What's the strategy to implement a "knowledge base" in my company?

    - by Oscar Reyes
    In my current work we think we could benefit from having a knowledge base, so the next time someone has a question, problem, etc., that base can be consulted and an answer will show up. It would also reduce the risk of people leaving the company with their knowledge, which would force us to start all over again. My question is: what strategy can we follow to implement/buy/get/build this knowledge base? Is there software ready for this? Would it be better to build something ourselves (we have some programmers)? This is a small company (< 30) and the base should be accessible from outside the office (when the employees are with the customer, etc.), so I guess a web app is in order.

    Read the article

  • beanstalk using php-git on windows client

    - by ntidote
    I am trying to set up Elastic Beanstalk for PHP using Git. I am using a Windows client machine and am done with the prerequisite installations and credentials setup. I am following the link http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/create_deploy_PHP.sdlc.html The following step does not work out (I use Git Bash for Git-related commands): "From your Git repository directory, type the following command: git aws.config". This gives the error: git: 'aws.config' is not a git command. Please suggest how to deal with the issue.
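
    The aws.* subcommands are not built into Git; they are aliases added to the repository by the DevTools setup script that ships inside the Elastic Beanstalk command line package referenced by that guide. A rough sketch of the usually missing step (the unzip location below is illustrative, and the package layout may differ between versions):

        cd /c/projects/my-app                            # your Git repository (example path)
        # Register the git aws.* aliases for this repository; the script comes from
        # the unzipped Elastic Beanstalk CLI package (path below is an assumption).
        /c/aws-devtools/AWSDevTools/Linux/AWSDevTools-RepositorySetup.sh
        git aws.config                                   # should now prompt for access keys, region, etc.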

    Read the article

  • Looking for the best ec2 setup for 3 sites totaling in 1.5 mil in traffic monthly

    - by john h.
    I am looking to consolidate our current AWS setup of 2 large Ubuntu EC2 servers and 2 large RDS servers for our 3 websites, which have a total of about 1.5 million hits a month and are increasing every month, with the majority of traffic (1 mil) going to one forum site in the group and the rest of the traffic to an ecommerce site and a small WordPress site. So here is my question/thought: Would it be better for us to combine the two large EC2 servers into just one, and the same with the 2 RDS servers, so we run all three sites off one large EC2 and one RDS? -or- Should we set up maybe 2-3 smaller EC2 servers, load balanced, and a single RDS? -or- Something completely different? One concern is that if one site crashes it takes the others with it. It happened in the past, but I am pretty sure it's because of the forum software and not the server setup. -john

    Read the article

  • How do you transfer AWS RDS snap shot to a different AWS account

    - by Webmonger
    Hi, I have an RDS database that I need to transfer a snapshot of to another AWS account. I understand there are issues being able to do this between availability zones, so I'm really unsure if this is possible. The RDS instance is MySQL. If it's not possible to transfer the snapshot, could you please explain how to transfer the data from one RDS instance to another without downloading any of the contents (the DB is over 200GB)? Thanks in advance.
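
    If moving the snapshot between accounts turns out not to be possible, one common workaround is to stream a logical dump from one instance straight into the other so nothing is downloaded locally (a rough sketch, assuming MySQL on both sides, an EC2 instance in the same region that both security groups allow in on port 3306, and placeholder endpoints, database name and credentials):

        # Run from an EC2 instance in the same region so the 200GB never leaves AWS.
        mysqldump --single-transaction --routines --triggers \
            -h source-db.xxxx.region.rds.amazonaws.com -u srcuser -p'SrcPass' mydb \
          | mysql -h target-db.yyyy.region.rds.amazonaws.com -u dstuser -p'DstPass' mydb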

    Read the article

  • Fn + Right Arrow on MacBook Pro does not work as END Key for SuperUser web site

    - by George2
    Hello everyone, I am using a MacBook Pro running Mac OS X 10.5. I am new to this environment and previously worked on Windows. Fn + Right Arrow on the Mac normally works like the END key on Windows -- it moves the cursor to the end of the line -- and I have confirmed it works in most applications, like Word for Mac. But Fn + Right Arrow does not work in the question input box on Super User. I am wondering how to solve this issue, and why it happens. Does anyone else have the same issue? Thanks in advance, George

    Read the article

  • How to disable Mac OS X from using swap when there still is "Inactive" memory?

    - by Motin
    A common phenomenon in my day-to-day usage of OS X (and, according to various posts throughout the internet, several other people's) is that the system seems to become slow whenever there is no more "Free" memory available. Supposedly this is due to swapping, since heavy disk activity is apparent and vm_stat reports many pageouts. (Correct me if I'm wrong.) However, the amount of "Inactive" RAM is typically around 12.5%-25% of all available memory (^1.) when swapping starts/occurs/ends.

    According to http://support.apple.com/kb/ht1342 : "Inactive memory: This information in memory is not actively being used, but was recently used. For example, if you've been using Mail and then quit it, the RAM that Mail was using is marked as Inactive memory. This Inactive memory is available for use by another application, just like Free memory. However, if you open Mail before its Inactive memory is used by a different application, Mail will open quicker because its Inactive memory is converted to Active memory, instead of loading Mail from the slower hard disk."

    And according to http://developer.apple.com/library/mac/#documentation/Performance/Conceptual/ManagingMemory/Articles/AboutMemory.html : "The inactive list contains pages that are currently resident in physical memory but have not been accessed recently. These pages contain valid data but may be released from memory at any time."

    So, basically: when a program has quit, its memory becomes marked as Inactive and should be claimable at any time. Still, OS X will prefer to start swapping memory out to the swap file instead of just claiming this memory whenever the "Free" memory gets too low. Why? What is the advantage of this behavior over, say, instantly releasing Inactive memory and not even touching the swap file? Some sources (^2.) indicate that OS X would page the "Inactive" memory out to swap before releasing it, but that doesn't make sense if the memory may be released from memory at any time. Swapping is expensive, releasing is cheap, right? Can this behavior be changed using some preference or known hack? (Preferably one that doesn't include disabling swap/dynamic_pager altogether and restarting...) I do appreciate the purge command, as well as the concept of repairing disk permissions to force some Free memory, but those are ways to painfully force more Free memory rather than actually fixing the swap/release decision logic...

    Btw, a similar question was asked here: http://forums.macnn.com/90/mac-os-x/434650/why-does-os-x-swap-when/ and here: http://hintsforums.macworld.com/showthread.php?t=87688 but even though the OPs re-asked the core question, none of the replies addresses an answer to it...

    ^1. UPDATE 17-mar-2012: Since I first posted this question, I have gone from 4GB to 8GB of installed RAM, and the problem remains. The amount of "Inactive" RAM was 0.5GB-1.0GB before and is now typically around 1.0-2.0GB when swapping starts/occurs/ends, i.e. it seems that around 12.5%-25% of the RAM is preserved as Inactive by the OS X kernel logic.

    ^2. For instance http://apple.stackexchange.com/questions/4288/what-does-it-mean-if-i-have-lots-of-inactive-memory-at-the-end-of-a-work-day : "Once all your memory is used (free memory is 0), the OS will write out inactive memory to the swapfile to make more room in active memory."

    UPDATE 17-mar-2012: Here is a round-up of the methods that have been suggested to help so far:

    The purge command: "Used to approximate initial boot conditions with a cold disk buffer cache for performance analysis. It does not affect anonymous memory that has been allocated through malloc, vm_allocate, etc." This is useful to prevent OS X from swapping out the disk cache (which is ridiculous that OS X actually does in the first place), but with the downside that the disk cache is released, meaning that if the disk cache was not about to be swapped out, one would simply end up with a cold disk buffer cache, probably affecting performance negatively.

    The FreeMemory app and/or repairing disk permissions to force some Free memory: Doesn't help release any memory, it only moves some gigabytes of memory contents from RAM to the HD. In the end, this causes lots of swap-ins when I attempt to use the applications that were open while freeing memory, as a lot of their VM is now on swap.

    Speeding up swap allocation using dynamicpagerwrapper: Seems a good thing to do in order to speed up swap usage, but it does not address the problem of OS X swapping in the first place while there is still Inactive memory.

    Disabling swap by disabling dynamicpager and restarting: This will force OS X not to use swap, at the price of the system hanging when all memory is used. Not a viable alternative...

    Disabling swap using a hacked dynamicpager: Similar to disabling dynamicpager above; some excerpts from the comments to the blog post indicate that this is not a viable solution: "The Inactive Memory is high as usual", "when your system is running out of memory, the whole os hangs...", "if you consume the whole amount of memory of the mac, the machine will likely hang".

    To sum up, I am still unaware of a way to keep Mac OS X from using swap when there still is "Inactive" memory. If it isn't possible, maybe at least there is an explanation somewhere of why OS X prefers to swap out memory that may be released from memory at any time?
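
    For watching the behaviour described above while it happens, the standard OS X command line tools are enough (a small sketch; the 5-second interval is arbitrary):

        # Print free/inactive page counts and cumulative pageouts every 5 seconds.
        vm_stat 5
        # Show how much swap has been allocated and how much is actually in use.
        sysctl vm.swapusage
        # Drop only the disk buffer cache (does not touch anonymous/app memory).
        purge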

    Read the article

  • Openfire on EC2 with Jingle

    - by Bjorn Roche
    I would like to run Openfire (or another XMPP server) on EC2. At the moment this is just for testing, so easy setup and configuration are important, as is low cost. At some point, however, if things go well, it will be important to scale this. Ideally it would be nice not to have to switch software when the scaling happens, but if a switch needs to happen later, it certainly can. My requirements are: basic XMPP services, including MUC and pubsub; logins controlled from an external API (preferably, when a user attempts to connect, the XMPP server checks with the API to see if their username and password are correct, but I can also have the API keep the XMPP server up to date on new users, deleted users, password changes and so on -- I see Openfire has a "user service" API, not ideal, but it looks workable); and Jingle, including relay and STUN. It's not at all clear to me if the Jingle Nodes plugin takes care of this. I'm a bit confused about what's required to set this up, and I'd rather know in advance than be confused along the way :). E.g. it seems like STUN servers require more than one IP address. Can Openfire do all this for me, including STUN and media relay, on a single machine? Is this hard to configure on EC2 with Openfire? What are the basic steps? Would this be easier with something else like, say, Tigase? What about the database? Should I use Amazon's database service, or run a DB on the same machine? Would the server be compatible with a service like http://www.siteuptime.com/ ? Thanks!

    Read the article

  • How do I resolve BSOD: PAGE_FAULT_IN_NONPAGED_AREA?

    - by Burnzy
    I have been troubleshooting this for a few days and cannot fix it. Computer specifications: Mobo: ASUS Sabertooth X58 LGA 1366 Intel X58 SATA 6Gb/s USB 3.0 ATX Intel Motherboard. CPU: Intel(R) Core(TM) i7 CPU 920 (Bloomfield) @ 2.67 (no OC). RAM: 6144MB. GPU: 2x NVIDIA GeForce GTS 250 1GB in SLI (SLI is not enabled at the moment anyway). Drives: OCZ RevoDrive OCZSSDPX-1RVD0120 PCI-E x4 120GB PCI Express MLC Internal SSD [RAID-0] (I know this could potentially cause trouble, but I had the BSOD before using this drive); Seagate Barracuda 7200.11 ST31500341AS 1.5TB 7200 RPM 32MB Cache SATA 3.0Gb/s 3.5" Internal Hard Drive - Bare Drive. Click here for a log of a crash I just had. Click here for a log of a crash I had 30 minutes later; note that it's another driver.

    Some info. Occurrence: it seems pretty random so far; I haven't noticed any kind of pattern. Things I tried: Windows Memory Diagnostic (went smoothly at 1066MHz); as I said, it was still happening on my HDD, so when I bought the RevoDrive I installed a new OS on there and still got the error (I believe it happened when I had no drivers installed at that point, but I'm not 100% sure); changed the following registry value to 1 (true): HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\ClearPageFileAtShutdown; tried to lower the RAM clock even further; made sure the RAM timing was set to the manufacturer's recommendation; verified that the motherboard is in good physical condition (yes, and it's brand new).

    There is one thing to note: when I got the new motherboard, I installed the new drivers WITHOUT formatting, and then I removed the motherboard drivers that I could remove from the Control Panel (pretty much the first things that had been installed). Could this cause an issue even ON THE OTHER drive (the RevoDrive)? Hopefully someone can help me; I am getting tired of this, spending so much money and not being able to get this to work correctly. If you need any other information, let me know. Thank you!

    Read the article

  • Using hdparm for better performance on Web Servers

    - by Rishav
    I just heard about using hdparm to optimize the hard disk performance of a server. Is this common practice? What file systems do you use? I generally deploy on the second-to-last release of Ubuntu for stability reasons; do you use some other filesystems, or distributed file systems, from the get-go? Do the hdparm settings change for different file systems? I haven't tried this yet, so how much difference do changes like this make?
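
    For a before/after comparison, the usual starting point looks roughly like the sketch below (/dev/sda is a placeholder for the actual device; enabling the write cache trades safety on power loss for speed, so it is only an example of the kind of tweak hdparm applies):

        # Inspect what the drive advertises and which features are enabled.
        sudo hdparm -I /dev/sda
        # Benchmark cached reads and raw device reads (run a few times on an idle box).
        sudo hdparm -Tt /dev/sda
        # Example tweak: turn on the drive's write cache, then re-run the benchmark.
        sudo hdparm -W1 /dev/sda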

    Read the article

  • Why can I resolve this hostname but not a cname to this hostname?

    - by Joe Hopfgartner
    If I run dig against the hostname, I get the corresponding CNAME, but I also get an NXDOMAIN error (non-existent domain). If I run dig against the CNAME I got, I can resolve it to an IP address successfully. It is reproducible. On the system I am currently on it is always the case, on other systems it sometimes works and sometimes not, and on other systems it seems to work all the time. If I run dig using a nameserver I specify (for example Google's public nameserver) I can successfully resolve the hostname. I would just blame the local system, but it seems I am not the only one having problems. The 2nd domain (unrestrict.me) is hosted on Amazon Route 53 nameservers, the 1st one on another DNS server which has proven to be fully functional and reliable over the years. I once switched the other domain to Amazon DNS as well; everything seemed to work, and various DNS health check tests reported fine, however I received a lot of support tickets that DNS resolution would not work. Is Amazon just "bad", or am I doing something wrong? I did not tamper with the domain in any way on the local system (in case of caching or making a custom DNS view or whatever...).

        joe@joe:~$ dig scorpion.premiumize.me
        ; <<>> DiG 9.8.1-P1 <<>> scorpion.premiumize.me
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 10222
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;scorpion.premiumize.me.  IN  A
        ;; ANSWER SECTION:
        scorpion.premiumize.me.  180  IN  CNAME  alpha.nue.scorpion.unrestrict.me.
        ;; Query time: 28 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Mon Jun 18 10:28:39 2012
        ;; MSG SIZE  rcvd: 84

        joe@joe:~$ dig alpha.nue.scorpion.unrestrict.me
        ; <<>> DiG 9.8.1-P1 <<>> alpha.nue.scorpion.unrestrict.me
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 25381
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;alpha.nue.scorpion.unrestrict.me.  IN  A
        ;; ANSWER SECTION:
        alpha.nue.scorpion.unrestrict.me.  300  IN  A  78.46.25.130
        ;; Query time: 48 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Mon Jun 18 10:28:47 2012
        ;; MSG SIZE  rcvd: 66
        joe@joe:~$
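
    To see where the NXDOMAIN is actually coming from, it helps to bypass the local resolver and walk the delegation (a quick sketch; 8.8.8.8 is just Google's public resolver used as a comparison point):

        # Ask Google's resolver and the local resolver side by side.
        dig @8.8.8.8 scorpion.premiumize.me A
        dig @127.0.0.1 scorpion.premiumize.me A
        # Follow the delegation from the roots down to the authoritative servers;
        # this shows which nameserver returns NXDOMAIN for the CNAME target.
        dig +trace alpha.nue.scorpion.unrestrict.me A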

    Read the article

  • Increase application performance on Amazon AWS

    - by Honus Wagner
    I've got a client with an MVC v1 (.NET) application running on a micro instance. On this instance, I've got .NET, IIS 7.5, and MS SQL Server 2008 running to handle the application. The client has reported that it is taking nearly 10 seconds to process each request. Even loading the initial login page takes about that long, then logging in takes that long, and so on. The currently running instance specs are as follows: 615 MB RAM, Intel Xeon CPU E5430 @ 2.66GHz / 2.78 GHz, 64-bit. Is the memory availability the issue, or is it the processing power? I foresee two options: change to a larger instance, or set up a 2-tier architecture with two micro instances. Which of these will give the application better performance? Thanks in advance.

    Read the article

  • How do I configure IIS 7 (discount asp.net) to point subdomains at application subdirectories?

    - by m4bwav
    I have a DiscountASP.NET account, which uses IIS 7, and I want to configure subdomains to point at specific applications on the site. For instance: 's1.site.com' would run the application at 'site.com/serverone'. I would like the subdomain to be opaque so that users do not have to deal with the /serverone part. There was a similar question on Server Fault, but it involves creating a whole new site for each subdomain, whereas I would rather just create independent websites.

    Read the article

  • Certain SFTP user cannot connect

    - by trobrock
    I have my Ubuntu Server set up so users with the group of sftponly can connect with sftp, but have a shell of /bin/false, and they connect to their home directories. This is working fine with three of the user accounts I have. But I added a new user account today the same way that I added the others and it will not successfully connect. sftp -vvv user@hostname debug1: Next authentication method: password user@hostname's password: debug3: packet_send2: adding 48 (len 73 padlen 7 extra_pad 64) debug2: we sent a password packet, wait for reply debug1: Authentication succeeded (password). debug2: fd 5 setting O_NONBLOCK debug3: fd 6 is O_NONBLOCK debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug1: channel 0: free: client-session, nchannels 1 debug3: channel 0: status: The following connections are open: #0 client-session (t3 r-1 i0/0 o0/0 fd 5/6 cfd -1) debug3: channel 0: close_fds r 5 w 6 e 7 c -1 debug1: fd 0 clearing O_NONBLOCK debug3: fd 1 is not O_NONBLOCK Connection to hostname closed by remote host. Transferred: sent 2176, received 1848 bytes, in 0.0 seconds Bytes per second: sent 127453.3, received 108241.6 debug1: Exit status -1 Connection closed For a successful user: sftp -vvv good_user@hostname debug1: Next authentication method: password good_user@hostname's password: debug3: packet_send2: adding 48 (len 63 padlen 17 extra_pad 64) debug2: we sent a password packet, wait for reply debug1: Authentication succeeded (password). debug2: fd 5 setting O_NONBLOCK debug3: fd 6 is O_NONBLOCK debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug1: Sending subsystem: sftp debug2: channel 0: request subsystem confirm 1 debug2: fd 3 setting TCP_NODELAY debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug2: channel 0: rcvd adjust 2097152 debug2: channel_input_status_confirm: type 99 id 0 debug2: subsystem request accepted on channel 0 debug2: Remote version: 3 debug2: Server supports extension "[email protected]" revision 1 debug2: Server supports extension "[email protected]" revision 2 debug2: Server supports extension "[email protected]" revision 2 debug3: Sent message fd 3 T:16 I:1 debug3: SSH_FXP_REALPATH . -> / sftp> I cannot figure out why one user will work and the other wont, I have restart the ssh service after adding the user. I have even removed the user and added them again to be sure I am adding it correctly.
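
    When one chrooted sftponly account is dropped right after authentication while the others work, the usual suspects are group membership and the ownership/permissions sshd demands on the chroot target (a checklist sketch, assuming a "Match group sftponly" block with ChrootDirectory pointing at the home directory in sshd_config; adjust the usernames and paths):

        # The new account must actually be in the sftponly group.
        id newuser
        # sshd refuses to chroot into a directory that is not owned by root or that
        # is group/world-writable; compare against one of the working users' homes.
        ls -ld /home/newuser /home/gooduser
        sudo chown root:root /home/newuser
        sudo chmod 755 /home/newuser
        # Watch the server-side reason while the user retries the login.
        sudo tail -f /var/log/auth.log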

    Read the article

  • How to get httrack to work with SSL on mac os x? (libssl.so not found)

    - by cwd
    I'm trying to use httrack website copier but the program is running and reporting "no-ssl" (ie: it does not have the capability to copy secure sites). From looking over this thread, it seems that the problem is either when I make & configure the program, or when I run the program, it is not finding the lib-ssl / open-ssl that I have installed. I think it is looking for /var/root/lib/libssl.so.1.0 The user on that forum states that he created a symlink which allowed httrack to find the ssl library in the non-default location. Perhaps that's what I need to do - but where do I create the link from and to? I'm not seeing that I have any libssl.so files installed on my system. Do I need the development package? If so, how do I install that? I used macports to install the current version of openssl that I have. I'm running OS X 10.6. Reserch I have run this command to try and debug: dtruss httrack 2&1 | grep ssl and that outputs this: stat64("libssl.so.1.0\0", 0x7FFF5FBFEE30, 0x7FFF5FBFF470) = -1 Err#2 stat64("/var/root/lib/libssl.so.1.0\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/local/lib/libssl.so.1.0\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/lib/libssl.so.1.0\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("libssl.so.1\0", 0x7FFF5FBFEE30, 0x7FFF5FBFF470) = -1 Err#2 stat64("/var/root/lib/libssl.so.1\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/local/lib/libssl.so.1\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/lib/libssl.so.1\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("libssl.so.1.0.0\0", 0x7FFF5FBFEE30, 0x7FFF5FBFF470) = -1 Err#2 stat64("/var/root/lib/libssl.so.1.0.0\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/local/lib/libssl.so.1.0.0\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/lib/libssl.so.1.0.0\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("libssl.so.0\0", 0x7FFF5FBFEE30, 0x7FFF5FBFF470) = -1 Err#2 stat64("/var/root/lib/libssl.so.0\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/local/lib/libssl.so.0\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/lib/libssl.so.0\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("libssl.so.0.9\0", 0x7FFF5FBFEE30, 0x7FFF5FBFF470) = -1 Err#2 stat64("/var/root/lib/libssl.so.0.9\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/local/lib/libssl.so.0.9\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/lib/libssl.so.0.9\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("libssl.so.0.9.8p\0", 0x7FFF5FBFEE30, 0x7FFF5FBFF470) = -1 Err#2 stat64("/var/root/lib/libssl.so.0.9.8p\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/local/lib/libssl.so.0.9.8p\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/lib/libssl.so.0.9.8p\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("libssl.so.0.9.8o\0", 0x7FFF5FBFEE30, 0x7FFF5FBFF470) = -1 Err#2 stat64("/var/root/lib/libssl.so.0.9.8o\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/local/lib/libssl.so.0.9.8o\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/lib/libssl.so.0.9.8o\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("libssl.so.0.9.8n\0", 0x7FFF5FBFEE30, 0x7FFF5FBFF470) = -1 Err#2 stat64("/var/root/lib/libssl.so.0.9.8n\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/local/lib/libssl.so.0.9.8n\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/lib/libssl.so.0.9.8n\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("libssl.so.0.9.8m\0", 0x7FFF5FBFEE30, 0x7FFF5FBFF470) = -1 
Err#2 stat64("/var/root/lib/libssl.so.0.9.8m\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/local/lib/libssl.so.0.9.8m\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/lib/libssl.so.0.9.8m\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("libssl.so.0.9.8l\0", 0x7FFF5FBFEE30, 0x7FFF5FBFF470) = -1 Err#2 stat64("/var/root/lib/libssl.so.0.9.8l\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/local/lib/libssl.so.0.9.8l\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/lib/libssl.so.0.9.8l\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("libssl.so.0.9.8k\0", 0x7FFF5FBFEE30, 0x7FFF5FBFF470) = -1 Err#2 stat64("/var/root/lib/libssl.so.0.9.8k\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/local/lib/libssl.so.0.9.8k\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/lib/libssl.so.0.9.8k\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("libssl.so.0.9.8j\0", 0x7FFF5FBFEE30, 0x7FFF5FBFF470) = -1 Err#2 stat64("/var/root/lib/libssl.so.0.9.8j\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/local/lib/libssl.so.0.9.8j\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/lib/libssl.so.0.9.8j\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("libssl.so.0.9.8g\0", 0x7FFF5FBFEE30, 0x7FFF5FBFF470) = -1 Err#2 stat64("/var/root/lib/libssl.so.0.9.8g\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/local/lib/libssl.so.0.9.8g\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/lib/libssl.so.0.9.8g\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("libssl.so.0.9.8b\0", 0x7FFF5FBFEE30, 0x7FFF5FBFF470) = -1 Err#2 stat64("/var/root/lib/libssl.so.0.9.8b\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/local/lib/libssl.so.0.9.8b\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/lib/libssl.so.0.9.8b\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("libssl.so.0.9.8\0", 0x7FFF5FBFEE30, 0x7FFF5FBFF470) = -1 Err#2 stat64("/var/root/lib/libssl.so.0.9.8\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/local/lib/libssl.so.0.9.8\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/lib/libssl.so.0.9.8\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("libssl.so.0.9.7\0", 0x7FFF5FBFEE30, 0x7FFF5FBFF470) = -1 Err#2 stat64("/var/root/lib/libssl.so.0.9.7\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/local/lib/libssl.so.0.9.7\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/lib/libssl.so.0.9.7\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("libssl.so.0.9.6\0", 0x7FFF5FBFEE30, 0x7FFF5FBFF470) = -1 Err#2 stat64("/var/root/lib/libssl.so.0.9.6\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/local/lib/libssl.so.0.9.6\0", 0x7FFF5FBFF210, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/lib/libssl.so.0.9.6\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("libssl.so\0", 0x7FFF5FBFEE30, 0x7FFF5FBFF470) = -1 Err#2 stat64("/var/root/lib/libssl.so\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/local/lib/libssl.so\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 stat64("/usr/lib/libssl.so\0", 0x7FFF5FBFF220, 0x7FFF5FBFF470) = -1 Err#2 I have already used mac ports to install open-ssl: port installed The following ports are currently installed: beecrypt @4.2.1_2 (active) cpio @2.10_0 (active) expat @2.0.1_1 (active) flex @2.5.35_0 (active) gettext @0.18.1.1_2 (active) gperf @3.0.4_0 (active) icu @4.6_0 (active) libiconv @1.13.1_0 (active) mysql5 @5.1.53_0 (active) ncurses @5.9_0 (active) ncursesw @5.8_0 (active) neon @0.29.5_0 
(active) openssl @1.0.0c_0 (active) perl5.8 @5.8.9_3 (active) popt @1.16_0 (active) python24 @2.4.6_7 (active) readline @6.1.002_0 (active) rpm @4.4.9_10 (active) sqlite3 @3.7.3_0 (active) zlib @1.2.5_0 (active) Here are the install locations: locate libssl /opt/local/lib/libssl.1.0.0.dylib /opt/local/lib/libssl.a /opt/local/lib/libssl.dylib /opt/local/lib/pkgconfig/libssl.pc /opt/local/var/macports/software/openssl/1.0.0c_0/opt/local/lib/libssl.1.0.0.dylib /opt/local/var/macports/software/openssl/1.0.0c_0/opt/local/lib/libssl.a /opt/local/var/macports/software/openssl/1.0.0c_0/opt/local/lib/libssl.dylib /opt/local/var/macports/software/openssl/1.0.0c_0/opt/local/lib/pkgconfig/libssl.pc /usr/lib/libssl.0.9.7.dylib /usr/lib/libssl.0.9.8.dylib /usr/lib/libssl.0.9.dylib /usr/lib/libssl.dylib /usr/lib/pkgconfig/libssl.pc What should I do next? More Info I tried the solution below: $ DYLD_INSERT_LIBRARIES="/opt/local/lib/libssl.1.0.0.dylib" httrack Welcome to HTTrack Website Copier (Offline Browser) 3.44-1-nossl Copyright (C) Xavier Roche and other contributors To see the option list, enter a blank line or try httrack --help It is still not able to load the ssl lib: 3.44-1-nossl

    Read the article

  • local cache for NAS or network folder

    - by HugoRune
    I am planning to build a network attached storage (NAS) server. Is there a way to cache frequently accessed files from the remote storage automatically on the local PC? (I am not looking for a way to sync whole folders like rsync, but rather something that automatically and transparently caches the last-accessed 50 GB of files.) Ideally I am searching for something that caches writes as well as reads, since only one PC will be accessing the server (and one day of lost changes, if the local cache is damaged, would be acceptable). I looked into Windows Offline Files, but as far as I could tell it requires manual interaction to disconnect the server or go into offline mode in order to use the cache. The server would probably be running Linux or FreeNAS; the PC runs Windows XP, but could be upgraded to 7 if required.

    Read the article

  • Which Online Notepad to use?

    - by user1413
    What is the best online notepad? I once used Google Notebook, but Google no longer supports it and I don't want to put my stuff on an unsupported site. I now use Luminotes, but I find editing the text infuriating. What is better?

    Read the article

  • Connecting to RDS database from EC2 instance using bind9 CNAME alias

    - by mptre
    I'm trying to get internal DNS up and running on an EC2 instance. The main goal is to be able to define CNAME aliases for other AWS services. For example: instead of using the RDS endpoint, which might change over time, an alias mysql.company.int can be used instead. I'm using bind9 and here are my config files:

        /etc/bind/named.conf.local
        zone "company.int" {
            type master;
            file "/etc/bind/db.company.int";
        };

        /etc/bind/db.company.int
        $TTL 3600
        @      IN  SOA    company.int. company.localhost. (
                          20120617  ; Serial
                          604800    ; Refresh
                          86400     ; Retry
                          2419200   ; Expire
                          604800 )  ; Negative Cache TTL
        ;
        @      IN  NS     company.int.
        @      IN  A      127.0.0.1
        @      IN  AAAA   ::1
        ; CNAME
        mysql  IN  CNAME  xxxx.eu-west-1.rds.amazonaws.com.

    The dig command assures me my alias is working as expected:

        $ dig mysql.company.int
        ...
        ;; ANSWER SECTION:
        mysql.company.int.  3600  IN  CNAME  xxxx.eu-west-1.rds.amazonaws.com.
        xxxx.eu-west-1.rds.amazonaws.com.  60  IN  CNAME  ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com.
        ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com.  589575  IN  A  zzz.zz.zz.zzz
        ...

    As far as I can understand, a reverse zone isn't needed for a simple CNAME alias. However, when I try to connect to MySQL using my newly created alias, the operation gives me a timeout:

        $ mysql -uuser -ppassword -hmysql.company.int
        ERROR 2003 (HY000): Can't connect to MySQL server on 'mysql.company.int' (110)

    Any ideas? Thanks in advance!
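
    Since dig shows the alias resolving, the timeout is more likely connectivity than DNS. A quick way to separate the two (a sketch, assuming the default MySQL port 3306; nc option syntax varies slightly between netcat variants):

        # If DNS were the problem, the alias and the raw endpoint would behave
        # differently; a timeout on both points at the RDS security group, which
        # must allow inbound 3306 from this EC2 instance.
        nc -zv -w 5 mysql.company.int 3306
        nc -zv -w 5 xxxx.eu-west-1.rds.amazonaws.com 3306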

    Read the article

  • AWS forwarding email to a gmail account

    - by user2433617
    So I registered a domain name. I then set up a static webpage using AWS (S3 and Route 53). Now what I want to do is forward any email sent to that custom domain name to a personal email address I have set up. I can't seem to figure out how to do this. I have these record sets already: A, NS, SOA, CNAME. I believe I have to set up an MX record, but I'm not sure how. Say I have the custom domain [email protected] and I want to redirect all email to [email protected]. The personal email account is a Gmail (Google Accounts) email address. Thanks.

    Read the article

  • Unable to login to Amazon EC2 compute server

    - by MasterGaurav
    I am unable to login to the EC2 server. Here's the log of the connection-attempt: $ ssh -v -i ec2-key-incoleg-x002.pem [email protected] OpenSSH_5.6p1, OpenSSL 0.9.8p 16 Nov 2010 debug1: Reading configuration data /home/gvaish/.ssh/config debug1: Applying options for * debug1: Connecting to ec2-50-16-0-207.compute-1.amazonaws.com [50.16.0.207] port 22. debug1: Connection established. debug1: identity file ec2-key-incoleg-x002.pem type -1 debug1: identity file ec2-key-incoleg-x002.pem-cert type -1 debug1: identity file /home/gvaish/.ssh/id_rsa type -1 debug1: identity file /home/gvaish/.ssh/id_rsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3 debug1: match: OpenSSH_5.3 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'ec2-50-16-0-207.compute-1.amazonaws.com' is known and matches the RSA host key. debug1: Found key in /home/gvaish/.ssh/known_hosts:8 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: ec2-key-incoleg-x002.pem debug1: read PEM private key done: type RSA debug1: Authentications that can continue: publickey debug1: Trying private key: /home/gvaish/.ssh/id_rsa debug1: No more authentication methods to try. Permission denied (publickey). What can be the possible reason? How do I fix the issue?
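
    Authentication is failing on the key itself ("Permission denied (publickey)"), so two quick things to rule out first (a short checklist sketch; the key file and hostname are taken from the log above, while the usernames are the common AMI defaults and may not match this particular AMI):

        # ssh refuses to use a private key that is group/world readable.
        chmod 400 ec2-key-incoleg-x002.pem
        # Try the default login user for the AMI: Amazon Linux uses ec2-user,
        # Ubuntu AMIs use ubuntu, and some older AMIs still expect root.
        ssh -i ec2-key-incoleg-x002.pem ec2-user@ec2-50-16-0-207.compute-1.amazonaws.com
        ssh -i ec2-key-incoleg-x002.pem ubuntu@ec2-50-16-0-207.compute-1.amazonaws.com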

    Read the article

  • Given a debian source package - How do I install the build-deps?

    - by Rory McCann
    I have a Debian (well, technically Ubuntu) source package, i.e. the .dsc, the .tar.gz, etc., and I want to build it. dpkg-buildpackage fails, since I don't have all the build dependencies. Normally I'd use apt-get build-dep, but this package isn't in apt. Is there a 'clean', 'proper' way to install all the build dependencies, given a source package? I know I could just open the debian/control file, but I'm curious if there's a 'proper' way.
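
    One approach that is commonly suggested for exactly this case is to let the devscripts/equivs tooling turn debian/control into an installable dependency package (a sketch, assuming the source has been unpacked so debian/control exists; the package file name is illustrative):

        sudo apt-get install devscripts equivs    # provides mk-build-deps
        dpkg-source -x mypackage_1.0-1.dsc        # unpack the source package
        cd mypackage-1.0
        # List any build dependencies that are still missing.
        dpkg-checkbuilddeps
        # Build a throwaway metapackage from debian/control, install it, and remove
        # the generated .deb afterwards (-i install, -r remove the file).
        mk-build-deps -i -r debian/control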

    Read the article

  • Why am I having DNS problems going through Network Solutions DNS to Amazon AWS?

    - by BestPractices
    Network Solutions appears to have an issue with AWS hostnames. This AWS ELB has been out there for months and is resolvable from every major DNS provider but network solutions. Any idea as to why? WORKING (4.2.2.2 DNS) $ nslookup testloadbalancer-1761726467.us-west-2.elb.amazonaws.com Server: 4.2.2.5 Address: 4.2.2.5#53 Non-authoritative answer: Name: testloadbalancer-1761726467.us-west-2.elb.amazonaws.com Address: 50.112.251.201 NOT WORKING (Network Solutions DNS) $ nslookup testloadbalancer-1761726467.us-west-2.elb.amazonaws.com ns1.worldnic.com Server: ns1.worldnic.com Address: 205.178.190.1#53 ** server can't find testloadbalancer-1761726467.us-west-2.elb.amazonaws.com.localhost: SERVFAIL

    Read the article

  • SQL Server 2008 R2 upgrade fails on upgrade rule check

    - by Tim
    I'm trying to upgrade an evaluation instance of SQL Server 2008 to a fully licensed instance of SQL Server 2008 R2. I made it most of the way through the installer, but I'm getting stopped at the Upgrade Rules page - the SQL Server Analysis Services Upgrade Service Functional Check is failing. The specific error I get: Rule "SQL Server Analysis Services Upgrade Service Functional Check" failed. The current instance of the SQL Server Analysis Services service cannot be upgraded because the Analysis Services service is disabled or not online. Please start the service and then run the upgrade rules check again. Simple enough - just need to start the service. Here's where it gets troublesome. When I open Services and go to start the SQL Server Analysis Services (MSSQLSERVER) service, it provides me the following message: The SQL Server Analysis Services (MSSQLSERVER) service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs. Trying from the command line as Administrator yields: PS C:\Windows\System32 net start MSSQLServerOLAPService The SQL Server Analysis Services (MSSQLSERVER) service is starting... The SQL Server Analysis Services (MSSQLSERVER) service could not be started. The service did not report an error. More help is available by typing NET HELPMSG 3534. I've tried changing the logon setting of this service to Administrator, a user with admin privileges, and both the Local System and Network Service accounts - nothing works. In addition, when I look at the service through the SQL Server Configuration Manager (also run as Administrator), attempting to change the logon setting for the service results in the message: The server threw an exception. [0x80010105] I have no need for analysis services themselves - all I need is for this one service to be running long enough to do the R2 upgrade, then it can shut down again. Any thoughts on how to get the Analysis Services service running? Update: Checking the event log, I found an error logged to the Application log from the MSSQLServerOLAPService. It has event ID 0, task category (289), and says: The service cannot be started: XML parsing failed at line 1, column 4: Unrecognized input signature.

    Read the article

  • Windows File System Analysis

    - by bouvierr
    I am looking for a FREE tool to perform analyses on the NTFS file system of my Windows 7 PC. I want to easily see the amount of data distributed throughout the entire file system. The following applications seem very good, but they are not free and probably overkill for my requirements: FolderSizes 5, MailMeter Windows File System Reporting Tool. I am aware that some applications (like Folder Size 2.5) can add a column in Windows Explorer to show the size of each folder, but I am looking for something more like a reporting tool. Thank you for your suggestions.

    Read the article

  • Debian Simple Gui for adding/removing users for protective directories

    - by ErocM
    We have a hosted site with a directory that is password protected. I need to have a user who knows very little about computers maintain the users that have access to this directory. The list is going to get big, according to our customer database. My question is twofold: Is there a simple GUI program that I can have this user utilize to maintain the users, without having to teach them how to use SSH and UNIX? Am I going about this the right way? Is there a better way to do this? Thanks for your help!
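
    Whatever front end ends up wrapping it, the underlying operations are small (a sketch, assuming the directory is protected with Apache basic auth and an .htpasswd file; the file path and usernames are illustrative), which is what a simple web form or GUI would need to call:

        # Add or update a user (-b takes the password on the command line).
        htpasswd -b /var/www/protected/.htpasswd alice 's3cret'
        # Remove a user (-D requires a reasonably recent Apache htpasswd).
        htpasswd -D /var/www/protected/.htpasswd alice
        # List current users (usernames are the first colon-separated field).
        cut -d: -f1 /var/www/protected/.htpasswd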

    Read the article
