Search Results

Search found 1324 results on 53 pages for 'ec2'.


  • Google maps sometimes does not return a geocoded value for string

    - by XGreen
    Hi guys, I have the following code: it basically looks into an HTML list and geocodes and places a marker for each item. It does this correctly eight times out of ten, but sometimes I get the error I set up to show in the console. I can't think of anything. Any thoughts are much appreciated. $(function () { var map = null; var geocoder = null; function initialize() { if (GBrowserIsCompatible()) { // Specifies that the element with the ID map is the container for the map map = new GMap2(document.getElementById("map")); // Sets an initial map position (which mainly gets ignored after reading the addresses list) map.setCenter(new GLatLng(37.4419, -122.1419), 13); // Instantiates the Google Geocoder class geocoder = new GClientGeocoder(); map.addControl(new GSmallMapControl()); // Sets map zooming controls on the map map.enableScrollWheelZoom(); // Allows the mouse wheel to control the map while on it } } function showAddress(address, linkHTML) { if (geocoder) { geocoder.getLatLng(address, function (point) { if (!point) { console.log('Geocoder did not return a location for ' + address); } else { map.setCenter(point, 8); var marker = new GMarker(point); map.addOverlay(marker); // Assigns the click event to each marker to open its balloon GEvent.addListener(marker, "click", function () { marker.openInfoWindowHtml(linkHTML); }); } } ); } } // end of show address function initialize(); // This iterates through the text of each address and tells the map // to show its location on the map. An internal error is thrown if // the location is not found. $.each($('.addresses li a'), function () { var addressAnchor = $(this); showAddress(addressAnchor.text(), $(this).parent().html()); }); }); which looks into this HTML: <ul class="addresses"> <li><a href="#">Central London</a></li> <li><a href="#">London WC1</a></li> <li><a href="#">London Shoreditch</a></li> <li><a href="#">London EC1</a></li> <li><a href="#">London EC2</a></li> <li><a href="#">London EC3</a></li> <li><a href="#">London EC4</a></li> </ul>
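    A hedged sketch of one way to narrow this down (not part of the original question): intermittent failures like this are the classic symptom of the v2 geocoder's rate limit, so it helps to inspect the status code and space the requests out. The helper below assumes the map and geocoder variables from the code above are in scope; the function name and the 250 ms delay are made up for illustration.

      // Hedged sketch: throttle geocoding and check the status code (620 = too many queries).
      function showAddressesSlowly(anchors, index) {
        if (index >= anchors.length) { return; }
        var addressAnchor = $(anchors[index]);
        var address = addressAnchor.text();
        var linkHTML = addressAnchor.parent().html();
        geocoder.getLocations(address, function (response) {
          if (!response || response.Status.code !== 200) {
            console.log('Geocoder status ' + (response ? response.Status.code : 'none') + ' for ' + address);
          } else {
            var place = response.Placemark[0];
            // KML coordinate order is [lng, lat]; GLatLng wants (lat, lng)
            var point = new GLatLng(place.Point.coordinates[1], place.Point.coordinates[0]);
            var marker = new GMarker(point);
            map.addOverlay(marker);
            GEvent.addListener(marker, "click", function () { marker.openInfoWindowHtml(linkHTML); });
          }
          // pause briefly before the next request to stay under the rate limit
          setTimeout(function () { showAddressesSlowly(anchors, index + 1); }, 250);
        });
      }
      // usage: showAddressesSlowly($('.addresses li a'), 0);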

    Read the article

  • Twitter traffic might not be what it seems

    - by Piet
    Are you using bit.ly stats to measure interest in the links you post on twitter? I’ve been hearing for a while about people claiming to get the majority of their traffic originating from twitter these days. Now, I’ve been playing with the twitter ruby gem recently, doing various experiments which I won’t go into in detail here because they could be regarded as spamming… if I’d conduct them on a large scale, that is. It’s scary to see people actually engaging with @replies crafted with some regular expressions and Eliza-like trickery on status updates found using the twitter api. I’m wondering how Twitter is going to contain the coming spam-flood. When posting links I used bit.ly as url shortener, since this one seems to be the de-facto standard on twitter. A nice thing about bit.ly is that it shows some basic stats about the redirects it performs for your shortened links. To my surprise, most links posted almost immediately resulted in several visitors. Now, seeing that I was posting the links together with some information concerning what the link is about, I concluded that the people who were actually clicking the links should be very targeted visitors. This felt a bit like free AdWords, and I suddenly started to understand why everyone was raving about getting traffic from twitter. How wrong I was! (and I think several 1000 online marketers with me) On the destination site I used a traffic logging solution that works by including a little javascript snippet in your pages. It seemed that somehow all visitors disappeared after the bit.ly redirect and before getting to the site, because I was hardly seeing any visitors there. So I started investigating what was happening: by looking at the logfiles of the destination site, and by making my own ’shortened’ urls by doing redirects using a very short domain name I own. This way, I could check the apache access_log before the redirects. Most user agents turned out to be bots without a doubt. Here’s an excerpt of user-agents awk’ed from apache’s access_log for a time period of about one hour, right after posting some links: AideRSS 2.0 (postrank.com) Java/1.6.0_13 Java/1.6.0_14 libwww-perl/5.816 MLBot (www.metadatalabs.com/mlbot) Mozilla/4.0 (compatible;MSIE 5.01; Windows -NT 5.0 - real-url.org) Mozilla/5.0 (compatible; Twitturls; +http://twitturls.com) Mozilla/5.0 (compatible; Viralheat Bot/1.0; +http://www.viralheat.com/) Mozilla/5.0 (Danger hiptop 4.6; U; rv:1.7.12) Gecko/20050920 Mozilla/5.0 (X11; U; Linux i686; en-us; rv:1.9.0.2) Gecko/2008092313 Ubuntu/9.04 (jaunty) Firefox/3.5 OpenCalaisSemanticProxy PycURL/7.18.2 PycURL/7.19.3 Python-urllib/1.17 Twingly Recon twitmatic Twitturly / v0.6 Wget/1.10.2 (Red Hat modified) Wget/1.11.1 (Red Hat modified) Of the few user-agents that seem ‘real’ at first, half are originating from an ip-address used by Amazon EC2. And I doubt people are setting up proxies on there. Oh yeah, Googlebot (the real deal, from a legit Google-owned address) is sucking up posted links like fresh oysters. I guess google is trying to make sure in advance to never be beaten by twitter in the ‘realtime search’ department. Actually, I think it’d be almost stupid NOT to post any new pages/posts/websites on Twitter; it must be one of the fastest ways to get a Googlebot visit. 
Same experiment with a real, established twitter account Now, because I was posting the URLs either as ’status’ messages or directed @people, on a test-account with hardly any (human) followers, I checked again using the twitter accounts from a commercial site I’m involved with. These accounts all have between 500 and 1000 targeted (I think) followers. I checked the destination access_logs and also added ‘my’ redirect after the bit.ly redirect: same results, although seemingly a bit higher real visitor/bot ratio. Btw: one of these accounts was ‘punished’ with a 1-week lock recently because the same (one!) status update was sent that had been sent right before from another account. They got an email explaining the lock because the account didn’t act according to their TOS. I can’t find anything in their TOS about it, can you? I don’t think Twitter is on the right track punishing a legit account, knowing the trickery I had been doing with its API went totally unpunished. I might be wrong though, I often am. On the other hand: this commercial site reported targeted traffic and actual signups from visitors coming from Twitter. The ones that are really real visitors are also very targeted. I’m just not sure if the amount of work involved could hold up against an AdWords campaign. Reposting the same link over and over again helps One thing I noticed: It helps to keep on reposting the same links at regular intervals. I guess most people only look at their first page when checking out recent posts of the ones they’re following, or don’t look too far back when performing a search. Now, this probably isn’t according to the twitter TOS. Actually, it might be spamming but no-one is obligated to follow anyone else of course. This way, I was getting more real visitors and fewer bots. To my surprise (when my programmer’s hat is on) there were still repeated visits from the same bots coming from the same ip-addresses. Did they expect to find something else when visiting for a 2nd or 3rd time? (actually, this gave me an idea: you can’t change a link once it’s posted, but you can change where it redirects to) Most bots were smart enough not to follow the same link again though. Are you successful in getting real visitors from Twitter? Are you only relying on bit.ly to provide traffic stats?
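    For anyone wanting to reproduce the check described above, a typical way to pull the user-agent column out of an Apache combined-format access log is a one-liner like the following (the log path is an assumption; adjust it to your setup):

      # count user agents in a combined-format access log, most frequent first
      awk -F'"' '{print $6}' /var/log/apache2/access_log | sort | uniq -c | sort -rn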

    Read the article

  • MySQL – Scalability on Amazon RDS: Scale out to multiple RDS instances

    - by Pinal Dave
    Today, I’d like to discuss getting better MySQL scalability on Amazon RDS. The question of the day: “What can you do when a MySQL database needs to scale write-intensive workloads beyond the capabilities of the largest available machine on Amazon RDS?” Let’s take a look. In a typical EC2/RDS set-up, users connect to app servers from their mobile devices and tablets, computers, browsers, etc.  Then app servers connect to an RDS instance (web/cloud services) and in some cases they might leverage some read-only replicas.   Figure 1. A typical RDS instance is a single-instance database, with read replicas.  This is not very good at handling high write-based throughput. As your application becomes more popular you can expect an increasing number of users, more transactions, and more accumulated data.  User interactions can become more challenging as the application adds more sophisticated capabilities. The result of all this positive activity: your MySQL database will inevitably begin to experience scalability pressures. What can you do? Broadly speaking, there are four options available to improve MySQL scalability on RDS. 1. Larger RDS Instances – If you’re not already using the maximum available RDS instance, you can always scale up – to larger hardware.  Bigger CPUs, more compute power, more memory et cetera. But the largest available RDS instance is still limited.  And they get expensive. “High-Memory Quadruple Extra Large DB Instance”: 68 GB of memory 26 ECUs (8 virtual cores with 3.25 ECUs each) 64-bit platform High I/O Capacity Provisioned IOPS Optimized: 1000Mbps 2. Provisioned IOPs – You can get provisioned IOPs and higher throughput on the I/O level. However, there is a hard limit with a maximum instance size and maximum number of provisioned IOPs you can buy from Amazon and you simply cannot scale beyond these hardware specifications. 3. Leverage Read Replicas – If your application permits, you can leverage read replicas to offload some reads from the master databases. But there are a limited number of replicas you can utilize and Amazon generally requires some modifications to your existing application. And read-replicas don’t help with write-intensive applications. 4. Multiple Database Instances – Amazon offers a fourth option: “You can implement partitioning,thereby spreading your data across multiple database Instances” (Link) However, Amazon does not offer any guidance or facilities to help you with this. “Multiple database instances” is not an RDS feature.  And Amazon doesn’t explain how to implement this idea. In fact, when asked, this is the response on an Amazon forum: Q: Is there any documents that describe the partition DB across multiple RDS? I need to use DB with more 1TB but exist a limitation during the create process, but I read in the any FAQ that you need to partition database, but I don’t find any documents that describe it. A: “DB partitioning/sharding is not an official feature of Amazon RDS or MySQL, but a technique to scale out database by using multiple database instances. The appropriate way to split data depends on the characteristics of the application or data set. Therefore, there is no concrete and specific guidance.” So now what? The answer is to scale out with ScaleBase. Amazon RDS with ScaleBase: What you get – MySQL Scalability! ScaleBase is specifically designed to scale out a single MySQL RDS instance into multiple MySQL instances. Critically, this is accomplished with no changes to your application code.  
Your application continues to “see” one database. ScaleBase does all the work of managing and enforcing an optimized data distribution policy to create multiple MySQL instances. With ScaleBase, data distribution, transactions, concurrency control, and two-phase commit are all 100% transparent and 100% ACID-compliant, so applications, services and tooling continue to interact with your distributed RDS as if it were a single MySQL instance. The result: now you can cost-effectively leverage multiple MySQL RDS instances to scale out write-intensive workloads to an unlimited number of users, transactions, and data. Amazon RDS with ScaleBase: What you keep – Everything! And how does this change your Amazon environment? 1. Keep your application, unchanged – There is no change to your application development life-cycle at all. You still use your existing development tools, frameworks and libraries. Application quality assurance and testing cycles stay the same. And, critically, you stay with an ACID-compliant MySQL environment. 2. Keep your RDS value-added services – The value-added services that you rely on are all still available. Amazon will continue to handle database maintenance and updates for you. You can still leverage High Availability via Multi-AZ. And, if it benefits your application throughput, you can still use read replicas. 3. Keep your RDS administration – Finally, the RDS monitoring and provisioning tools you rely on still work as they did before. With your one large MySQL instance, now split into multiple instances, you can actually use less expensive, smaller available RDS hardware and continue to see better database performance. Conclusion Amazon RDS is a tremendous service, but it doesn’t offer solutions to scale beyond a single MySQL instance. Larger RDS instances get more expensive. And when you max-out on the available hardware, you’re stuck. Amazon recommends scaling out your single instance into multiple instances for transaction-intensive apps, but offers no services or guidance to help you. This is where ScaleBase comes in to save the day. It gives you a simple and effective way to create multiple MySQL RDS instances, while removing all the complexities typically caused by “DIY” sharding, and with no changes to your applications. With ScaleBase you continue to leverage the AWS/RDS ecosystem: commodity hardware and value added services like read replicas, Multi-AZ, maintenance/updates and administration with monitoring tools and provisioning. SCALEBASE ON AMAZON If you’re curious to try ScaleBase on Amazon, it can be found here – Download NOW. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Network authentication + roaming home directory - which technology should I look into using?

    - by Brian
    I'm looking into software which provides a user with a single identity across multiple computers. That is, a user should have the same permissions on each computer, and the user should have access to all of his or her files (roaming home directory) on each computer. There seem to be many solutions for this general idea, but I'm trying to determine the best one for me. Here are some details along with requirements: The machines in the network are Amazon EC2 instances running Ubuntu. We access the machines with SSH. Some machines on this LAN may have different uses, but I am only discussing machines for a certain use (running a multi-tenancy platform). The system will not necessarily have a constant number of machines. We may have to permanently or temporarily alter the number of machines running. This is the reason why I'm looking into centralized authentication/storage. The implementation of this effect should be a secure one. We're unsure if users will have direct shell access, but their software will potentially be running (under restricted Linux user names, of course) on our systems, which is as good as direct shell access. Let's assume that their software could potentially be malicious for the sake of security. I have heard of several technologies/combinations to achieve my goal, but I'm unsure of the ramifications of each. An older ServerFault post recommended NFS & NIS, though the combination has security problems according to this old article by Symantec. The article suggests moving to NIS+, but, as it is old, this Wikipedia article has cited statements suggesting a trend away from NIS+ at Sun. The recommended replacement is another thing I have heard of... LDAP. It looks like LDAP can be used to save user information in a centralized location on a network. NFS would still need to be used to cover the 'roaming home folder' requirement, but I see references to them being used together. Since the Symantec article pointed out security problems in both NIS and NFS, is there software to replace NFS, or should I heed that article's suggestions for locking it down? I'm tending toward LDAP because another fundamental piece of our architecture, RabbitMQ, has an authentication/authorization plugin for LDAP. RabbitMQ will be accessible in a restricted manner to users on the system, so I would like to tie the security systems together if possible. Kerberos is another secure authentication protocol that I have heard of. I learned a bit about it some years ago in a cryptography class but don't remember much about it. I have seen suggestions online that it can be combined with LDAP in several ways. Is this necessary? What are the security risks of LDAP without Kerberos? I also remember Kerberos being used in another piece of software developed by Carnegie Mellon University... Andrew File System, or AFS. OpenAFS is available for use, though its setup seems a bit complicated. At my university, AFS provides both requirements... I can log in to any machine, and my "AFS folder" is always available (at least when I acquire an AFS token). Along with suggestions for which path I should look into, does anybody have any guides which were particularly helpful? As the bold text pointed out, LDAP looks to be the best choice, but I'm particularly interested in the implementation details (Kerberos? NFS?) with respect to security.
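    Whichever directory ends up being chosen, a quick sanity check from a freshly launched EC2 instance is often worth doing before wiring PAM/NSS and automount together. A hedged sketch follows; the host names, base DN and export path are placeholders, not part of the original question:

      # can the instance reach the directory and resolve a user? (simple/anonymous bind)
      ldapsearch -x -H ldap://ldap.internal.example.com -b "dc=example,dc=com" "(uid=someuser)"
      # does the shared home-directory export mount?
      sudo mount -t nfs filer.internal.example.com:/export/home /mnt/home-test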

    Read the article

  • Week in Geek: IPv6 Capable Smartphones Compromise User Privacy Edition

    - by Asian Angel
    This week we learned how to “clone a disk, resize static windows, and create system function shortcuts”, use 45 different services, sites, and apps to help read favorite sites, add MP3 support to Audacity (for saving in MP3 format), install a Wii game loader for easy backups and fast load times, create a Blue Screen of Death in any color, and more. Photo by legofenris. Weekly News Links Photo by The H Security. IPv6: Smartphones compromise users’ privacy Since version 4 of the iOS operating system, Apple’s iPhones, iPads and iPods have been capable of handling IPv6, and most Android devices have been capable since version 2.1. However, the operating systems transfer an ID that discloses information about their users. Dumb phones can be attacked too Much of the discussion of security threats to mobile phones revolves around smartphones, but researchers have found that less advanced “feature phones,” still used by the majority of people around the world, also are vulnerable to attack. SCADA exploit – the dragon awakes The recent publication of an exploit for KingView, a software package for visualising industrial process control systems, appears to be having an effect. Threatpost reports that both the Chinese vendor Wellintech and Chinese CERT (CN-CERT) have now reacted. Sophos: Spam to get more malicious Spam is becoming more malicious in nature as trickery tactics change in line with current user interests, according to a new report released Tuesday by Sophos. Global spam traffic rebounds as Rustock wakes Spam is on the rise after the Rustock botnet awoke from its Christmas slumber, according to Symantec. Cracking WPA keys in the cloud At the forthcoming Black Hat conference, blogger Thomas Roth plans to demonstrate how weak WPA PSKs can be cracked quickly and easily using Amazon’s Elastic Compute Cloud (EC2) service. Microsoft Security Advisory: Vulnerability in Internet Explorer could allow remote code execution Provides a link to more details about the vulnerability and shows a work-around/fix for the problem. Adobe plans to make it easier to delete Flash cookies in web browsers The new API, NPAPI:ClearSiteData, will allow Flash cookies – also known as Local Shared Objects (LSO) – to be deleted directly in the browser’s settings. Firefox beta getting new database standard The ninth beta version of Firefox is set to get support for a standard called IndexedDB that provides a database interface useful for offline data storage and other tasks needing information on a browser’s computer. MetroPCS accused of blocking certain Net content MetroPCS is violating the FCC’s recently approved Net neutrality rules by blocking certain Internet content, say several public interest groups. Server and Tools chief Muglia to leave Microsoft in summer 2011 Microsoft veteran and Server & Tools Business (STB) President Bob Muglia is leaving Microsoft, according to an email that CEO Steve Ballmer sent to employees on January 10. Report: DOJ nearing decision on Google-ITA The U.S. Department of Justice is gearing up for a possible formal antitrust investigation into whether or not Google should be allowed to purchase travel software company ITA Software, according to a report. South Korea says Google Street View broke law Police in South Korea reportedly say Google broke the country’s law when its Street View service captured personal data from unsecure Wi-Fi networks. 
The backlash over Google’s HTML5 video bet Choosing strategies based on what you believe to be long-term benefits is generally a good idea when running a business, but if you manage to alienate the world in the process, the long term may become irrelevant. Google answers critics on HTML5 Web video move Google responded to critics of its decision to drop support for a popular HTML5 video codec by declaring that a royalty-supported standard for Web video will hold the Web hostage. Random TinyHacker Links A Special GiveAway: a Great Book & Great Security Software The team from 7 Tutorials has a special giveaway running during the month of January. Signed copies of their latest book, full 1-year licenses of BitDefender Internet Security 2011 and free 3-month trials for everyone willing to participate. One Click Rooting For Android Phones Here’s a nice tool that helps you root your Android phone effortlessly. New Angry Birds Free version 1.0 Available in the App Store. Google Code University Learn programming at Google Code University. Capture and Share Your Favorite Part Of a YouTube Video SnipSnip.it lets you share only the part of the video that you like. Super User Questions More great questions and answers from this past week’s popular topics at Super User. What are the Windows A: and B: drives used for? Does OS X support linux-like features? What is the easiest way to make a backup of an entire hard disk? Will shifting from Wireless to Wired network result in better performance? Is it legal to install Windows 7 Home Premium Retail inside VMware virtual machine? How-To Geek Weekly Article Recap Enjoy reading through our hottest articles from this past week. The 50 Best Ways to Disable Built-in Windows Features You Don’t Want The Best of CES (Consumer Electronics Show) in 2011 How to Upgrade Windows 7 Easily (And Understand Whether You Should) The Worst of CES (Consumer Electronics Show) in 2011 The How-To Geek Guide to Audio Editing: Basic Noise Removal One Year Ago on How-To Geek More great articles from one year ago filled with helpful geeky goodness for you to enjoy. Share Text & Images the Easy Way with JustPaste.it Start Portable Firefox in Safe Mode Firefox 3.6 Release Candidate Available, Here’s How to Fix Your Incompatible Extensions Protect Your Computer from “Little Hands” with KidSafe Lock Prying Eyes Out of Your Minimized Windows Custom Crocheted Cylon-Cthulhu Hybrid What happens when you let your Cylon Centurion figure and your crocheted Cthulhu spend too many lonely nights together? A Cylon-Cthulhu hybrid, of course! You can get your own from the Cthulhu Chick store over on Etsy. Note: This is not an ad…Ruth is a friend of ours, and this Cylon-Cthulhu hybrid makes the perfect guard for the new MVP trophy in our office. The Geek Note Whether it is a geeky indoor project or just getting outside, we hope that you and your families have a terrific fun-filled weekend! Remember to keep sending those great tips in to us at [email protected]. Photo by qwrrty. 

    Read the article

  • How can I tell Firefox to ignore unprintable characters?

    - by BrianH
    Edit: Summary Apparently the intended character to display in this case is an "en-dash". This page has a table halfway down that shows that for the &ndash;, some software will convert the correct hex code of 2013 to 0096. (look at the first row in the table). This answer on Stackoverflow explains that somehow this is a mixup between Windows-1252 and UTF-8. This blog article reinforces this: Character 150 (0x96) is the Unicode character "START OF GUARDED AREA" in the non-displayed C1 control character range, but in the Windows-1252 encoding it's mapped to the displayable character 0x2013 "en-dash" (a short dash). Others have struggled with this when producing content, as this answer on Stackoverflow shows how to replace 0x0096 with 0x2013. Google must realize this, because as stated in my original question below, Google's cached version of the Amazon page has &ndash; so it seems they are automatically correcting these mistakes on pages they cache. I have tried setting my encoding to Windows-1252 but that does not help. So now I guess my question is, how can I tell Firefox to ignore unprintable characters like these? Original content below: (Firefox 3.6.13 on Windows XP) Every once in a while I notice an odd character on certain web pages when browsing the web. It is an outline of a box with a 4-digit number inside. An example of a page that has these characters is: http://aws.amazon.com/ec2/#highlights After each section heading (Elastic, Completely Controlled, ...) I see a box with the number "0096" inside. I looked at the cached version on Google, and Google has &ndash; in its place, so I'm guessing I should be seeing a dash there instead of the box with the numbers in it. I have tried changing the character encoding in Firefox but haven't been able to find one that shows these characters correctly. Is there a way to allow Firefox to view these characters? Thanks in advance! Edit - adding a screen shot of the "special" characters: Edit #2 - tried in Ubuntu - new screenshots I logged into my Ubuntu desktop and browsed to the Amazon page in Chrome and Firefox. Chrome completely ignores the character, even if I inspect or view page source. Firefox in Ubuntu displays the character exactly like Firefox on my Windows XP box. I copied the character and played around with it at the command line - here is a screenshot of the results: It looks like I can paste the character into this post as well: `` It is definitely not isolated to Windows XP. I tried setting the character encoding for my terminal to Windows 1252 (from Dennis' comment below), but then it just displays this character as a question mark. I pulled the webpage down with wget and with curl, and both outputs show this character as: <96> It makes me wonder if this character renders correctly for anyone? It appears WebKit just ignores it, my IE6 ignores it, Firefox displays the box with the numbers in it. I would have to imagine the design team at Amazon can see it correctly? It's not a huge deal to get these characters displaying correctly, but it would be nice to know if there is a solution to this.
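    The Windows-1252 vs. Unicode mixup described above is easy to reproduce on the command line; iconv (or any CP1252 decoder) turns byte 0x96 into the en-dash the page author intended. A small illustration, not part of the original question:

      # byte 0x96 is an unprintable C1 control in Unicode/ISO-8859-1,
      # but decodes to an en dash (U+2013) when treated as Windows-1252
      printf '\x96' | iconv -f WINDOWS-1252 -t UTF-8
      # output: –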

    Read the article

  • http request via iptables --to-destination ip redirect results in no response

    - by Wouter Vegter
    I have two Ubuntu servers, each with its own IP address. Let's call them server1 and server2, with IPs 1.1.1.1 and 2.2.2.2 respectively. I have nginx running on server2. The sole purpose I want server1 to have is to redirect all incoming http (so port 80) requests to server2 without clients noticing that their request is being redirected. I tried the following command on server1: iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2 But when I enter 1.1.1.1 in my browser I get no response: the page keeps trying to load without giving any message or error message (I get a time-out after 2-3 mins). But when I do remove the above iptables rule I immediately do get a "page not found error" when I enter 1.1.1.1 in my browser; so something is working but not as it should: when I enter 1.1.1.1 I want the HTML page that is hosted on 2.2.2.2 to load, because when I enter 2.2.2.2 in my browser I do see the webpage loaded. Could anyone please help me with this? I have been searching on this for quite some time (on Server Fault & Google), so that's why I ask. Many thanks for reading my question! Update: Thank you all for your information. Unfortunately I still get no response. I have the following iptables configuration: root@ip-10-48-238-216:/home/ubuntu# sudo iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination root@ip-10-48-238-216:/home/ubuntu# sudo iptables -t nat -L Chain PREROUTING (policy ACCEPT) target prot opt source destination DNAT tcp -- anywhere anywhere tcp dpt:www to:2.2.2.2 Chain OUTPUT (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination When I run tcpdump and make a request to 1.1.1.1 via Chrome, I get the following: root@ip-10-48-238-216:/home/ubuntu# sudo tcpdump -i eth0 port 80 -vv tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes 13:56:18.346625 IP (tos 0x0, ttl 52, id 12055, offset 0, flags [DF], proto TCP (6), length 60) 212-123-161-112.ip.telfort.nl.16386 > ip-10-48-238-216.eu-west-1.compute.internal.www: Flags [S], cksum 0xb398 (correct), seq 2639758575, win 5840, options [mss 1460,sackOK,TS val 1223672 ecr 0,nop,wscale 6], length 0 13:56:18.346662 IP (tos 0x0, ttl 51, id 12055, offset 0, flags [DF], proto TCP (6), length 60) 212-123-161-112.ip.telfort.nl.16386 > ww1dc1.shopreme.com.www: Flags [S], cksum 0x9ee0 (correct), seq 2639758575, win 5840, options [mss 1460,sackOK,TS val 1223672 ecr 0,nop,wscale 6], length 0 13:56:18.598747 IP (tos 0x0, ttl 52, id 10138, offset 0, flags [DF], proto TCP (6), length 60) 212-123-161-112.ip.telfort.nl.16387 > ip-10-48-238-216.eu-west-1.compute.internal.www: Flags [S], cksum 0xac40 (correct), seq 2645658541, win 5840, options [mss 1460,sackOK,TS val 1223735 ecr 0,nop,wscale 6], length 0 13:56:18.598777 IP (tos 0x0, ttl 51, id 10138, offset 0, flags [DF], proto TCP (6), length 60) 212-123-161-112.ip.telfort.nl.16387 > ww1dc1.shopreme.com.www: Flags [S], cksum 0x9788 (correct), seq 2645658541, win 5840, options [mss 1460,sackOK,TS val 1223735 ecr 0,nop,wscale 6], length 0 ^C 4 packets captured 4 packets received by filter 0 packets dropped by kernel The mentioned addresses relate to the following: 212-123-161-112.ip.telfort.nl.16386 : my personal computer; ww1dc1.shopreme.com.www : DNS of server2 (2.2.2.2); ip-10-48-238-216.eu-west-1.compute.internal.www : Amazon Web Services EC2 
internal address of server1 (1.1.1.1). However, the tcpdump log on server2 (2.2.2.2) stays empty and I get no response back in my browser. I am able to ping from server1 to server2. And net.ipv4.ip_forward is set to 1, and so is /proc/sys/net/ipv4/ip_forward. Could there be anything else that is missing?
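    For reference, a DNAT rule that points at another machine usually has to be paired with a source-NAT rule on the middle box (and with forwarding allowed); otherwise server2 answers the client directly and the reply is dropped. A hedged sketch of the commonly used companion rules, to be run on server1 and checked against your own security groups and filter policies:

      sysctl -w net.ipv4.ip_forward=1
      iptables -t nat -A PREROUTING  -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2
      # rewrite the source so server2 replies back through server1 instead of straight to the client
      iptables -t nat -A POSTROUTING -p tcp -d 2.2.2.2 --dport 80 -j MASQUERADE
      # make sure forwarded packets are not filtered
      iptables -A FORWARD -p tcp -d 2.2.2.2 --dport 80 -j ACCEPT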

    Read the article

  • virtual host setup: can't access wordpress site without www

    - by two7s_clash
    I would like to access my site both with and without using the www. Currently it only works with. Leaving out the www just goes to a blank page. Also, wp-admin just loads a blank page too. I have set an A record for mysite.com and www.mysite.com, both pointing to my static Bitnami IP. I also have a subdomain mapped to another directory that is working just fine (conference.mysite.com and www.conference.mysite.com). I'm using a Bitnami stack on an AWS EC2 micro instance. Here is my httpd.conf: ServerRoot "/opt/bitnami/apache2" Listen 80 LoadModule authn_file_module modules/mod_authn_file.so blah blah blah.... LoadModule php5_module modules/libphp5.so <IfModule !mpm_netware_module> <IfModule !mpm_winnt_module> User daemon Group daemon </IfModule> </IfModule> ServerAdmin [email protected] ServerName localhost:80 DocumentRoot "/opt/bitnami/apps/wordpress1/htdocs/" <Directory /> Options FollowSymLinks AllowOverride None Order deny,allow Allow from all </Directory> <Directory "/opt/bitnami/apps/wordpress1/htdocs/"> Options Indexes MultiViews +FollowSymLinks LanguagePriority en AllowOverride All Order allow,deny Allow from all </Directory> <IfModule dir_module> DirectoryIndex index.html index.php </IfModule> <FilesMatch "^\\.ht"> Order allow,deny Deny from all Satisfy All </FilesMatch> ErrorLog "logs/error_log" LogLevel warn <IfModule log_config_module> LogFormat "%h %l %u %t \\"%r\\" %>s %b \\"%{Referer}i\\" \\"%{User-Agent}i\\"" combined LogFormat "%h %l %u %t \\"%r\\" %>s %b" common <IfModule logio_module> LogFormat "%h %l %u %t \\"%r\\" %>s %b \\"%{Referer}i\\" \\"%{User-Agent}i\\" %I %O" combinedio </IfModule> CustomLog "logs/access_log" common </IfModule> <IfModule alias_module> ScriptAlias /cgi-bin/ "/opt/bitnami/apache2/cgi-bin/" </IfModule> <Directory "/opt/bitnami/apache2/cgi-bin"> AllowOverride None Options None Order allow,deny Allow from all </Directory> DefaultType text/plain <IfModule mime_module> TypesConfig conf/mime.types AddType application/x-compress .Z AddType application/x-gzip .gz .tgz </IfModule> Include conf/extra/httpd-mpm.conf <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> AddType application/x-httpd-php .php .phtml LoadModule wsgi_module modules/mod_wsgi.so WSGIPythonHome /opt/bitnami/python ServerSignature Off ServerTokens Prod AddType application/x-httpd-php .php PHPIniDir "/opt/bitnami/php/etc" Include "/opt/bitnami/apps/phpmyadmin/conf/phpmyadmin.conf" ExtendedStatus On <Location /server-status> SetHandler server-status Order Deny,Allow Deny from all Allow from localhost </Location> Include "/opt/bitnami/apache2/conf/bitnami/httpd.conf" Include "/opt/bitnami/apps/virtualhost.conf" Here is my virtual hosts file: NameVirtualHost *:80 <VirtualHost *:80> ServerAdmin xx DocumentRoot "/opt/bitnami/apps/wordpress1/htdocs" ServerName mbird.com ServerAlias www.mbird.com ErrorLog "logs/wordpress-error_log" CustomLog "logs/wordpress-access_log" common </VirtualHost> <Directory "/opt/bitnami/apps/wordpress1/htdocs"> Options Indexes MultiViews +FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> ### WordPress conference.mbird.com configuration ### <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "/opt/bitnami/apps/wordpress/htdocs" ServerName conference.mbird.com ServerAlias www.conference.mbird.com ErrorLog "logs/confwordpress-error_log" CustomLog "logs/confwordpress-access_log" common </VirtualHost> <Directory "/opt/bitnami/apps/wordpress/htdocs"> Options Indexes MultiViews 
+FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> ###
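    Since both names already hit the first VirtualHost (ServerName mbird.com, ServerAlias www.mbird.com), a blank page on the bare domain usually points at the WordPress siteurl/home settings rather than at Apache. One common workaround, sketched below and not taken from the original config, is to 301-redirect the bare domain to the host WordPress is configured for; it assumes mod_rewrite is loaded (the Bitnami stack normally loads it), and it follows the mbird.com names from the pasted config rather than the mysite.com names used in the prose:

      <VirtualHost *:80>
          ServerName mbird.com
          ServerAlias www.mbird.com
          DocumentRoot "/opt/bitnami/apps/wordpress1/htdocs"
          RewriteEngine On
          RewriteCond %{HTTP_HOST} ^mbird\.com$ [NC]
          RewriteRule ^(.*)$ http://www.mbird.com$1 [R=301,L]
      </VirtualHost>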

    Read the article

  • Christmas in the Clouds

    - by andrewbrust
    I have been spending the last 2 weeks immersing myself in a number of Windows Azure and SQL Azure technologies.  And in setting up a new business (I’ll speak more about that in the future), I have also become a customer of Microsoft’s BPOS (Business Productivity Online Services).  In short, it has been a fortnight of Microsoft cloud computing. On the Azure side, I’ve looked, of course, at Web Roles and Worker Roles.  But I’ve also looked at Azure Storage’s REST API (including coding to it directly), I’ve looked at Azure Drive and the new VM Role; I’ve looked quite a bit at SQL Azure (including the project “Houston” Silverlight UI) and I’ve looked at SQL Azure labs’ OData service too. I’ve also looked at DataMarket and its integration with both PowerPivot and native Excel.  Then there’s AppFabric Caching, SQL Azure Reporting (what I could learn of it) and the Visual Studio tooling for Azure, including the storage of certificate-based credentials.  And to round it out with some user stuff, on the BPOS side, I’ve been working with Exchange Online, SharePoint Online and LiveMeeting. I have to say I like a lot of what I’ve been seeing.  Azure’s not perfect, and BPOS certainly isn’t either.  But there’s good stuff in all these products, and there’s a lot of value. Azure Goes Deep Most people know that Web and Worker roles put the platform in charge of spinning virtual machines up and down, and keeping them up to date. But you can go way beyond that now.  The still-in-beta VM Role gives you the power to craft the machine (much as does Amazon’s EC2), though it takes away the platform’s self-managing attributes.  It still spins instances up and down, making drive storage non-durable, but Azure Drive gives you the ability to store VHD files as blobs and mount them as virtual hard drives that are readable and writeable.  Whether with Azure Storage or SQL Azure, Azure does data.  And OData is everywhere.  Azure Table Storage supports an OData Interface.  So does SQL Azure and so does DataMarket (the former project “Dallas”).  That means that Azure data repositories aren’t just straightforward to provision and configure…they’re also easy to program against, from just about any programming environment, in a RESTful manner.  And for more .NET-centric implementations, Azure AppFabric caching takes the technology formerly known as “Velocity” and throws it up into the cloud, speeding data access even more. Snapping in Place Once you get the hang of it, this stuff just starts to work in a way that becomes natural to understand.  I wasn’t expecting that, and I was really happy to discover it. In retrospect, I am not surprised, because I think the various Azure teams are the center of gravity for Redmond’s innovation right now.  The products belie this and so do my observations of the product teams’ motivation and high morale.  It is really good to see this; Microsoft needs to lead somewhere, and they need to be seen as the underdog while doing so.  With Azure, both requirements are in place.   BPOS: Bad Acronym, Easy Setup BPOS is about products you already know; Exchange, SharePoint, Live Meeting and Office Communications Server.  As such, it’s hard not to be underwhelmed by BPOS.  Until you realize how easy it makes it to get all that stuff set up.  
I would say that going from sign-up to productive use took me about 45 minutes…and that included the time necessary to wrestle with my DNS provider, set up Outlook and my smartphone to talk to the Exchange account, create my SharePoint site collection, and configure the Outlook Conferencing add-in to talk to the provisioned Live Meeting account. Never before did I think setting up my own Exchange mail could come anywhere close to the simplicity of setting up an SMTP/POP account, and yet BPOS actually made it faster.   What I want from my Azure Christmas Next Year Not everything about Microsoft’s cloud is good.  I close this post with a list of things I’d like to see addressed: BPOS offerings are still based on the 2007 Wave of Microsoft server technologies.  We need to get to 2010, and fast.  Arguably, the 2010 products should have been released to the off-premises channel before the on-premises ones.  Office 365 can’t come fast enough. Azure’s Internet tooling and domain naming is scattered and confusing.  Deployed ASP.NET applications go to cloudapp.net; SQL Azure and Azure storage work off windows.net.  The Azure portal and Project Houston are at azure.com.  Then there’s appfabriclabs.com and sqlazurelabs.com.  There is a new Silverlight portal that replaces most, but not all of the HTML ones.  And Project Houston is Silverlight-based too, though separate from the Silverlight portal tooling. Microsoft is the king of tooling.  They should not make me keep an entire OneNote notebook full of portal links, account names, access keys, assemblies and namespaces and do so much CTRL-C/CTRL-V work.  I’d like to see more project templates, have them automatically reference the appropriate assemblies, generate the right using/Imports statements and prime my config files with the right markup.  Then I want a UI that lets me log in with my Live ID and pick the appropriate project, database, namespace and key string to get set up fast. Beta programs, if they’re open, should onboard me quickly.  I know the process is difficult and everyone’s going as fast as they can.  But I don’t know why it’s so difficult or why it takes so long.  Getting developers up to speed on new features quickly helps popularize the platform.  Make this a priority. Make Azure accessible from the simplicity platforms, i.e. ASP.NET Web Pages (Razor) and LightSwitch.  Support .NET 4 now.  Make WebMatrix, IIS Express and SQL Compact work with the Azure development fabric. Have HTML helpers make Azure programming easier.  Have LightSwitch work with SQL Azure and not require SQL Express.  LightSwitch has some promising Azure integration now.  But we need more.  WebMatrix has none and that’s just silly, now that the Extra Small Instance is being introduced. The Windows Azure Platform Training Kit is great.  But I want Microsoft to make it even better and I want them to evangelize it much more aggressively.  There’s a lot of good material on Azure development out there, but it’s scattered in the same way that the platform is.   The Training Kit ties a lot of disparate stuff together nicely.  Make it known. Should Old Acquaintance Be Forgot All in all, diving deep into Azure was a good way to end the year.  Diving deeper into Azure should be a great way to spend next year, not just for me, but for Microsoft too.

    Read the article

  • FTP Upload works from local command line / remote GUI client but not from PHP script

    - by MrOodles
    I originally posted this question at StackOverflow, but I'm beginning to think it's more of a server question. I have installed ProFTPd on an EC2 instance running Ubuntu 10.10. I have managed my proftpd.conf file as well as my server permissions to be able to connect and upload/move files using FTP both remotely using Filezilla, and on the server itself when connecting to 127.0.0.1. The problem I'm running into is when I try to upload/install a file using Joomla's interface. I give Joomla the same login information that I give to Filezilla, and the connection is made in the same fashion. The ftp.log file actually shows that Joomla is able to login to the server: localhost UNKNOWN nobody [17/Jan/2011:14:09:17 +0000] "USER ftpuser" 331 - localhost UNKNOWN ftpuser [17/Jan/2011:14:09:17 +0000] "PASS (hidden)" 230 - localhost UNKNOWN ftpuser [17/Jan/2011:14:09:17 +0000] "PASV" 227 - localhost UNKNOWN ftpuser [17/Jan/2011:14:09:17 +0000] "TYPE I" 200 - localhost UNKNOWN ftpuser [17/Jan/2011:14:09:17 +0000] "STOR /directory/store/location/file.zip" 550 - But it fails when attempting the STOR command. I have traced the problem in the Joomla code to the PHP FTP module. The code (with my trace statements added): if (@ftp_put($this->_conn, $remote, $local, $mode) === false) { echo "\n FTP PUT failed."; echo "\n Remote: $remote ; Local: $local ; Mode: $mode - Either ASCII: ".FTP_ASCII." or Binary: ".FTP_BINARY; echo "\n The user: ".exec("whoami"); JError::raiseWarning('35', 'JFTP::store: Bad response' ); return false; } Trace ouputs: FTP PUT failed. Remote: /directory/store/location/file.zip ; Local: /tmp/phpwuccp4 ; Mode: 2 - Either ASCII: 1 or Binary: 2 The user: www-data And in case you were curious, here is an example of the FTP log when using Filezilla: my_client_ip UNKNOWN nobody [17/Jan/2011:16:45:55 +0000] "USER ftpuser" 331 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "PASS (hidden)" 230 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "OPTS UTF8 ON" - - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "PWD" 257 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "TYPE I" 200 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "PASV" 227 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "MLSD" 226 3405 my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:06 +0000] "CWD location" 250 3405 my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:06 +0000] "PWD" 257 3405 my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:06 +0000] "PASV" 227 3405 my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:07 +0000] "MLSD" 226 3757 my_client_ip UNKNOWN nobody [17/Jan/2011:16:46:37 +0000] "USER ftpuser" 331 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "PASS (hidden)" 230 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "OPTS UTF8 ON" - - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "CWD /location" 250 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "PWD" 257 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "TYPE I" 200 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "PASV" 227 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:39 +0000] "STOR file.zip" 226 125317 my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:39 +0000] "PASV" 227 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:39 +0000] "MLSD" 226 497
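    A 550 on STOR with an absolute path ("/directory/store/location/file.zip"), while an interactive client that first CWDs into the directory succeeds, usually means the path Joomla sends does not resolve to a writable directory from the FTP user's point of view. One common adjustment, sketched here with placeholder paths rather than taken from the actual server, is to chroot the FTP user to the web root in proftpd.conf so the absolute paths line up:

      # proftpd.conf sketch; /var/www/joomla is a placeholder for the real Joomla web root
      DefaultRoot /var/www/joomla
      <Directory /var/www/joomla>
        <Limit STOR STOU APPE DELE MKD RMD>
          AllowAll
        </Limit>
      </Directory>

    It may also be enough to leave the server alone and adjust the FTP root path configured in Joomla's Global Configuration so that it matches what the server expects.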

    Read the article

  • CPU Utilization LAMP stack

    - by Max
    We've got an ec2 m2.4xlarge running Magento (centos 5.6, httpd 2.2, php 5.2.17 with eaccelerator 0.9.5.3, mysql 5.1.52). Right now we're getting a large traffic spike, and our top looks like this: top - 09:41:29 up 31 days, 1:12, 1 user, load average: 120.01, 129.03, 113.23 Tasks: 1190 total, 18 running, 1172 sleeping, 0 stopped, 0 zombie Cpu(s): 97.3%us, 1.8%sy, 0.0%ni, 0.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.4%st Mem: 71687720k total, 36898928k used, 34788792k free, 49692k buffers Swap: 880737784k total, 0k used, 880737784k free, 1586524k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 2433 mysql 15 0 23.6g 4.5g 7112 S 564.7 6.6 33607:34 mysqld 24046 apache 16 0 411m 65m 28m S 26.4 0.1 0:09.05 httpd 24360 apache 15 0 410m 60m 25m S 26.4 0.1 0:03.65 httpd 24993 apache 16 0 410m 57m 21m S 26.1 0.1 0:01.41 httpd 24838 apache 16 0 428m 74m 20m S 24.8 0.1 0:02.37 httpd 24359 apache 16 0 411m 62m 26m R 22.3 0.1 0:08.12 httpd 23850 apache 15 0 411m 64m 27m S 16.8 0.1 0:14.54 httpd 25229 apache 16 0 404m 46m 17m R 10.2 0.1 0:00.71 httpd 14594 apache 15 0 404m 63m 34m S 8.4 0.1 1:10.26 httpd 24955 apache 16 0 404m 50m 21m R 8.4 0.1 0:01.66 httpd 24313 apache 16 0 399m 46m 22m R 8.1 0.1 0:02.30 httpd 25119 apache 16 0 411m 59m 23m S 6.8 0.1 0:01.45 httpd Questions: Would giving mysqld more memory help it cache queries and react faster? If so, how? Other than splitting MySQL and PHP onto separate servers (which we're about to do), is there anything else we could/should be doing? Thanks! UPDATE: Here's our my.cnf along with the output of mysqltuner. It looks like a cache problem. Thanks again! # cat /etc/my.cnf [client] port = **** socket = /var/lib/mysql/mysql.sock [mysqld] datadir=/mnt/persistent/mysql port=**** socket=/var/lib/mysql/mysql.sock key_buffer = 512M max_allowed_packet = 64M table_cache = 1024 sort_buffer_size = 8M read_buffer_size = 4M read_rnd_buffer_size = 2M myisam_sort_buffer_size = 64M thread_cache_size = 128M tmp_table_size = 128M join_buffer_size = 1M query_cache_limit = 2M query_cache_size= 64M query_cache_type = 1 max_connections = 1000 thread_stack = 128K thread_concurrency = 48 log-bin=mysql-bin server-id = 1 wait_timeout = 300 innodb_data_home_dir = /mnt/persistent/mysql/ innodb_data_file_path = ibdata1:10M:autoextend innodb_buffer_pool_size = 20G innodb_additional_mem_pool_size = 20M innodb_log_file_size = 64M innodb_log_buffer_size = 8M innodb_flush_log_at_trx_commit = 1 innodb_lock_wait_timeout = 50 innodb_thread_concurrency = 48 ft_min_word_len=3 [myisamchk] ft_min_word_len=3 key_buffer = 128M sort_buffer_size = 128M read_buffer = 2M write_buffer = 2M # ./mysqltuner.pl >> MySQLTuner 1.2.0 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.52-log [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB +Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 2G (Tables: 26) [--] Data in InnoDB tables: 749M (Tables: 250) [!!] 
Total fragmented tables: 262 -------- Security Recommendations ------------------------------------------- -------- Performance Metrics ------------------------------------------------- [--] Up for: 31d 2h 30m 38s (680M q [253.371 qps], 2M conn, TX: 4825B, RX: 236B) [--] Reads / Writes: 89% / 11% [--] Total buffers: 20.6G global + 15.1M per thread (1000 max threads) [OK] Maximum possible memory usage: 35.4G (51% of installed RAM) [OK] Slow queries: 0% (35K/680M) [OK] Highest usage of available connections: 53% (537/1000) [OK] Key buffer size / total MyISAM indexes: 512.0M/457.2M [OK] Key buffer hit rate: 100.0% (9B cached / 264K reads) [OK] Query cache efficiency: 42.3% (260M cached / 615M selects) [!!] Query cache prunes per day: 4384652 [OK] Sorts requiring temporary tables: 0% (1K temp sorts / 38M sorts) [!!] Joins performed without indexes: 100404 [OK] Temporary tables created on disk: 17% (7M on disk / 45M total) [OK] Thread cache hit rate: 99% (537 created / 2M connections) [!!] Table cache hit rate: 0% (1K open / 946K opened) [OK] Open file limit used: 9% (453/5K) [OK] Table locks acquired immediately: 99% (758M immediate / 758M locks) [OK] InnoDB data size / buffer pool: 749.3M/20.0G -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Enable the slow query log to troubleshoot bad queries Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Variables to adjust: query_cache_size (> 64M) join_buffer_size (> 1.0M, or always use indexes with joins) table_cache (> 1024)
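    As a hedged illustration (not from the original thread), mysqltuner's recommendations above translate into a handful of runtime settings changes; the values are guesses, not tuned figures, and should be checked against available memory before being made permanent in /etc/my.cnf:

      -- query cache is pruned over 4M times a day, so give it more room (or lower query_cache_limit)
      SET GLOBAL query_cache_size = 128 * 1024 * 1024;
      -- ~100k joins ran without indexes; raising the buffer is a stopgap, adding indexes is the real fix
      SET GLOBAL join_buffer_size = 4 * 1024 * 1024;
      -- table cache hit rate is ~0%, so let more table handles stay open
      SET GLOBAL table_open_cache = 2048;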

    Read the article

  • Configuring nginx server to handle requests from multiple domains

    - by KillABug
    Use case: I am working on a web application which allows users to create HTML templates and publish them on Amazon S3. Now, to publish the websites, I use nginx as a proxy server. What the proxy server should do is this: when a user enters the website URL, I want to check whether the request comes from my application, i.e. app.mysite.com (this won't change), and route it to Apache for regular access; if it's coming from some other domain, like a regular URL www.mysite.com (this needs to be handled dynamically, can be random), it goes to the S3 bucket that hosts the template. My current configuration is: user nginx; worker_processes 1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; charset utf-8; keepalive_timeout 65; server_tokens off; sendfile on; tcp_nopush on; tcp_nodelay off; # Default Server Block to catch undefined host names server { listen 80; server_name app.mysite.com; access_log off; error_log off; location / { proxy_pass http://127.0.0.1:8080; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_redirect off; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; client_max_body_size 10m; client_body_buffer_size 128k; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; } } } # Load all the sites include /etc/nginx/conf.d/*.conf; Update, as I was not clear enough: My question is how can I handle both domains in the config file. My nginx is a proxy server on port 80 on an EC2 instance. This also hosts my application, which runs on Apache on a different port. So any request for my application will come from the domain app.mysite.com, and I also want to proxy the hosted templates on S3, which are inside a bucket, say sites.mysite.com/coolsite.com/index.html. So if someone hits coolsite.com I want to proxy it to the folder sites.mysite.com/coolsite.com/index.html and not to app.mysite.com. Hope I am clear. The other server block: # Server for S3 server { # Listen on port 80 for all IPs associated with your machine listen 80; # Catch all other server names server_name _; // I want it to handle other domains than app.mysite.com # This code gets the host without www. in front and places it inside # the $host_without_www variable # If someone requests www.coolsite.com, then $host_without_www will have the value coolsite.com set $host_without_www $host; if ($host ~* www\.(.*)) { set $host_without_www $1; } location / { # This code rewrites the original request, and adds the host without www in front # E.g. if someone requests # /directory/file.ext?param=value # from the coolsite.com site the request is rewritten to # /coolsite.com/directory/file.ext?param=value set $foo 'http://sites.mysite.com'; # echo "$foo"; rewrite ^(.*)$ $foo/$host_without_www$1 break; # The rewritten request is passed to S3 proxy_pass http://sites.mysite.com; include /etc/nginx/proxy_params; } } Also, I understand I will have to make the DNS changes in the CNAME of the domain. I guess I will have to add app.mysite.com under the CNAME of the template domain name? Please correct me if wrong. Thank you for your time

    Read the article

  • x11vnc working in Ubuntu 10.10

    - by pablorc
    I'm trying to start x11vnc on Ubuntu 10.10 (my server is an Amazon EC2 instance), but I get the following error: $ sudo x11vnc -forever -usepw -httpdir /usr/share/vnc-java/ -httpport 5900 -auth /usr/sbin/gdm 25/11/2010 13:29:51 passing arg to libvncserver: -httpport 25/11/2010 13:29:51 passing arg to libvncserver: 5900 25/11/2010 13:29:51 -usepw: found /home/ubuntu/.vnc/passwd 25/11/2010 13:29:51 x11vnc version: 0.9.10 lastmod: 2010-04-28 pid: 3504 25/11/2010 13:29:51 XOpenDisplay(":0.0") failed. 25/11/2010 13:29:51 Trying again with XAUTHLOCALHOSTNAME=localhost ... 25/11/2010 13:29:51 *************************************** 25/11/2010 13:29:51 *** XOpenDisplay failed (:0.0) *** x11vnc was unable to open the X DISPLAY: ":0.0", it cannot continue. *** There may be "Xlib:" error messages above with details about the failure. Some tips and guidelines: ** An X server (the one you wish to view) must be running before x11vnc is started: x11vnc does not start the X server. (however, see the -create option if that is what you really want). ** You must use -display <disp>, -OR- set and export your $DISPLAY environment variable to refer to the display of the desired X server. - Usually the display is simply ":0" (in fact x11vnc uses this if you forget to specify it), but in some multi-user situations it could be ":1", ":2", or even ":137". Ask your administrator or a guru if you are having difficulty determining what your X DISPLAY is. ** Next, you need to have sufficient permissions (Xauthority) to connect to the X DISPLAY. Here are some Tips: - Often, you just need to run x11vnc as the user logged into the X session. So make sure to be that user when you type x11vnc. - Being root is usually not enough because the incorrect MIT-MAGIC-COOKIE file may be accessed. The cookie file contains the secret key that allows x11vnc to connect to the desired X DISPLAY. - You can explicitly indicate which MIT-MAGIC-COOKIE file should be used by the -auth option, e.g.: x11vnc -auth /home/someuser/.Xauthority -display :0 x11vnc -auth /tmp/.gdmzndVlR -display :0 you must have read permission for the auth file. See also '-auth guess' and '-findauth' discussed below. ** If NO ONE is logged into an X session yet, but there is a greeter login program like "gdm", "kdm", "xdm", or "dtlogin" running, you will need to find and use the raw display manager MIT-MAGIC-COOKIE file. Some examples for various display managers: gdm: -auth /var/gdm/:0.Xauth -auth /var/lib/gdm/:0.Xauth kdm: -auth /var/lib/kdm/A:0-crWk72 -auth /var/run/xauth/A:0-crWk72 xdm: -auth /var/lib/xdm/authdir/authfiles/A:0-XQvaJk dtlogin: -auth /var/dt/A:0-UgaaXa Sometimes the command "ps wwwwaux | grep auth" can reveal the file location. Starting with x11vnc 0.9.9 you can have it try to guess by using: -auth guess (see also the x11vnc -findauth option.) Only root will have read permission for the file, and so x11vnc must be run as root (or copy it). The random characters in the filenames will of course change and the directory the cookie file resides in is system dependent. See also: http://www.karlrunge.com/x11vnc/faq.html I've already tried with some -auth options but the error persists. I have gdm running. Thank you in advance
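    The help text above already hints at the underlying issue: on a headless EC2 instance there is usually no X server listening on :0 at all, and -auth expects an Xauthority cookie file rather than the gdm binary. A hedged sketch of the -create route mentioned in the output (package name assumed for Ubuntu 10.10):

      # install a virtual framebuffer X server that x11vnc can spawn on demand
      sudo apt-get install xvfb
      # let x11vnc create its own display instead of attaching to a non-existent :0
      x11vnc -create -forever -usepw -httpdir /usr/share/vnc-java/ -httpport 5900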

    Read the article

  • FOUR questions to ask if you are implementing DATABASE-AS-A-SERVICE

    - by Sudip Datta
    During my ongoing tenure at Oracle, I have met all types of DBAs. Happy DBAs, unhappy DBAs, proud DBAs, risk-loving DBAs, cautious DBAs. These days, as Database-as-a-Service (DBaaS) becomes more mainstream, I find some complacent DBAs who are basking in their achievement of having implemented DBaaS. Some others, however, are not that happy. They grudgingly complain that they did not have much of a say in the implementation, they simply had to follow what their cloud architects (mostly infrastructure admins) offered them. In most cases it would be a database wrapped inside a VM that would be labeled as “Database as a Service”. In other cases, it would be existing brute-force automation simply exposed in a portal. As much as I think that there is more to DBaaS than those approaches and often get tempted to propose Enterprise Manager 12c, I try to be objective. Neither do I want to dampen the spirit of the happy ones, nor do I want to stoke the pain of the unhappy ones. As I mentioned in my previous post, I don’t deny vanilla automation could be useful. I like virtualization too for what it has helped us accomplish in terms of resource management, but we need to scrutinize its merit on a case-by-case basis and apply it meaningfully. For DBAs who either claim to have implemented DBaaS or are planning to do so, I simply want to provide four key questions to ponder about: 1. Does it make life easier for your end users? Database-as-a-Service can have several types of end users. Junior DBAs, QA Engineers, Developers- each having their own skillset. The objective of DBaaS is to make their life simple, so that they can focus on their core responsibilities without having to worry about additional stuff. For example, if you are a Developer using Oracle Application Express (APEX), you want to deal with schema, objects and PL/SQL code and not with datafiles or listener configuration. If you are a QA Engineer needing database copies for functional testing, you do not want to deal with underlying operating system patching and compliance issues. The question to ask, therefore, is, whether DBaaS makes life easier for those users. It is often convenient to give them VM shells to deal with a la Amazon EC2 IaaS, but is that what they really want? Is it a productive use of a developer's time if he needs to apply RPM errata to his Linux operating system. Asking him to keep the underlying operating system current is like making a guest responsible for a restaurant's decor. 2. Does it make life easier for your administrators? Cloud, in general, is supposed to free administrators from attending to mundane tasks like provisioning services for every single end user request. It is supposed to enable a readily consumable platform and enforce standardization in the process. For example, if a Service Catalog exposes DBaaS of specific database versions and configurations, it, by its very nature, enforces certain discipline and standardization within the IT environment. What if, instead of specific database configurations, cloud allowed each end user to create databases of their liking resulting in hundreds of version and patch levels and thousands of individual databases. Therefore the right question to ask is whether the unwanted consequence of DBaaS is OS and database sprawl. And if so, who is responsible for tracking them, backing them up, administering them? Studies have shown that these administrative overheads increase exponentially with new targets, and it could result in a management nightmare. 
That leads us to our next question. 3. Does it satisfy your Security Officers and Compliance Auditors? Compliance Auditors need to know who did what and when. They also want the cloud platform to be secure, so that end users have little freedom in tampering with it. Dealing with VM sprawl is not the easiest of challenges, let alone dealing with them as they keep getting reconfigured and moved around. This leads to the proverbial needle in the haystack problem, and all it needs is one needle to cause a serious compliance issue in the enterprise. Bottomline is, flexibility and agility should not come at the expense of compliance and it is very important to get the balance right. Can we have security and isolation without creating compliance challenges? Instead of a ‘one size fits all approach’ i.e. OS level isolation, can we think smartly about database isolation or schema based isolation? This is where the appropriate resource modeling needs to be applied. The usual systems management vendors out there with heterogeneous common-denominator approach have compromised on these semantics. If you follow Enterprise Manager’s DBaaS solution, you will see that we have considered different models, not precluding virtualization, for different customer use cases. The judgment to use virtual assemblies versus databases on physical RAC versus Schema-as-a-Service in a single database, should be governed by the need of the applications and not by putting compliance considerations in the backburner. 4. Does it satisfy your CIO? Finally, does it satisfy your higher ups? As the sponsor of cloud initiative, the CIO is expected to lead an IT transformation project, not merely a run-of-the-mill IT operations. Simply virtualizing server resources and delivering them through self-service is a good start, but hardly transformational. CIOs may appreciate the instant benefit from server consolidation, but studies have revealed that the ROI from consolidation would flatten out at 20-25%. The question would be: what next? As we go higher up in the stack, the need to virtualize, segregate and optimize shifts to those layers that are more palpable to the business users. As Sushil Kumar noted in his blog post, " the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment." Business users could not care less about infrastructure consolidation or virtualization - they care about business agility and service level assurance. Last but not the least, lot of CIOs get miffed if we ask them to throw away their existing hardware investments for implementing DBaaS. In Oracle, we always emphasize on freedom of choosing a platform; hence Enterprise Manager’s DBaaS solution is platform neutral. It can work on any Operating System (that the agent is certified on) Oracle’s hardware as well as 3rd party hardware. As a parting note, I urge you to remember these 4 questions. Remember that your satisfaction as an implementer lies in the satisfaction of others.

    Read the article

  • Server overloaded with log messages: tty_release_dev: pts0: read/write wait queue active!

    - by Raph
    In the logs, I have this (extract from the full kernel messages logges at 06:01:14): Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.863038] BUG: unable to handle kernel NULL pointer dereference at 0000000000000015 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861081] Process telnet (pid: 20247, threadinfo ffff8800f8598000, task ffff8800024d4500) And then the server logs flooded by this message: Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861547] tty_release_dev: pts0: read/write wait queue active! In the end, 2 hours later, I had to reboot because it had become inaccessible: the load hat grown to 160%. The last command does not show anyone logged on pts0 at that time. I also don't know where this telnet process could come from.... This is an AWS instance running UBUNTU 10.04 LTS And here are the complete logs: Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.863038] BUG: unable to handle kernel NULL pointer dereference at 0000000000000015 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861007] IP: [<ffffffff81363dde>] n_tty_read+0x2ce/0x970 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861019] PGD ee13d067 PUD f8698067 PMD 0 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861025] Oops: 0000 [#1] SMP Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861028] last sysfs file: /sys/devices/xen/vbd-2208/block/sdk/removable Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861032] CPU 0 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861034] Modules linked in: ipv6 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861040] Pid: 20247, comm: telnet Not tainted 2.6.32-312-ec2 #24-Ubuntu Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861042] RIP: e030:[<ffffffff81363dde>] [<ffffffff81363dde>] n_tty_read+0x2ce/0x970 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861047] RSP: e02b:ffff8800f8599d88 EFLAGS: 00010246 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861049] RAX: 0000000000000015 RBX: ffff8800f8598000 RCX: 0000000001aed069 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861052] RDX: 0000000000000000 RSI: ffff8800f8599e67 RDI: ffff8801dd833d1c Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861054] RBP: ffff8800f8599e98 R08: ffffffff8135eb10 R09: 7fffffffffffffff Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861057] R10: 0000000000000000 R11: 0000000000000246 R12: ffff8801dd833800 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861059] R13: 0000000000000000 R14: ffff8801dd833a68 R15: ffff8801dd833d1c Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861065] FS: 00007f90121f6720(0000) GS:ffff880002c40000(0000) knlGS:0000000000000000 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861068] CS: e033 DS: 0000 ES: 0000 CR0: 000000008005003b Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861070] CR2: 0000000000000015 CR3: 0000000032a59000 CR4: 0000000000002660 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861073] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861076] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861081] Process telnet (pid: 20247, threadinfo ffff8800f8598000, task ffff8800024d4500) Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861083] Stack: Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861085] 0000000000000000 0000000001aed069 ffff8801dd8339c8 ffff8800024d4500 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861089] <0> ffff8801dd8339c0 ffff8801dd833c90 0000000001aed027 ffff8800024d4500 Apr 21 06:01:14 
ip-10-49-109-107 kernel: [233185.861094] <0> ffff8801dd8338d8 0000000000000000 ffff8800024d4500 0000000000000000 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861099] Call Trace: Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861107] [<ffffffff81034bc0>] ? default_wake_function+0x0/0x10 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861113] [<ffffffff8135ebb6>] tty_read+0xa6/0xf0 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861118] [<ffffffff810ee7e5>] vfs_read+0xb5/0x1a0 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861122] [<ffffffff810ee91c>] sys_read+0x4c/0x80 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861127] [<ffffffff81009ba8>] system_call_fastpath+0x16/0x1b Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861131] [<ffffffff81009b40>] ? system_call+0x0/0x52 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861133] Code: 85 d2 0f 84 92 00 00 00 45 8b ac 24 5c 02 00 00 f0 45 0f b3 2e 45 19 ed 49 63 84 24 5c 02 00 00 49 8b 94 24 50 02 00 00 4c 89 ff <0f> be 1c 02 e8 a9 d3 14 00 41 8b 94 24 5c 02 00 00 41 83 ac 24 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861171] RIP [<ffffffff81363dde>] n_tty_read+0x2ce/0x970 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861175] RSP <ffff8800f8599d88> Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861171] RIP [<ffffffff81363dde>] n_tty_read+0x2ce/0x970 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861175] RSP <ffff8800f8599d88> Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861177] CR2: 0000000000000015 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861205] ---[ end trace f10eee2057ff4f6b ]--- Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861547] tty_release_dev: pts0: read/write wait queue active!
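For what it is worth, a few generic commands that could help pin down where the stray telnet on pts/0 came from the next time this happens; the pid 20247 is taken from the oops above, and none of this addresses the kernel bug itself, it is only a diagnostic sketch.

fuser -v /dev/pts/0                 # which processes hold the pseudo-terminal open
ls -l /proc/20247/fd 2>/dev/null    # file descriptors of the telnet process, if it is still alive
last -f /var/log/wtmp | head        # recent logins that might explain the pts/0 session
lastb | head                        # recent failed login attempts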

    Read the article

  • SPF record doesn't work (not sure which DNS server to tweak)

    - by Ion
    Problem: Google (and perhaps others) marks our emails as SPF neutral. Let me give you some background about the setup: initially got a dedicated server (Hetzner) with Plesk installed to host a domain/web application, let's say: bigjaws.com. Plesk automatically creates a DNS zone for it with some records for the various services it provides out of the box, e.g. webmail.bigjaws.com as a CNAME to bigjaws.com to provide Horde/whatever, etc. Let me point out four relevant of these records (where XXX.XXX.XXX.158 is our dedicated IP): bigjaws.com. A XXX.XXX.XXX.158 mail.bigjaws.com. A XXX.XXX.XXX.158 bigjaws.com MX (10) mail.bigjaws.com. bigjaws.com. TXT v=spf1 +a +mx -all The above records are not(?) valid anymore though, because after using this dedicated server for a while, our site got bigger and bigger so we decided to move our operations over to AWS (EC2, RDS, ELB, etc), but we retained the mail functionality as is, i.e. emails from [email protected] are sent by connecting to our dedicated server where Plesk takes care of things. This was decided in order not to setup anything from scratch. Of course for all DNS-related things we now use Route53. In Route53 I have the following records: mail.schoox.com. A XXX.XXX.XXX.158 bigjaws.com. MX (10) mail.bigjaws.com bigjaws.com. SPF "v=spf1 +ip4:XXX.XXX.XXX.158 +mx ~all" From my understanding of SPF, the SPF status should have been passed: I designate that all email being sent by bigjaws.com from XXX.XXX.XXX.158 are valid/not spam (I added +mx there but I'm not sure if needed). When a mail server receives an email, doesn't it lookup the SPF record of the domain and checks against the IP it got the email from? Checking with spfquery: root@box:~# spfquery -ip XXX.XXX.XXX.158 -sender [email protected] -rcpt-to [email protected] StartError Context: Failed to query MAIL-FROM ErrorCode: (2) Could not find a valid SPF record Error: No DNS data for 'bigjaws.com'. EndError noneneutral Please see http://www.openspf.org/Why?id=employee1%40bigjaws.com&ip=XXX.XXX.XXX.158&receiver=spfquery : Reason: default spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com Received-SPF: neutral (spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com) client-ip=XXX.XXX.XXX.158; [email protected]; If I go to the address listed above (openspf.org) it tells me that the message should have been accepted(!): spfquery rejected a message that claimed an envelope sender address of [email protected]. spfquery received a message from static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) that claimed an envelope sender address of [email protected]. The domain bigjaws.com has authorized static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) to send mail on its behalf, so the message should have been accepted. It is impossible for us to say why it was rejected. What should I do? If the problem persists, contact the bigjaws.com postmaster. Also, here are some headers from an email sent by one of our [email protected] addresses to a gmail.com address (by the way, bigjaws.de listed in the "Received: from" field was the initial domain hosted on the dedicated server before adding the .com one -- both are still listed as separate subscriptions under Plesk). 
Delivered-To: [email protected] Received: by 10.14.177.70 with SMTP id c46csp289656eem; Wed, 23 Oct 2013 01:11:00 -0700 (PDT) X-Received: by 10.14.102.66 with SMTP id c42mr306186eeg.47.1382515860386; Wed, 23 Oct 2013 01:11:00 -0700 (PDT) Return-Path: <[email protected]> Received: from bigjaws.de (static.158.XXX.XXX.XXX.clients.your-server.de. [XXX.XXX.XXX.158]) by mx.google.com with ESMTPS id l4si19438578eew.161.2013.10.23.01.10.59 for <[email protected]> (version=TLSv1 cipher=RC4-SHA bits=128/128); Wed, 23 Oct 2013 01:10:59 -0700 (PDT) Received-SPF: neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=XXX.XXX.XXX.158; Authentication-Results: mx.google.com; spf=neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected] DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=default; d=bigjaws.com; b=WwRAS0WKjp9lO17iMluYPXOHzqRcOueiQT4rPdvy3WFf0QzoXiy6rLfxU/Ra53jL1vlPbwlLNa5gjoJBi7ZwKfUcvs3s02hJI7b3ozl0fEgJtTPKoCfnwl4bLPbtXNFu; h=Received:Received:Message-ID:Date:From:User-Agent:MIME-Version:To:Subject:Content-Type:Content-Transfer-Encoding; Received: (qmail 22722 invoked from network); 23 Oct 2013 10:10:59 +0200 Received: from hostname.static.ISP.com (HELO ?192.168.1.60?) (YYY.YYY.ISP.IP) by static.158.XXX.XXX.XXX.clients.your-server.de. with ESMTPSA (DHE-RSA-AES256-SHA encrypted, authenticated); 23 Oct 2013 10:10:59 +0200 Message-ID: <[email protected]> Date: Wed, 23 Oct 2013 11:11:00 +0300 From: BigJaws Employee <[email protected]> User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.0.1 MIME-Version: 1.0 To: [email protected] Subject: test SPF Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit test SPF Any ideas why SPF is not working correctly? Also, are there any DNS settings that are not needed anymore and create a problem?
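One thing worth checking, offered as a guess rather than a diagnosis: the record created with Route 53's dedicated "SPF" type is the old type-99 resource record, which most receivers, Gmail included, ignore; they only query TXT. The "best guess record" wording in the Authentication-Results header above is typical of the case where no SPF TXT record is found for the domain. A single TXT record along these lines (the IP is the dedicated server's address from the question) is what the receiving side expects to find when it queries bigjaws.com:

bigjaws.com.    IN    TXT    "v=spf1 ip4:XXX.XXX.XXX.158 mx ~all"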

    Read the article

  • PhpMyAdmin Hangs On MySQL Error

    - by user75228
    I'm currently running PhpMyAdmin 4.0.10 (the latest version supporting PHP 4.2.X) on my Amazon EC2 connecting to a MySQL database on RDS. Everything works perfectly fine except actions that return a mysql error message. Whether I perform "any" kind of action that will return a mysql error, Phpmyadmin will hang with the yellow "Loading" box forever without displaying anything. For example, if I perform the following command in MySQL CLI : select * from 123; It instantly returns the following error : ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '123' at line 1 which is completely normal because table 123 doesn't exist. However, if I execute the exact same command in the "SQL" box in Phpmyadmin, after I click "Go" it'll display "Loading" and stops there forever. Has anyone ever encountered this kind of issue with Phpmyadmin? Is this a bug or I have something wrong with my config.inc.php? Any help would be much appreciated. I also noticed these error messages in my apache error logs : /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open Below are my config.inc.php settings : <?php /* vim: set expandtab sw=4 ts=4 sts=4: */ /** * phpMyAdmin sample configuration, you can use it as base for * manual configuration. For easier setup you can use setup/ * * All directives are explained in documentation in the doc/ folder * or at <http://docs.phpmyadmin.net/>. * * @package PhpMyAdmin */ /* * This is needed for cookie based authentication to encrypt password in * cookie */ $cfg['blowfish_secret'] = 'something_random'; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */ /* * Servers configuration */ $i = 0; /* * First server */ $i++; /* Authentication type */ $cfg['Servers'][$i]['auth_type'] = 'cookie'; /* Server parameters */ $cfg['Servers'][$i]['host'] = '*.rds.amazonaws.com'; $cfg['Servers'][$i]['connect_type'] = 'tcp'; $cfg['Servers'][$i]['compress'] = true; /* Select mysql if your server does not have mysqli */ $cfg['Servers'][$i]['extension'] = 'mysqli'; $cfg['Servers'][$i]['AllowNoPassword'] = false; $cfg['LoginCookieValidity'] = '3600'; /* * phpMyAdmin configuration storage settings. 
*/ /* User used to manipulate with storage */ $cfg['Servers'][$i]['controlhost'] = '*.rds.amazonaws.com'; $cfg['Servers'][$i]['controluser'] = 'pma'; $cfg['Servers'][$i]['controlpass'] = 'password'; /* Storage database and tables */ $cfg['Servers'][$i]['pmadb'] = 'phpmyadmin'; $cfg['Servers'][$i]['bookmarktable'] = 'pma__bookmark'; $cfg['Servers'][$i]['relation'] = 'pma__relation'; $cfg['Servers'][$i]['table_info'] = 'pma__table_info'; $cfg['Servers'][$i]['table_coords'] = 'pma__table_coords'; $cfg['Servers'][$i]['pdf_pages'] = 'pma__pdf_pages'; $cfg['Servers'][$i]['column_info'] = 'pma__column_info'; $cfg['Servers'][$i]['history'] = 'pma__history'; $cfg['Servers'][$i]['table_uiprefs'] = 'pma__table_uiprefs'; $cfg['Servers'][$i]['tracking'] = 'pma__tracking'; $cfg['Servers'][$i]['designer_coords'] = 'pma__designer_coords'; $cfg['Servers'][$i]['userconfig'] = 'pma__userconfig'; $cfg['Servers'][$i]['recent'] = 'pma__recent'; /* Contrib / Swekey authentication */ // $cfg['Servers'][$i]['auth_swekey_config'] = '/etc/swekey-pma.conf'; /* * End of servers configuration */ /* * Directories for saving/loading files from server */ $cfg['UploadDir'] = ''; $cfg['SaveDir'] = ''; /** * Defines whether a user should be displayed a "show all (records)" * button in browse mode or not. * default = false */ //$cfg['ShowAll'] = true; /** * Number of rows displayed when browsing a result set. If the result * set contains more rows, "Previous" and "Next". * default = 30 */ $cfg['MaxRows'] = 50; /** * disallow editing of binary fields * valid values are: * false allow editing * 'blob' allow editing except for BLOB fields * 'noblob' disallow editing except for BLOB fields * 'all' disallow editing * default = blob */ //$cfg['ProtectBinary'] = 'false'; /** * Default language to use, if not browser-defined or user-defined * (you find all languages in the locale folder) * uncomment the desired line: * default = 'en' */ //$cfg['DefaultLang'] = 'en'; //$cfg['DefaultLang'] = 'de'; /** * default display direction (horizontal|vertical|horizontalflipped) */ //$cfg['DefaultDisplay'] = 'vertical'; /** * How many columns should be used for table display of a database? * (a value larger than 1 results in some information being hidden) * default = 1 */ //$cfg['PropertiesNumColumns'] = 2; /** * Set to true if you want DB-based query history.If false, this utilizes * JS-routines to display query history (lost by window close) * * This requires configuration storage enabled, see above. * default = false */ //$cfg['QueryHistoryDB'] = true; /** * When using DB-based query history, how many entries should be kept? * * default = 25 */ //$cfg['QueryHistoryMax'] = 100; /* * You can find more configuration options in the documentation * in the doc/ folder or at <http://docs.phpmyadmin.net/>. */ ?>
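The "undefined symbol: libiconv_open" lines in the Apache error log suggest the PHP iconv extension was built against GNU libiconv but is being loaded without it; whether that is what makes phpMyAdmin hang on error pages is a guess, not a confirmed cause. Two quick checks, with the extension path copied from the log rather than verified:

# list the shared libraries the extension actually links against
ldd /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so
# confirm which symbols it expects to resolve at load time
nm -D /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so | grep -i iconv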

    Read the article

  • TOTD #166: Using NoSQL database in your Java EE 6 Applications on GlassFish - MongoDB for now!

    - by arungupta
    The Java EE 6 platform includes the Java Persistence API to work with RDBMS. The JPA specification defines a comprehensive API that includes, but is not restricted to, how a database table can be mapped to a POJO and vice versa, and provides mechanisms for how a PersistenceContext can be injected into a @Stateless bean and then used for performing different operations on the database table and writing typesafe queries. There are several well known advantages of RDBMS, but the NoSQL movement has gained traction over the past couple of years. NoSQL databases are not intended to be a replacement for the mainstream RDBMS. As Philosophy of NoSQL explains, NoSQL databases were designed for casual use where all the features typically provided by an RDBMS are not required. The name "NoSQL" is more of a category of databases that is known more for what it is not than for what it is. The basic principles of a NoSQL database are: No need to have a pre-defined schema, which makes them schema-less databases. Addition of new properties to existing objects is easy and does not require ALTER TABLE. The unstructured data gives flexibility to change the format of data any time without downtime or reduced service levels. Also there are no joins happening on the server because there is no structure and thus no relation between them. Scalability and performance are more important than the entire set of functionality typically provided by an RDBMS. This set of databases provides eventual consistency and/or transactions restricted to single items, with more focus on CRUD. Not being restricted to SQL to access the information stored in the backing database. Designed to scale out (horizontal) instead of scale up (vertical). This is important knowing that databases, and everything else as well, are moving into the cloud. RDBMS can scale out using sharding, but it requires complex management and is not for the faint of heart. Unlike RDBMS, which require a separate caching tier, most NoSQL databases come with integrated caching. Designed for less management; simpler data models lead to lower administration as well. There are primarily three types of NoSQL databases: Key-Value stores (e.g. Cassandra and Riak) Document databases (MongoDB or CouchDB) Graph databases (Neo4J) You may think NoSQL is a panacea but, as I mentioned above, they are not meant to replace the mainstream databases, and here is why: RDBMS have been around for many years, are very stable, and are functionally rich. This is something CIOs and CTOs can bet their money on without much worry. There is a reason 98% of Fortune 100 companies run Oracle :-) NoSQL is cutting edge and brings excitement to developers, but enterprises are cautious about them. Commercial databases like Oracle are well supported by the backing enterprises in terms of providing support resources on a global scale. There is a full ecosystem built around these commercial databases providing training, performance tuning, architecture guidance, and everything else. NoSQL is fairly new and typically backed by a single company not able to meet the scale of these big enterprises. NoSQL databases are good for CRUD operations, but business intelligence is extremely important for enterprises to stay competitive. RDBMS provide extensive tooling to generate this data, but that was not the original intention of NoSQL databases and they are lacking in that area. Generating any meaningful information other than CRUD requires extensive programming.
Not suited for complex transactions such as banking systems or other highly transactional applications requiring 2-phase commit. SQL cannot be used with NoSQL databases and writing simple queries can be involving. Enough talking, lets take a look at some code. This blog has published multiple blogs on how to access a RDBMS using JPA in a Java EE 6 application. This Tip Of The Day (TOTD) will show you can use MongoDB (a document-oriented database) with a typical 3-tier Java EE 6 application. Lets get started! The complete source code of this project can be downloaded here. Download MongoDB for your platform from here (1.8.2 as of this writing) and start the server as: arun@ArunUbuntu:~/tools/mongodb-linux-x86_64-1.8.2/bin$./mongod./mongod --help for help and startup optionsSun Jun 26 20:41:11 [initandlisten] MongoDB starting : pid=11210port=27017 dbpath=/data/db/ 64-bit Sun Jun 26 20:41:11 [initandlisten] db version v1.8.2, pdfile version4.5Sun Jun 26 20:41:11 [initandlisten] git version:433bbaa14aaba6860da15bd4de8edf600f56501bSun Jun 26 20:41:11 [initandlisten] build sys info: Linuxbs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 2017:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41Sun Jun 26 20:41:11 [initandlisten] waiting for connections on port 27017Sun Jun 26 20:41:11 [websvr] web admin interface listening on port 28017 The default directory for the database is /data/db and needs to be created as: sudo mkdir -p /data/db/sudo chown `id -u` /data/db You can specify a different directory using "--dbpath" option. Refer to Quickstart for your specific platform. Using NetBeans, create a Java EE 6 project and make sure to enable CDI and add JavaServer Faces framework. Download MongoDB Java Driver (2.6.3 of this writing) and add it to the project library by selecting "Properties", "LIbraries", "Add Library...", creating a new library by specifying the location of the JAR file, and adding the library to the created project. Edit the generated "index.xhtml" such that it looks like: <h1>Add a new movie</h1><h:form> Name: <h:inputText value="#{movie.name}" size="20"/><br/> Year: <h:inputText value="#{movie.year}" size="6"/><br/> Language: <h:inputText value="#{movie.language}" size="20"/><br/> <h:commandButton actionListener="#{movieSessionBean.createMovie}" action="show" title="Add" value="submit"/></h:form> This page has a simple HTML form with three text boxes and a submit button. The text boxes take name, year, and language of a movie and the submit button invokes the "createMovie" method of "movieSessionBean" and then render "show.xhtml". Create "show.xhtml" ("New" -> "Other..." -> "Other" -> "XHTML File") such that it looks like: <head> <title><h1>List of movies</h1></title> </head> <body> <h:form> <h:dataTable value="#{movieSessionBean.movies}" var="m" > <h:column><f:facet name="header">Name</f:facet>#{m.name}</h:column> <h:column><f:facet name="header">Year</f:facet>#{m.year}</h:column> <h:column><f:facet name="header">Language</f:facet>#{m.language}</h:column> </h:dataTable> </h:form> This page shows the name, year, and language of all movies stored in the database so far. The list of movies is returned by "movieSessionBean.movies" property. 
Now create the "Movie" class such that it looks like: import com.mongodb.BasicDBObject;import com.mongodb.BasicDBObject;import com.mongodb.DBObject;import javax.enterprise.inject.Model;import javax.validation.constraints.Size;/** * @author arun */@Modelpublic class Movie { @Size(min=1, max=20) private String name; @Size(min=1, max=20) private String language; private int year; // getters and setters for "name", "year", "language" public BasicDBObject toDBObject() { BasicDBObject doc = new BasicDBObject(); doc.put("name", name); doc.put("year", year); doc.put("language", language); return doc; } public static Movie fromDBObject(DBObject doc) { Movie m = new Movie(); m.name = (String)doc.get("name"); m.year = (int)doc.get("year"); m.language = (String)doc.get("language"); return m; } @Override public String toString() { return name + ", " + year + ", " + language; }} Other than the usual boilerplate code, the key methods here are "toDBObject" and "fromDBObject". These methods provide a conversion from "Movie" -> "DBObject" and vice versa. The "DBObject" is a MongoDB class that comes as part of the mongo-2.6.3.jar file and which we added to our project earlier.  The complete javadoc for 2.6.3 can be seen here. Notice, this class also uses Bean Validation constraints and will be honored by the JSF layer. Finally, create "MovieSessionBean" stateless EJB with all the business logic such that it looks like: package org.glassfish.samples;import com.mongodb.BasicDBObject;import com.mongodb.DB;import com.mongodb.DBCollection;import com.mongodb.DBCursor;import com.mongodb.DBObject;import com.mongodb.Mongo;import java.net.UnknownHostException;import java.util.ArrayList;import java.util.List;import javax.annotation.PostConstruct;import javax.ejb.Stateless;import javax.inject.Inject;import javax.inject.Named;/** * @author arun */@Stateless@Namedpublic class MovieSessionBean { @Inject Movie movie; DBCollection movieColl; @PostConstruct private void initDB() throws UnknownHostException { Mongo m = new Mongo(); DB db = m.getDB("movieDB"); movieColl = db.getCollection("movies"); if (movieColl == null) { movieColl = db.createCollection("movies", null); } } public void createMovie() { BasicDBObject doc = movie.toDBObject(); movieColl.insert(doc); } public List<Movie> getMovies() { List<Movie> movies = new ArrayList(); DBCursor cur = movieColl.find(); System.out.println("getMovies: Found " + cur.size() + " movie(s)"); for (DBObject dbo : cur.toArray()) { movies.add(Movie.fromDBObject(dbo)); } return movies; }} The database is initialized in @PostConstruct. Instead of a working with a database table, NoSQL databases work with a schema-less document. The "Movie" class is the document in our case and stored in the collection "movies". The collection allows us to perform query functions on all movies. The "getMovies" method invokes "find" method on the collection which is equivalent to the SQL query "select * from movies" and then returns a List<Movie>. Also notice that there is no "persistence.xml" in the project. Right-click and run the project to see the output as: Enter some values in the text box and click on enter to see the result as: If you reached here then you've successfully used MongoDB in your Java EE 6 application, congratulations! Some food for thought and further play ... SQL to MongoDB mapping shows mapping between traditional SQL -> Mongo query language. Tutorial shows fun things you can do with MongoDB. 
Try the interactive online shell  The cookbook provides common ways of using MongoDB In terms of this project, here are some tasks that can be tried: Encapsulate database management in a JPA persistence provider. Is it even worth it because the capabilities are going to be very different ? MongoDB uses "BSonObject" class for JSON representation, add @XmlRootElement on a POJO and how a compatible JSON representation can be generated. This will make the fromXXX and toXXX methods redundant.
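As a small extension of the getMovies method above, and using the same 2.6.x driver API the article already relies on, a query document restricts find() the way a WHERE clause would in SQL; the method name getMoviesByYear is made up for this sketch.

// equivalent of: select * from movies where year = 2011
public List<Movie> getMoviesByYear(int year) {
    List<Movie> movies = new ArrayList();
    // the BasicDBObject acts as the query document: { "year" : <year> }
    DBCursor cur = movieColl.find(new BasicDBObject("year", year));
    for (DBObject dbo : cur.toArray()) {
        movies.add(Movie.fromDBObject(dbo));
    }
    return movies;
}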

    Read the article

  • CodePlex Daily Summary for Wednesday, February 23, 2011

    CodePlex Daily Summary for Wednesday, February 23, 2011Popular ReleasesSQL Server CLR Function for Address Correction and Geocoding: Release 2.1: Adds support for the User Key argument in Process function calls.ClosedXML - The easy way to OpenXML: ClosedXML 0.45.2: New on this release: 1) Added data validation. See Data Validation 2) Deleting or clearing cells deletes the hyperlinks too. New on v0.45.1 1) Fixed issues 6237, 6240 New on v0.45.2 1) Fixed issues 6257, 6266 New Examples Data ValidationCrystalbyte Equinox (LINQ to IMAP): Equinox Alpha Release v0.3.0.0: Fixed bugsFixed issue introduced in last release; No connection could be established since the client did not read the welcome message during the initial connection attempt. Contact names are now being properly parsed into the envelope Introduced featuresImplemented SMTP support, the client is contained inside the Crystalbyte.Equinox.Smtp.dll which was added to the current release.OMEGA CMS: OMEGA CMA - Alpha 0.2: A few fixes for OMEGA Framework (DLL) A few tweeks for OMEGA CMSJSLint for Visual Studio 2010: 1.2.4: Bug Fix release.Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.2: New control, Toast Prompt! Removed progress bar since Silverlight Toolkit Feb 2010 has it.Umbraco CMS: Umbraco 4.7: Service release fixing 31 issues. A full changelog will be available with the final stable release of 4.7 Important when upgradingUpgrade as if it was a patch release (update /bin, /umbraco and /umbraco_client). For general upgrade information follow the guide found at http://our.umbraco.org/wiki/install-and-setup/upgrading-an-umbraco-installation 4.7 requires the .NET 4.0 framework Web.Config changes Update the web web.config to include the 4 changes found in (they're clearly marked in...HubbleDotNet - Open source full-text search engine: V1.1.0.0: Add Sqlite3 DBAdapter Add App Report when Query Cache is Collecting. Improve the performance of index through Synchronize. Add top 0 feature so that we can only get count of the result. Improve the score calculating algorithm of match. Let the score of the record that match all items large then others. Add MySql DBAdapter Improve performance for multi-fields sort . Using hash table to access the Payload data. The version before used bin search. Using heap sort instead of qui...Silverlight????[???]: silverlight????[???]2.0: ???????,?????,????????silverlight????。DBSourceTools: DBSourceTools_1.3.0.0: Release 1.3.0.0 Changed editors from FireEdit to ICSharpCode.TextEditor. Complete re-vamp of Intellisense ( further testing needed). Hightlight Field and Table Names in sql scripts. Added field dropdown on all tables and views in DBExplorer. Added data option for viewing data in Tables. Fixed comment / uncomment bug as reported by tareq. Included Synonyms in scripting engine ( nickt_ch ).IronPython: 2.7 Release Candidate 1: We are pleased to announce the first Release Candidate for IronPython 2.7. This release contains over two dozen bugs fixed in preparation for 2.7 Final. See the release notes for 60193 for details and what has already been fixed in the earlier 2.7 prereleases. - IronPython TeamCaliburn Micro: A Micro-Framework for WPF, Silverlight and WP7: Caliburn.Micro 1.0 RC: This is the official Release Candicate for Caliburn.Micro 1.0. The download contains the binaries, samples and VS templates. VS Templates The templates included are designed for situations where the Caliburn.Micro source needs to be embedded within a single project solution. 
This was targeted at government and other organizations that expressed specific requirements around using an open source project like this. NuGet This release does not have a corresponding NuGet package. The NuGet pack...Caliburn: A Client Framework for WPF and Silverlight: Caliburn 2.0 RC: This is the official Release Candidate for Caliburn 2.0. It contains all binaries, samples and generated code docs.Chiave File Encryption: Chiave 0.9: Application for file encryption and decryption using 512 Bit rijndael encyrption algorithm with simple to use UI. Its written in C# and compiled in .Net version 3.5. It incorporates features of Windows 7 like Jumplists, Taskbar progress and Aero Glass. Feedbacks are Welcome!....Rawr: Rawr 4.0.20 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...PowerGUI Visual Studio Extension: PowerGUI VSX 1.3.2: New FeaturesPowerGUI Console Tool Window PowerShell Project Type PowerGUI 2.4 SupportMiniTwitter: 1.66: MiniTwitter 1.66 ???? ?? ?????????? 2 ??????????????????? User Streams ?????????Windows Phone 7 Isolated Storage Explorer: WP7 Isolated Storage Explorer v1.0 Beta: Current release features:WPF desktop explorer client Visual Studio integrated tool window explorer client (Visual Studio 2010 Professional and above) Supported operations: Refresh (isolated storage information), Add Folder, Add Existing Item, Download File, Delete Folder, Delete File Explorer supports operations running on multiple remote applications at the same time Explorer detects application disconnect (1-2 second delay) Explorer confirms operation completed status Explorer d...Image.Viewer: 2011: First version of 2011Silverlight Toolkit: Silverlight for Windows Phone Toolkit - Feb 2011: Silverlight for Windows Phone Toolkit OverviewSilverlight for Windows Phone Toolkit offers developers additional controls for Windows Phone application development, designed to match the rich user experience of the Windows Phone 7. Suggestions? Features? Questions? Ask questions in the Create.msdn.com forum. Add bugs or feature requests to the Issue Tracker. Help us shape the Silverlight Toolkit with your feedback! Please clearly indicate that the work items and issues are for the phone t...New Projects.Net Dating & Social Software Suite: A free, open source dating and social software suite. Dot Net Dating Suite uses the latest features in .Net combined with experienced developers efforts. 
This allows us to deliver professional public SEO websites with the support of an enterprise enabled administration suite.AllegroSharp: Biblioteka zapewniajaca obsluge serwisu aukcyjnego Allegro.pl w oparciu o udostepniona publicznie usluge Allegro Web API.asdfasdfasdf: asdfasdfasdfauto: auto siteAWS Monitor: A web app utilizing the Amazon Web Services API with a focus on browsing through and analyzing data graphically, mostly from CloudWatch - the API providing metrics about the usage of all the Amazon Web Services such as EC2 (Cloud Computing) and ELB (Elastic Load Balancing).Calculation of FEM using DirectX Libraries: Calculation of FEM using DirectX LibrariesCART: System CART (Creative Application to Remedy Traffics) poprawi przepustowosc infrastruktury drogowej i plynnosci ruchu pojazdów w aglomeracjach miejskich. Chaining Assertion for MSTest: Chaining Assertion for MSTest. Simpleness Assert Extension Method on Object. This provides only one .cs file.CodeFirst Membership Provider: Custom Membership Provider based on SimpleMembershipProvider using Entity Framework Code-First, intended for use in ASP.NET MVC 3. Thanks to ASP.NET Web Pages team for building a simple membership provider :PFourSquareSharp: Simple OAuth2 Implementation of FourSquare API in .NETHelix Engine: The Helix Engine is an Isometric Rendering Engine for SilverlightLighthouse - Versatile Unit Test Runner for Silverlight: Lighthouse provides you with set of simple but powerful tools to run Silverlight Unit Tests from Command Line, Windows Test Runner Application and from Resharper plugin for Visual Studio.MISCE: Designs for a Minimal Instruction Set Computer for Education along with a simulator/compiler written in Excel. A computer "from first principles" that utilises a minimal instruction set and can be built for demonstration purposes as part of a technology or computing course. Oh My Log: Oh My Log inventorises eventlog files from Windows Servers in a network. Clean and simple sysadmin tool. OSIS Interop Tests: These are the tests created for use by the OSIS working group (http://osis.idcommons.net) to test interoperability features of user-centric solutions during interop events.Program Options: Parse command line optionsSearchable Property Updater for Microsoft Dynamics CRM 2011: Searchable Property Updater makes it easier for Microsoft Dynamics CRM 2011 customizers to bulk change the "Active for advanced find" property of entities attributesSilverlight Bolo: silverlight bolo cloneSimulacia a vizualizacia vlasov: simulacia a vizualizacia vlasov pomocou GPUSmartkernel: Smartkernel:Framework,Example,Platform,Tool,AppSusuCMS: SusuCMSSwimlanes for Scrum: Swimlanes Taskboard for Scrum Team Projects with TFS setup using SfTS (Scrum for Team System) V3 templateteamdoer: These are the actual source codes that power teamdoer.com - a simple collaborative task/project management software apptiencd: Tiencd projectVG Current Item Display Web Part: VG Current Item Display Web Part allows to display current SharePoint item metadata. For example if you put it on to the Web Part page or list item form it will display the custom view of the current item properties. Output is easily customized with XSLT file.Virtual Interactive Shopper: Addin for SeeMe Rehabilitation System. More information about the SeeMe system can be found at http://www.brontesprocessing.com/health/SeeMeVisual Studio Strategy Manager: Provide a way to create code generation strategies and to manage them in Visual Studio 2010. 
Strategies can interact with Visual Studio events like document events (saved, closed, opened), building event or DSL Tools events (model element created, deleted, modified..).VMarket: <project name>VMarket</project name> <programming language>asp.net</programming language> <project description> VMarket is virtual market systems which allow users to act as seller or buyer and manage their property online </project description>webclerk: webclerk's project based on microsoft techWebTest: Just Test!Work efficiency (WE) ????????: Work efficiency ???????? ????: ???? ????zrift: nothing here, move along

    Read the article

  • How to properly deny Railo directory access through Apache

    - by Sn3akyP3t3
    I've been battle tested on this and failed to achieve my goal which is to deny all access to all directories except the Public directory and only allow access to all all other directories with specific IP addresses. To get Railo+Apache+Tomcat installed I pretty much followed this script: https://github.com/talltroym/Railo-Ubuntu-Installer-Script then verified settings with this tutorial: http://blog.nictunney.com/2012/03/railo-tomcat-and-apache-on-amazon-ec2.html From the installation script these mods are enabled: sudo a2enmod ssl sudo a2enmod proxy sudo a2enmod proxy_http sudo a2enmod rewrite sudo a2ensite default-ssl Outside of the script I copied the sites-available to sites-enabled then reloaded Apache. I have a directory created for Railo cmfl located at /var/www/Railo/ Navigating the browser to http ://Server_IP_Address/Railo forces ssl and relocates to https ://Server_IP_Address/Railo which shows off index.cfm. Not providing index.cfm and omitting https indicates that the DirectoryIndex directive and RewriteCond of Apache appears to be working for the sites-enabled VirtualHost. The problem I'm encountering is that I cannot seem to deny access to all directories except Public. My directory structure is rather simple and looks like this: Railo error Public NotPublic Sandbox These are my sites-enabled configurations: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www #Default Deny All to prevent walking backwards in file system Alias /Railo/ "/var/www/Railo/" <Directory ~ ".*/Railo/(?!Public).*"> Order Deny,Allow Deny from All </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html index.cfc RewriteEngine on RewriteCond %{SERVER_PORT} !^443$ RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R] </VirtualHost> and <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www Alias /Railo/ "/var/www/Railo/" <Directory ~ "/var/www/Railo/(?!Public).*"> Order Deny,Allow Deny from All </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. 
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. 
# o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html #Proxy .cfm and cfc requests to Railo ProxyPassMatch ^/(.+.cf[cm])(/.*)?$ http://127.0.0.1:8888/$1 ProxyPassReverse / http://127.0.0.1:8888/ #Deny access to admin except for local clients <Location /railo-context/admin/> Order deny,allow Deny from all # Allow from <Omitted> # Allow from <Omitted> Allow from 127.0.0.1 </Location> </VirtualHost> </IfModule> The apache2.conf includes the following: # Include the virtual host configurations: Include sites-enabled/ <IfModule !mod_jk.c> LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so </IfModule> <IfModule mod_jk.c> JkMount /*.cfm ajp13 JkMount /*.cfc ajp13 JkMount /*.do ajp13 JkMount /*.jsp ajp13 JkMount /*.cfchart ajp13 JkMount /*.cfm/* ajp13 JkMount /*.cfml/* ajp13 # Flex Gateway Mappings # JkMount /flex2gateway/* ajp13 # JkMount /flashservices/gateway/* ajp13 # JkMount /messagebroker/* ajp13 JkMountCopy all JkLogFile /var/log/apache2/mod_jk.log </IfModule> I believe I understand most of this except the jk_module inclusion which I've noticed has an error that shows up in the logs that I can't sort out: [warn] No JkShmFile defined in httpd.conf. 
Using default /etc/apache2/logs/jk-runtime-status

I've checked my regular expression against the directory paths with RegexBuddy just to be sure it was correct. The problem doesn't appear to be regex related, although I may have something wrong in the Directory directive. The Location directive does seem to be working correctly for blocking access to the Railo admin site.
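One thing that might be worth trying here (a sketch only, not tested against this exact setup; 203.0.113.0/24 is a placeholder for whatever address range should keep access): because ProxyPassMatch and the JkMount rules hand .cfm/.cfc requests to Tomcat before Apache ever maps the URL to the filesystem, the <Directory> sections may simply never be consulted for those requests. A URL-space rule via <LocationMatch> applies even to proxied requests, with plain <Directory> blocks covering anything Apache serves itself:

# Sketch only -- untested against this setup; 203.0.113.0/24 stands in for the real allowed range.

# URL-space rule: also catches requests that mod_proxy / mod_jk hand to Tomcat
# before any filesystem <Directory> mapping happens.
<LocationMatch "^/Railo/(?!Public/)">
    Order Deny,Allow
    Deny from all
    Allow from 203.0.113.0/24
</LocationMatch>

# Filesystem rules for anything Apache serves directly: deny the whole tree,
# then re-open Public (the longer path wins when <Directory> sections are merged).
<Directory "/var/www/Railo">
    Order Deny,Allow
    Deny from all
    Allow from 203.0.113.0/24
</Directory>
<Directory "/var/www/Railo/Public">
    Order Allow,Deny
    Allow from all
</Directory>

If that does the trick, the same stanzas would need to live in both the :80 and :443 virtual hosts (or in a shared include), since each VirtualHost is evaluated on its own.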

    Read the article

  • Mysql 100% CPU + Slow query

    - by felipeclopes
I'm using the RDS database from Amazon with some very big tables, and yesterday I started to see 100% CPU utilisation on the server and a bunch of slow query log entries that were not happening before. I checked the queries that were running and got this result from the explain command:

+----+-------------+-------------------------------+--------+----------------------------------------------------------------------------------------------+---------------------------------------+---------+-----------------------------------------------------------------+------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------------------------------+--------+----------------------------------------------------------------------------------------------+---------------------------------------+---------+-----------------------------------------------------------------+------+----------------------------------------------+
| 1 | SIMPLE | businesses | const | PRIMARY | PRIMARY | 4 | const | 1 | Using index; Using temporary; Using filesort |
| 1 | SIMPLE | activities_businesses | ref | PRIMARY,index_activities_users_on_business_id,index_tweets_users_on_tweet_id_and_business_id | index_activities_users_on_business_id | 9 | const | 2252 | Using index condition; Using where |
| 1 | SIMPLE | activities_b_taggings_975e9c4 | ref | taggings_idx | taggings_idx | 782 | const,myapp_production.activities_businesses.id,const | 1 | Using index condition; Using where |
| 1 | SIMPLE | activities | eq_ref | PRIMARY,index_activities_on_created_at | PRIMARY | 8 | myapp_production.activities_businesses.activity_id | 1 | Using where |
+----+-------------+-------------------------------+--------+----------------------------------------------------------------------------------------------+---------------------------------------+---------+-----------------------------------------------------------------+------+----------------------------------------------+

Also, checking the process list, I got something like this:

+----+-----------------+-------------------------------------+----------------------------+---------+------+--------------+------------------------------------------------------------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+-----------------+-------------------------------------+----------------------------+---------+------+--------------+------------------------------------------------------------------------------------------------------+
| 1 | my_app | my_ip:57152 | my_app_production | Sleep | 0 | | NULL |
| 2 | my_app | my_ip:57153 | my_app_production | Sleep | 2 | | NULL |
| 3 | rdsadmin | localhost:49441 | NULL | Sleep | 9 | | NULL |
| 6 | my_app | my_other_ip:47802 | my_app_production | Sleep | 242 | | NULL |
| 7 | my_app | my_other_ip:47807 | my_app_production | Query | 231 | Sending data | SELECT my_fields... |
| 8 | my_app | my_other_ip:47809 | my_app_production | Query | 231 | Sending data | SELECT my_fields... |
| 9 | my_app | my_other_ip:47810 | my_app_production | Query | 231 | Sending data | SELECT my_fields... |
| 10 | my_app | my_other_ip:47811 | my_app_production | Query | 231 | Sending data | SELECT my_fields... |
| 11 | my_app | my_other_ip:47813 | my_app_production | Query | 231 | Sending data | SELECT my_fields... |
...
So based on the numbers, it looks like there is no reason to have a slow query, since the worst execution plan is the one that goes through 2k rows, which is not much.

Edit 1
Some additional information that might be useful is the slow query log:

SET timestamp=1401457485;
SELECT my_query...
# User@Host: myapp[myapp] @ ip-10-195-55-233.ec2.internal [IP] Id: 435
# Query_time: 95.830497 Lock_time: 0.000178 Rows_sent: 0 Rows_examined: 1129387

Edit 2
After profiling, I got this result. The result has approximately 250 rows with two columns each.

+----------------------+----------+
| state | duration |
+----------------------+----------+
| Sending data | 272 |
| removing tmp table | 0 |
| optimizing | 0 |
| Creating sort index | 0 |
| init | 0 |
| cleaning up | 0 |
| executing | 0 |
| checking permissions | 0 |
| freeing items | 0 |
| Creating tmp table | 0 |
| query end | 0 |
| statistics | 0 |
| end | 0 |
| System lock | 0 |
| Opening tables | 0 |
| logging slow query | 0 |
| Sorting result | 0 |
| starting | 0 |
| closing tables | 0 |
| preparing | 0 |
+----------------------+----------+

Edit 3
Adding the query as requested:

SELECT activities.share_count, activities.created_at
FROM `activities_businesses`
INNER JOIN `businesses` ON `businesses`.`id` = `activities_businesses`.`business_id`
INNER JOIN `activities` ON `activities`.`id` = `activities_businesses`.`activity_id`
JOIN taggings activities_b_taggings_975e9c4
  ON activities_b_taggings_975e9c4.taggable_id = activities_businesses.id
  AND activities_b_taggings_975e9c4.taggable_type = 'ActivitiesBusiness'
  AND activities_b_taggings_975e9c4.tag_id = 104
  AND activities_b_taggings_975e9c4.created_at >= '2014-04-30 13:36:44'
WHERE ( businesses.id = 1 )
  AND ( activities.created_at > '2014-04-30 13:36:44' )
  AND ( activities.created_at < '2014-05-30 12:27:03' )
ORDER BY activities.created_at;

Edit 4
There may be a chance that the indexes are not being applied due to a difference in column type between taggings and activities_businesses on the taggable_id column.

mysql> SHOW COLUMNS FROM activities_businesses;
+-------------+------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| activity_id | bigint(20) | YES | MUL | NULL | |
| business_id | bigint(20) | YES | MUL | NULL | |
+-------------+------------+------+-----+---------+----------------+
3 rows in set (0.01 sec)

mysql> SHOW COLUMNS FROM taggings;
+---------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| tag_id | int(11) | YES | MUL | NULL | |
| taggable_id | bigint(20) | YES | | NULL | |
| taggable_type | varchar(255) | YES | | NULL | |
| tagger_id | int(11) | YES | | NULL | |
| tagger_type | varchar(255) | YES | | NULL | |
| context | varchar(128) | YES | | NULL | |
| created_at | datetime | YES | | NULL | |
+---------------+--------------+------+-----+---------+----------------+

So it is examining way more rows than the explain output shows, probably because some indexes are not being applied. Can you guys help me with that?
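To rule the Edit 4 theory in or out, one hedged thing to try (a sketch only, not a verified fix; the index name is made up, and narrowing a column is only safe if no stored id exceeds the int range, so back up first) would be to align the join column types and add a composite index matching the filters, then re-run EXPLAIN:

-- Sketch: make taggings.taggable_id match activities_businesses.id (int(11));
-- widening activities_businesses.id to bigint(20) instead would also align them.
ALTER TABLE taggings
  MODIFY taggable_id INT(11) NULL;

-- Hypothetical composite index covering the filters used in the query above.
ALTER TABLE taggings
  ADD INDEX idx_taggings_type_tag_taggable_created
    (taggable_type, tag_id, taggable_id, created_at);

If EXPLAIN still reports only a handful of rows while the slow log shows over a million examined, the gap may come from the temporary table and filesort for the ORDER BY rather than from the joins, so comparing the profile's Sending data time before and after the change would also be worth doing.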

    Read the article

  • Why does this Java code not utilize all CPU cores?

    - by ReneS
The attached simple Java code should load all available CPU cores when started with the right parameters. So for instance, you start it with java VMTest 8 int 0 and it will start 8 threads that do nothing but loop and add 2 to an integer, something that runs in registers and does not even allocate new memory. The problem we are facing now is that we cannot get a 24-core machine (AMD, 2 sockets with 12 cores each) fully loaded when running this simple program (with 24 threads, of course). Similar things happen with 2 programs of 12 threads each, or on smaller machines. So our suspicion is that the JVM (Sun JDK 6u20 on Linux x64) does not scale well. Has anyone seen similar things, or can anyone run it and report whether or not it runs well on their machine (>= 8 cores only, please)? Ideas? I tried it on Amazon EC2 with 8 cores too, but the virtual machine seems to behave differently from a real box, so the loading looks very strange there.

package com.test;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class VMTest {
    // Pure register work: no allocation at all.
    public class IntTask implements Runnable {
        @Override
        public void run() {
            int i = 0;
            while (true) {
                i = i + 2;
            }
        }
    }

    // Allocates a new small String on every iteration.
    public class StringTask implements Runnable {
        @Override
        public void run() {
            int i = 0;
            String s;
            while (true) {
                i++;
                s = "s" + Integer.valueOf(i);
            }
        }
    }

    // Allocates a new array of the given size on every iteration.
    public class ArrayTask implements Runnable {
        private final int size;

        public ArrayTask(int size) {
            this.size = size;
        }

        @Override
        public void run() {
            int i = 0;
            String[] s;
            while (true) {
                i++;
                s = new String[size];
            }
        }
    }

    public void doIt(String[] args) throws InterruptedException {
        final String command = args[1].trim();
        ExecutorService executor = Executors.newFixedThreadPool(Integer.valueOf(args[0]));
        for (int i = 0; i < Integer.valueOf(args[0]); i++) {
            Runnable runnable = null;
            if (command.equalsIgnoreCase("int")) {
                runnable = new IntTask();
            } else if (command.equalsIgnoreCase("string")) {
                runnable = new StringTask();
            } else if (command.equalsIgnoreCase("array")) {
                // wire up the array task described in the usage text
                runnable = new ArrayTask(Integer.valueOf(args[2]));
            }
            Future<?> submit = executor.submit(runnable);
        }
        executor.awaitTermination(1, TimeUnit.HOURS);
    }

    public static void main(String[] args) throws InterruptedException {
        if (args.length < 3) {
            System.err.println("Usage: VMTest threadCount taskDef size");
            System.err.println("threadCount: Number 1..n");
            System.err.println("taskDef: int string array");
            System.err.println("size: size of memory allocation for array");
            System.exit(-1);
        }
        new VMTest().doIt(args);
    }
}
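In case it helps with reproducing this, here is a stripped-down variant (a sketch, not the original test program; the class name SpinCheck and the 10-second sampling window are arbitrary) that skips the ExecutorService and periodically prints per-thread progress, making it easier to see whether the busy loops are really spread across cores or packed onto a few:

import java.util.concurrent.atomic.AtomicLongArray;

// Sketch only: spin N busy-loop threads and sample their progress from the main
// thread. Written against Java 6 to match the setup above.
public class SpinCheck {
    public static void main(String[] args) throws InterruptedException {
        final int n = args.length > 0 ? Integer.parseInt(args[0])
                                      : Runtime.getRuntime().availableProcessors();
        final AtomicLongArray progress = new AtomicLongArray(n);
        for (int t = 0; t < n; t++) {
            final int id = t;
            Thread worker = new Thread(new Runnable() {
                public void run() {
                    long i = 0;
                    while (true) {
                        i += 2;                      // same register-only work as IntTask
                        if ((i & 0xFFFFFFL) == 0) {
                            progress.set(id, i);     // rare write so the main thread can observe progress
                        }
                    }
                }
            });
            worker.setDaemon(true);                  // let the JVM exit when main returns
            worker.start();
        }
        for (int s = 0; s < 10; s++) {               // sample for roughly 10 seconds, then exit
            Thread.sleep(1000L);
            System.out.println(progress);            // AtomicLongArray.toString() lists all counters
        }
    }
}

Watching this next to top with the per-CPU view enabled (press 1) should show whether the idle cores are a scheduling effect or something inside the JVM.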

    Read the article

  • CodePlex Daily Summary for Thursday, October 18, 2012

    CodePlex Daily Summary for Thursday, October 18, 2012Popular ReleasesGac Library -- C++ Utilities for GPU Accelerated GUI and Script: Gaclib 0.4.0.0: Gaclib.zip contains the following content GacUIDemo Demo solution and projects Public Source GacUI library Document HTML document. Please start at reference_gacui.html Content Necessary CSS/JPG files for document. Improvements to the previous release Added Windows 8 theme (except tab control, will be provided for the next release) GacUI will choose to use Windows 7 theme or Windows 8 theme as default theme according to OS version Added 1 new demos Template.Window.CustomizedBorder ...Yahoo! UI Library: YUI Compressor for .Net: Version 2.1.1.0 - Sartha (BugFix): - Revered back the embedding of the 2x assemblies.Visual Studio Team Foundation Server Branching and Merging Guide: v2.1 - Visual Studio 2012: Welcome to the Branching and Merging Guide What is new? The Version Control specific discussions have been moved from the Branching and Merging Guide to the new Advanced Version Control Guide. The Branching and Merging Guide and the Advanced Version Control Guide have been ported to the new document style. See http://blogs.msdn.com/b/willy-peter_schaub/archive/2012/10/17/alm-rangers-raising-the-quality-bar-for-documentation-part-2.aspx for more information. Quality-Bar Details Documentatio...D3 Loot Tracker: 1.5.5: Compatible with 1.05.Write Once, Play Everywhere: MonoGame 3.0 (BETA): This is a beta release of the up coming MonoGame 3.0. It contains an Installer which will install a binary release of MonoGame on windows boxes with the following platforms. Windows, Linux, Android and Windows 8. If you need to build for iOS or Mac you will need to get the source code at this time as the installers for those platforms are not available yet. The installer will also install a bunch of Project templates for Visual Studio 2010 , 2012 and MonoDevleop. For those of you wish...Windawesome: Windawesome v1.4.1 x64: Fixed switching of applications across monitors Changed window flashing API (fix your config files) Added NetworkMonitorWidget (thanks to weiwen) Any issues/recommendations/requests for future versions? This is the 64-bit version of the release. Be sure to use that if you are on a 64-bit Windows. Works with "Required DLLs v3".CODE Framework: 4.0.21017.0: See change log in the Documentation section for details.Global Stock Exchange (Hobby Project): Global Stock Exchange - Invst Banking (Hobby Proj): Initial VersionMagelia WebStore Open-source Ecommerce software: Magelia WebStore 2.1: Add support for .net 4.0 to Magelia.Webstore.Client and StarterSite version 2.1.254.3 Scheduler Import & Export feature (for Professional and Entreprise Editions) UTC datetime and timezone support .net 4.5 and Visual Studio 2012 migration client magelia global refactoring release of a nugget package to help developers speed up development http://nuget.org/packages/Magelia.Webstore.Client optimization of the data update mechanism (a.k.a. "burst") Performance improvment of the d...JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.2: JayData is a unified data access library for JavaScript to CRUD + Query data from different sources like OData, MongoDB, WebSQL, SqLite, HTML5 localStorage, Facebook or YQL. The library can be integrated with Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6 minutes video Sencha Touch 2 example app using JayData: Netflix browser. 
What's new in JayData 1.2.2 For detailed release notes check the release notes. Revitalized IndexedDB providerNow you c...JaySvcUtil - generate JavaScript context from OData metadata: JaySvcUtil 1.2.2: You will need the JayData library to use contexts generated with JaySvcUtil! Get it from here: http://jaydata.codeplex.comVFPX: FoxcodePlus: FoxcodePlus - Visual Studio like extensions to Visual FoxPro IntelliSense.Droid Explorer: Droid Explorer 0.8.8.8 Beta: fixed the icon for packages on the desktop fixed the install dialog closing right when it starts removed the link to "set up the sdk for me" as this is no longer supported. fixed bug where the device selection dialog would show, even if there was only one device connected. fixed toolbar from having "gap" between other toolbar removed main menu items that do not have any menus Fiskalizacija za developere: FiskalizacijaDev 1.0: Prva verzija ovog projekta, još je uvijek oznacena kao BETA - ovo znaci da su naša testiranja prošla uspješno :) No, kako mi ne proizvodimo neki software za blagajne, tako sve ovo nije niti isprobano u "realnim" uvjetima - svaka je sugestija, primjedba ili prijava bug-a je dobrodošla. Za sve ovo koristite, molimo, Discussions ili Issue Tracker. U ovom trenutku runtime binary je raspoloživ kao Any CPU za .NET verzije 2.0. Javite ukoliko trebaju i verzije buildane za 32-bit/64-bit kao i za .N...Squiggle - A free open source LAN Messenger: Squiggle 3.2 (Development): NOTE: This is development release and not recommended for production use. This release is mainly for enabling extensibility and interoperability with other platforms. Support for plugins Support for extensions Communication layer and protocol is platform independent (ZeroMQ, ProtocolBuffers) Bug fixes New /invite command Edit the sent message Disable update check NOTE: This is development release and not recommended for production use.AcDown????? - AcDown Downloader Framework: AcDown????? v4.2: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ???? 32??64? ???Linux ????(1)????????Windows XP???,????????.NET Framework 2.0???(x86),?????"?????????"??? (2)???????????Linux???,????????Mono?? ??2...PHPExcel: PHPExcel 1.7.8: See Change Log for details of the new features and bugfixes included in this release, and methods that are now deprecated. Note changes to the PDF Writer: tcPDF is no longer bundled with PHPExcel, but should be installed separately if you wish to use that 3rd-Party library with PHPExcel. Alternatively, you can choose to use mPDF or DomPDF as PDF Rendering libraries instead: PHPExcel now provides a configurable wrapper allowing you a choice of PDF renderer. See the documentation, or the PDF s...DirectX Tool Kit: October 12, 2012: October 12, 2012 Added PrimitiveBatch for drawing user primitives Debug object names for all D3D resources (for PIX and debug layer leak reporting)Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.70: Fixed issue described in discussion #399087: variable references within case values weren't getting resolved.GoogleMap Control: GoogleMap Control 6.1: Some important bug fixes and couple of new features were added. There are no major changes to the sample website. 
Source code could be downloaded from the Source Code section selecting branch release-6.1. Thus just builds of GoogleMap Control are issued here in this release. Update 14.Oct.2012 - Client side access fixed NuGet Package GoogleMap Control 6.1 NuGet Package FeaturesBounds property to provide ability to create a map by center and bounds as well; Setting in markup <artem:Goog...New ProjectsAnonymous InfoPath Browser Forms Web Part: Web part for SharePoint 2007 and 2010 that enables the anonymous submission of InfoPath forms.ath sem5 gr3 proj1: just a simple app to learn how to work in groups, just a start in our programming careerAtif M DNNTaskManager: This is a simple task manager AWS for .NET Sample (Amazon EC2, S3, SQS, DynamoDB): Amazon Web Services (AWS) Sample for .NET (C#) with Asp.NET MVC and Web API. Including S3, DynamoDB, Elastic Beanstalk and SQSCodingWheels.DataTypes: DataTypes tries to make it easier for developers to have concrete typesafe objects for working with many common forms of data. Many times these data objects are just doubles or ints floating through your code with abbreviations on them describing what they represent.Display attachments (list view) SP 2010: Display attachments (SP 2010) is a field for all type lists without types: Document library, Image library, Links, Surveys.Donation Tracker: This is a school projectEveHQ : The Internet Spaceship Toolkit: Multi-faceted character application for Eve-Online. Includes pilot monitoring, skill queue planning, ship fitting, industry and more.EventConni: EventConniFileCloud.Net: FileCloud.NET is a .Net wrapper for http://filecloud.io/ service API.Foodies: FoodiesForce PowerPivot Refresh: This project contains an c# utility class (CustomDataRefreshUtils) to refresh a PowerPivot file in SharePoint 2010. Geometric Modeling: REnder a mesh with Visual Studio - C#Heavysoft Mince: Heavysoft Mince is an opportunity to enhance my skills and to show how free software can be used in enterprise-level systems.HP iLO Management Utility: A small utility to provide a nice interface for managing multiple servers over the iLO interface.iMeeting: iMeeting is a simple application built in Java. iTask: iTask for Windows 8 LamWebOS: this is test project.MackProjects_Musicallis: Projeto da disciplina de Técnicas de Programação Aplicada II da Universidade Presbiteriana Mackenzie - SP.Merge PDF: MergePDF is an easy and simple tool which could help you batch merge PDF files (any number you want) into one PDF file quickly. It is completely free. With MerNetGen: O NetGen é um projeto de geração automatizada de interface Web em ASP.NET a partir de modelo de classes do LINQ-TO-SQL, estando disponível como um Addin para o Visual Studio 2008.Other Ways: ?? ! ????!!!!!Physical Therapy Timer: A simple Windows application that makes it easy to set a countdown timer for a required time period, and keep track of how many times the timer has be run.PicoMax: This version of PicoMax has everything you need to use the Seeed OLED without the MS Graphics class. It can be used in Gadgeteer or a regular NETMF project.post tracker: ASP, WEB API, WINDOWS PhoneRandomEngine: Open Source Game Engine.Resident Raver: This is Windows 8 version of Resident Raver which I wrote for my Intro To HTML5 Game Dev book. It requires the Impact JavaScript game framework.RogueTS: RogueTS is a Roguelike engine written in TypeScript. 
It features randomly generated maps, basic lighting effects, collision detection and additional stub code.RonYeeStores: ????????????,??C2C?B2C???????????。ShangHaiTunnel monitor: shang hai tunnel projectSharePoint List Security: Free, open source set of features for SharePoint 2010 to enhance the OOTB list permissions by adding column level permissions for users and groups.Skywave Code Lines Counter: It is a tiny program which will count real written lines of code. Real lines means that we removed autogenerated lines, empty lines and single '{' or '}' lines (in C#) and ... Currently supports .cs and .vb files (for C# and VB) It will say how much you worked ;-)switchcontroles: HAT Switch controle sy seguimientoSyscore: Steps : 1 - Definitions 2 - Requirements 3 - Projecting 4 - Delegation 5 - Prototyping 6 - Testing/Analizing 7 - Finalizing 8 - Distribution 9 - Congratulationstarikscodeplexproject: tarikscodeplexprojectTaskManager04: This project is an effort to implement the coding examples given in the TASK MANAGER video series into a working DotNetNuke program.testc: this is a test ,heihei,yongshengtesttom10172012git01: fds fs fs fs tsUnit - TypeScript Unit Testing Framework: tsUnit is a unit testing framework for TypeScript, written in TypeScript. It allows you to encapsulate your test functions in classes and modules.WebGrease: WebGrease is a suite of tools for optimizing javascript, css files and images.????: ??《?????》、《??》、《???》、《??》???。。。

    Read the article

< Previous Page | 49 50 51 52 53