Search Results

Search found 12056 results on 483 pages for 'apple dev'.


  • Startup applications 14.04

    - by gpalthrow
    I'm trying to mount three partitions by default when I log in. I used to do this with Startup Applications in 12.04, but there seems to be a nasty bug in 14.04 where entries are treated as duplicates and removed (apparently only the command is taken into account, even if the arguments differ). I tried using a single entry that chains all three devices on one line (something like /usr/bin/udisks --mount /dev/sdb1; /usr/bin/udisks --mount /dev/sdb2; /usr/bin/udisks --mount /dev/sdb3;), but that does not seem to work either. What can I do to have the three partitions mounted by default (with their labels)?
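    One workaround, sketched below using the question's own udisks calls, is to wrap the three mounts in a single script and register only that script with Startup Applications, so the duplicate-detection bug never sees more than one entry. The path /usr/local/bin/mount-data.sh is an illustrative choice, not something from the original question.

      #!/bin/sh
      # /usr/local/bin/mount-data.sh (hypothetical path): mount the three data partitions.
      # Because only this one command is registered with Startup Applications, the
      # 14.04 de-duplication bug (which compares commands but ignores arguments)
      # has nothing to collapse.
      /usr/bin/udisks --mount /dev/sdb1
      /usr/bin/udisks --mount /dev/sdb2
      /usr/bin/udisks --mount /dev/sdb3

    Make the script executable with chmod +x and add one Startup Applications entry whose command is the script path.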

    Read the article

  • How can I handle "NTFS partition is in unsafe state"?

    - by user211040
    Error mounting /dev/sda3 at /media/franklcohen/OS: Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sda3" "/media/franklcohen/OS"' exited with non-zero exit status 14: Windows is hibernated, refused to mount. Failed to mount '/dev/sda3': Operation not permitted. The NTFS partition is in an unsafe state. Please resume and shutdown Windows fully (no hibernation or fast restarting), or mount the volume read-only with the 'ro' mount option. I get this error even though I have disabled fast startup in Windows 8. I have shut my computer down four times from within Windows, and I'm using Ubuntu 13.10. What can I do? Please help, thanks.
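    If read access is enough, the error message itself points at a workaround: mount the partition read-only. A minimal sketch with an illustrative mount point is shown below; getting a writable mount still requires a genuinely full Windows shutdown, or discarding the hibernation file (for example with ntfs-3g's remove_hiberfile option), which throws away the hibernated session.

      # Read-only mount works even while Windows is hibernated.
      sudo mkdir -p /mnt/windows
      sudo mount -t ntfs-3g -o ro /dev/sda3 /mnt/windows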

    Read the article

  • X11 Development Libraries

    - by user2592799
    I am new to Ubuntu. I am using Ubuntu 13.10 and trying to install NS-2. During the installation I hit the following error: X11/Xlib.h: No such file or directory. I then tried to install the headers with sudo apt-get install libx11-dev, but this time I get the following error: Reading package lists... Done Building dependency tree Reading state information... Done Package libx11-dev is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'libx11-dev' has no installation candidate. I have no idea how to deal with it. Please help. Thanks in advance.
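    "No installation candidate" usually means the package index is stale or incomplete rather than that the package is really gone, so a hedged first step is to refresh the index and retry; libxt-dev and libxmu-dev are extra headers commonly needed when building ns-2 and are included here for illustration only.

      # Refresh the package lists; a stale index is the usual cause of
      # "has no installation candidate".
      sudo apt-get update
      # Retry the X11 development headers (plus two headers ns-2 builds often want).
      sudo apt-get install libx11-dev libxt-dev libxmu-dev
      # If it still fails, check that the "main" component is enabled in
      # /etc/apt/sources.list, since that is where libx11-dev lives on 13.10.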

    Read the article

  • Is it possible to boot without mounting /home?

    - by Exeleration-G
    I want to back up my /home partition on /dev/sda6 using partclone, a command line utility. To do so, I first have to unmount the partition that I want to back up. Most of the time this is easy, but /home is used by so many processes that it can't be unmounted without first killing all of them. So the thing I'm looking for is a way to boot Ubuntu without mounting /home, so I can back up the unmounted /dev/sda6 partition. Is that possible? To be clear, it would be nice if this special boot could be 'one-time-only', so I'm not looking for ways to change /etc/fstab so that /dev/sda6 won't be mounted; that would require me to change /etc/fstab twice each time just to make a backup. I'm aware that there are other backup solutions available, such as deja-dup. I'd like to use partclone, though.
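    Instead of a special boot, one hedged alternative is to drop to a text console, stop the display manager so nothing keeps /home busy, and unmount it only for the duration of the backup. The sketch below assumes the partition is ext4 and that /backup is some other mounted disk with enough room for the image.

      # Switch to a text console (Ctrl+Alt+F1) and log in there, then:
      sudo service lightdm stop          # stop the graphical session holding /home open
      sudo fuser -vkm /home              # kill any remaining processes using /home
      sudo umount /home
      # Image the now-unmounted partition (assumes ext4; /backup is illustrative).
      sudo partclone.ext4 -c -s /dev/sda6 -o /backup/home.img
      sudo mount /home                   # remount and restore the desktop when done
      sudo service lightdm start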

    Read the article

  • Can't open any of my drives/devices (not USB drive)

    - by Anontu
    I can't open any of my computer's drives (C:, D:, ...). Every time I try to open a drive, the following notice appears (I'm new to Ask Ubuntu, so I can't upload a screenshot; I need 10 reputation to do that): Unable to access "__ Volume". Error mounting /dev/sda7 at /media/MyPC/__Volume: Command-line 'mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sda7" "/media/MyPC/__Volume"' exited with non-zero exit status 14: The disk contains an unclean file system (0, 0). Metadata kept in Windows cache, refused to mount. Failed to mount '/dev/sda7': Operation not permitted. The NTFS partition is in an unsafe state. Please resume and shutdown Windows fully (no hibernation or fast restarting), or mount the volume read-only with the 'ro' mount option. I have Windows installed side by side with Ubuntu 13.04, and I have already done what the notice asks (shutting down Windows properly, and so on), but it is still not working.
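    If booting back into Windows for another full shutdown is not an option, two commonly suggested workarounds from the Ubuntu side are sketched here: mount the volume read-only (safe), or clear the "unclean" flag with ntfsfix so it will mount read-write again. Treat the second as a last resort; it does not repair real corruption.

      # Option 1: read-only access, exactly as the error message suggests.
      sudo mkdir -p /media/MyPC/__Volume
      sudo mount -t ntfs-3g -o ro /dev/sda7 /media/MyPC/__Volume
      # Option 2: clear the dirty/unclean state (ntfsfix ships with ntfs-3g).
      sudo ntfsfix /dev/sda7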

    Read the article

  • External hdd boot entries added to GRUB after upgrade to 13.10

    - by Jason
    Long story short, I was using my laptop's internal HDD as an external drive on my desktop, and I forgot to remove it when I was upgrading to 13.10. Now GRUB on my desktop has added entries for the Windows and Ubuntu partitions that exist on the laptop's HDD, but I don't want them to be there. Can I safely remove them? My GRUB menu looks like this, if I recall correctly:
      Ubuntu
      Advanced Options (or something like this)
      Memtest
      Another Memtest entry
      Windows 7 (loader) on /dev/sda1
      Windows 7 (loader) on /dev/sdb1
      Ubuntu 13.04
      Advanced Options for Ubuntu 13.04
    The entries from the external/laptop HDD are Windows 7 (loader) on /dev/sdb1, Ubuntu 13.04, and Advanced Options for Ubuntu 13.04.
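    Those entries were generated by os-prober scanning every disk that was attached when the menu was last built, so they can be removed safely; with the laptop drive unplugged, regenerating the menu should be enough. A minimal sketch:

      # With the external/laptop HDD disconnected, rebuild the GRUB menu.
      sudo update-grub
      # update-grub re-runs os-prober against only the disks that are present,
      # so the /dev/sdb entries drop out of /boot/grub/grub.cfg.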

    Read the article

  • /usr/src is eating up all inodes

    - by klingone
    It seems /usr/src (apparently old kernels) has used up all my inodes:
      Filesystem   Inodes   IUsed    IFree IUse% Mounted on
      /dev/sda4    489600  489600        0  100% /
      devtmpfs     219658     539   219119    1% /dev
      none         219844     474   219370    1% /run
      none         219844       3   219841    1% /run/lock
      none         219844       8   219836    1% /run/shm
      /dev/sda6   5963776    8361  5955415    1% /home
    I have tried everything to remove/purge the old kernels, without success. dpkg is not working anymore; I tried a few manual commands, but 12.04 gives me nothing. apt-get and friends refuse to run, supposedly due to lack of space on the hard drive, which is obviously not the real problem. Either way, I cannot install or remove anything. I have read a lot about users with the same problem, but their solutions are not working for me. Is there anyone out there who could help? Thanks a lot!
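    When dpkg and apt are wedged like this, the usual escape hatch is to hand-delete a couple of old kernel header trees to free some inodes, then let the package system finish the cleanup. The version numbers below are placeholders; keep the running kernel reported by uname -r and its matching headers.

      uname -r                    # note the running kernel; never delete this one
      ls /usr/src                 # see which linux-headers-* trees are eating inodes
      # Hand-delete one or two of the OLDEST header trees (placeholder versions shown).
      sudo rm -rf /usr/src/linux-headers-3.2.0-23 /usr/src/linux-headers-3.2.0-23-generic
      # With inodes available again, let apt and dpkg take over properly.
      sudo apt-get -f install
      sudo apt-get autoremove --purge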

    Read the article

  • Hide from google while developing

    - by user210757
    I will be building a (WordPress) web site. While I am developing it, other team members will be pushing content. I'd like to keep it hidden from Google while it is under development. It will be hosted on GoDaddy. I have thought of not pointing the domain name at it until it goes live and using "preview DNS", or buying a static IP during development, or hosting the dev site in a subdirectory ("/dev/") until it is ready and then moving it up a level. If it sits in the dev directory I'd add an .htaccess rule or a robots.txt so it isn't crawled. Is any of this a bad idea? Will Google penalize any of this, for example by indexing the site by IP and then associating that with the domain later on? Any better ideas?
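    Both of the ideas mentioned fit in a couple of lines. The robots.txt below asks well-behaved crawlers to stay out, while the .htaccess snippet (Apache, as on typical GoDaddy shared hosting) puts the /dev/ copy behind a password so nothing can be indexed or associated with the domain in the first place; the password-file path is illustrative.

      # robots.txt at the development site's root
      User-agent: *
      Disallow: /

      # .htaccess inside /dev/ -- HTTP basic authentication
      AuthType Basic
      AuthName "Development site"
      AuthUserFile /home/youruser/.htpasswd
      Require valid-user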

    Read the article

  • Update failing. Not enough space on /tmp

    - by KodeSeeker
    I'm not able to run Update Manager; I get an error saying that there is not enough free space in the /tmp directory. I've practically cleaned out the /tmp directory, but the error persists. Any help would be appreciated. Here's df -h:
      /dev/loop0   13G   11G  952M  92% /
      udev        2.0G  4.0K  2.0G   1% /dev
      tmpfs       785M  920K  784M   1% /run
      none        5.0M     0  5.0M   0% /run/lock
      none        2.0G  584K  2.0G   1% /run/shm
      /dev/sda6    20G   14G  6.4G  68% /host
      overflow    1.0M   16K 1008K   2% /tmp
    Thanks.
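    The telling line is the last one: when / is too full at boot, Ubuntu mounts an emergency 1 MB tmpfs called "overflow" on /tmp, and that tiny filesystem is what Update Manager is running out of. A hedged fix is to free some space on / and then remove the overflow mount:

      sudo apt-get clean        # drop cached .deb files to free room on /
      sudo umount /tmp          # remove the 1 MB overflow mount (close apps using /tmp first)
      # /tmp now lives on / again with normal free space; a reboot achieves the
      # same thing once / has enough room.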

    Read the article

  • How can I use the Homebrew Python with Homebrew MacVim on Mountain Lion?

    - by Stephen Jennings
    I originally asked and answered this question: How can I use the Homebrew Python version with Homebrew MacVim? These instructions worked on Snow Leopard using Xcode 4.0.1 and associated developer tools. However, they no longer seem to work on Mountain Lion with Xcode 4.4.1. My goal is to leave the system's version of Python completely untouched, and to only install PyPI packages into Homebrew's site-packages directory. I want to use the vim_bridge package in MacVim, so I need to compile MacVim against the Homebrew version of Python. I've edited the MacVim formula to add these to the arguments: --enable-pythoninterp=dynamic --with-python-config-dir=/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/Current/lib/python2.7/config Then I install with the command: brew install macvim --override-system-vim --custom-icons --with-cscope --with-lua However, it still seems to be somehow using Python 2.7.2 from the system. This seems strange to me because it also seems to be using the correct executable. :python print(sys.version) 2.7.2 (default, Jun 20 2012, 16:23:33) [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] :python print(sys.executable) /usr/local/bin/python $ /usr/local/bin/python --version Python 2.7.3 $ /usr/local/bin/python -c "import sys; print(sys.version)" 2.7.3 (default, Aug 12 2012, 21:17:22) [GCC 4.2.1 Compatible Apple Clang 4.0 ((tags/Apple/clang-421.0.60))] $ readlink /usr/local/lib/python2.7/config /usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/Current/lib/python2.7/config I've removed everything in /usr/local and reinstalled Homebrew by running these commands: $ ruby <(curl -fsSkL raw.github.com/mxcl/homebrew/go) $ brew install git mercurial python ruby $ brew install macvim (nope, still broken) $ brew remove macvim $ ln -s /usr/local/Cellar/python/..../python2.7/config /usr/local/lib/python2.7/config $ brew install macvim
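    A hedged extra diagnostic, building on the :python checks already shown: ask the embedded interpreter for sys.prefix, which reveals which Python framework MacVim actually loaded (sys.executable only reflects PATH). The two paths in the comments are what the system and Homebrew installs would be expected to report.

      :python import sys; print(sys.prefix)
      " System Python 2.7.2 would print /System/Library/Frameworks/Python.framework/Versions/2.7
      " Homebrew's 2.7.3 would print /usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7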

    Read the article

  • OSX 10.6 goes unresponsive

    - by mjb
    This behavior continues to perplex me. My MBP, running 10.6.7, stops responding to all Apple-based software. Whatever software I have open remains open (Terminal, iTunes, Safari), but if I try to use the F-shortcuts or launch any OSX-based software not already open (System Preferences for example) it just bounces in the dock then never launches. I also cannot reboot without hard rebooting. I left terminal open, so I see the following in /var/log/system.log Jun 25 19:39:02 mjb-2 com.apple.ReportCrash.Root[59432]: 2011-06-25 19:39:02.585 ReportCrash[59432:7f1f] Saved crash report for CoreServicesUIAgent[59576] version ??? (???) to /Library/Logs/DiagnosticReports/CoreServicesUIAgent_2011-06-25-193902_localhost.crash Jun 25 19:39:02 mjb-2 com.apple.ReportCrash.Root[59432]: 2011-06-25 19:39:02.586 ReportCrash[59432:b10f] Saved crash report for quicklookd[59571] version ??? (???) to /Library/Logs/DiagnosticReports/quicklookd_2011-06-25-193902_localhost.crash Two requests: (1) please don't send this off to the Apple area so it can die a slow painful rotting death of tumbleweed. (2) Suggest what I should kill -9 or logs to look at to cut this sh*$ out. Cheers, mjb
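    Before reaching for kill -9, it may help to see what is actually crashing; the paths below are the same ones already appearing in the log excerpt. Killing the WindowServer is listed only as a hedged last resort: it drops you back to the login screen without a hard reboot, but it does end the GUI session.

      tail -f /var/log/system.log                      # watch the log while reproducing the hang
      ls -lt /Library/Logs/DiagnosticReports | head    # most recent crash reports saved by ReportCrash
      sudo killall WindowServer                        # last resort: restarts the GUI instead of hard rebooting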

    Read the article

  • How to reliably mount a shared folder /volume/folder at boot up

    - by Tanmay
    Following is my sample.sh in /usr/local/bin/:
      #!/bin/sh
      mkdir -p /Volumes/folder
      mount -t afp -o rw afp://user:password@server_name/folder_name /Volumes/folder
    Following is my com.apple.sample.plist in /Library/LaunchAgents/:
      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
      <plist version="1.0">
      <dict>
          <key>Label</key>
          <string>com.apple.sample</string>
          <key>ProgramArguments</key>
          <array>
              <string>/usr/local/bin/sample.sh</string>
          </array>
          <key>RunAtLoad</key>
          <true/>
      </dict>
      </plist>
    I can run sample.sh on its own and it works fine, but the mount is not performed at startup. I have also tried putting the commands in launchd.conf:
      mkdir -p /Volumes/folder
      mount -t afp -o rw afp://user:[email protected]/testsuites /Volumes/folder
    Still not working.
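    One frequent cause of this pattern failing is that the agent runs before the network (or the AFP server) is reachable, so the mount silently fails once and never retries. A hedged tweak is to make the script retry; the 30 x 10-second budget below is an arbitrary illustrative choice.

      #!/bin/sh
      # Retry the AFP mount until it succeeds or we give up after ~5 minutes.
      mkdir -p /Volumes/folder
      i=0
      while [ $i -lt 30 ]; do
          mount -t afp -o rw afp://user:password@server_name/folder_name /Volumes/folder && exit 0
          sleep 10
          i=$((i + 1))
      done
      exit 1

    Checking launchd's view of the job (launchctl list | grep com.apple.sample) also tends to show quickly whether the script ran at all.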

    Read the article

  • Understanding the value of Customer Experience & Loyalty for the Telecommunications Industry

    - by raul.goycoolea
    Worried by economic woes and market forces, especially in mature markets, communications service providers (CSPs) increasingly focus on improving customer experience. In fact, it seems difficult to find a major message by a C-level executive in the developed world that does not include something on "meeting and exceeding customers' needs". Frequently in customer satisfaction studies by prominent firms, CSPs fall short of the leadership demonstrated by other industries that take customer-centric approaches to their bottom-line strategies. Consider the following:Despite the continued impact of global economic crisis, in July 2010, Apple Computer posted record revenue and net quarterly profit. Those who attribute the results primarily to the iPhone 4 launch should note that Apple also shipped around 30% more Macintosh computers than the same period the previous year. Even sales of the iPod line increased by 8% in a highly commoditized, shrinking media player market. Finally, Apple began selling iPads during the quarter, with total sales of more than 3 million units. What does Apple have that the others lack? Well, some great products (and services) to be sure, but it also excels at customer service and support, marketing, and distribution, and has one of the strongest brands globally. Its products are useful, simple to use, easy to acquire and augment, high quality, and considered very cool. They also evoke such an emotional response from many of Apple's customers, which they turn up their noses at competitive products.In other words, Apple appears to have mastered virtually every aspect of customer experience and the resultant loyalty of its customer base - even in difficult financial times. Through that unwavering customer focus, Apple continues to drive its revenues and profits to new heights. Other customer loyalty leaders like Wal-Mart, Google, Toyota and Honda are also doing well by focusing on customer experience as an essential driver of profitability. Service providers should note this performance and ask themselves how they might leverage the same principles to increase their own profitability. After all, that is what customer experience and loyalty are all about: profitability.To successfully manage all the critical touch points of customer experience, CSPs must shun the one-size-fits-all approach. They can no longer afford to view customer service fundamentally as an act of altruism - which mentality dates back to the industry's civil service days, when CSPs were typically government organizations that were critical to economic development and public safety.As regulators and public officials have pushed, and continue to push, service providers to new heights of reliability - using incentives and punishments - most CSPs already have some of the fundamental building blocks of customer service in place. Yet despite that history and experience, service providers still lag other industries in providing what is seen as good customer service.As we observed in the TMF's 2009 Insights Research report, Customer Experience Management: Driving Loyalty & Profitability there has been resurgence in interest by CSPs. More and more of them have stated ambitions to catch up other industries, and they are realizing that good customer service is a powerful strategy for increasing business performance and profitability, not an act of good will.CSPs are recognizing the connection between customer experience and profitability, as demonstrated in many studies. 
For example, according to research by Bain & Company, a 5 percent improvement in customer retention rates can yield as much as a 75 percent increase in profits for companies across a range of industries.After decades of customer experience strategy formulation, Bain partner and business author, Frederick Reichheld, considers "would you recommend us to a friend?" as the ultimate question for a customer. How many times have you or your friends recommended an iPod, iPhone or a Mac? What do your children recommend to their peers? Their peers to them?There are certain steps service providers have to take to create more personalized relationships with their customers, as well as reduce churn and increase profitability, all while becoming leaner and more agile. First, they have to define customer experience, we define it as the result of the sum of observations, perceptions, thoughts and feelings arising from interactions and relationships between customers and their service provider(s). Virtually every customer touch point - whether directly or indirectly linked to service providers and their partners - contributes to customer perception, satisfaction, loyalty, and ultimately profitability. Gaining leadership in customer experience and satisfaction will not be a simple task, as it is affected by virtually every customer-facing aspect of the service provider, and in turn impacts the service provider deeply - especially on the all-important bottom line. The scope of issues affecting customer experience is complex and dynamic.With new services, devices and applications extending the basis of customer experience to domains beyond the direct control of the service provider, it is likely to increase in complexity and dynamism.Customer loyalty = increased profitsAs stated earlier, customer experience programs are not fundamentally altruistic exercises, but a strategic means of improving competitiveness and profitability in the short and long term. Loyalty is essential to deriving long term profits from customers.Some of the earliest loyalty programs date back to the 1930s, when packaged goods companies offered embedded coupons for rewards to buyers, and eventually retail chains began offering reward programs to frequent shoppers. These programs continued for decades but were leapfrogged in the 1980s by more aggressive programs from the airlines.This movement was led by American Airlines, which launched the first full-scale loyalty marketing program of the modern era with the AAdvantage frequent flyer scheme. It was the first to reward frequent fliers with notional air miles that could be accumulated and later redeemed for free travel. Figure 1: Opportunities example of Customer loyalty driven profitOther airlines and travel providers were quick to grasp the incredible value of providing customers with an incentive to use their company exclusively. 
Within a few years, dozens of travel industry companies launched similar initiatives and now loyalty programs are achieving near-ubiquity in many service industries, especially those in which it is difficult to differentiate offerings by product attributes.The belief is that increased profitability will result from customer retention efforts because:•    The cost of acquisition occurs only at the beginning of a relationship: the longer the relationship, the lower the amortized cost;•    Account maintenance costs decline as a percentage of total costs, or as a percentage of revenue, over the lifetime of the relationship;•    Long term customers tend to be less inclined to switch and less price sensitive which can result in stable unit sales volume and increases in dollar-sales volume;•    Long term customers may initiate word-of-mouth promotions and referrals, which cost the company nothing and arguably are the most effective form of advertising;•    Long-term customers are more likely to buy ancillary products and higher margin supplemental products;•    Long term customers tend to be satisfied with their relationship with the company and are less likely to switch to competitors, making market entry or competitors gaining market share difficult;•    Regular customers tend to be less expensive to service, as they are familiar with the processes involved, require less 'education', and are consistent in their order placement;•    Increased customer retention and loyalty makes the employees' jobs easier and more satisfying. In turn, happy employees feed back into higher customer satisfaction in a virtuous circle. Figure 2: The virtuous circle of customer loyaltyFigure 2 represents a high-level example of a virtuous cycle driven by customer satisfaction and loyalty, depicting how superiority in product and service offerings, as well as strong customer support by competent employees, lead to higher sales and ultimately profitability. As stated above, this is not a new concept, but succeeding with it is difficult. It has eluded many a company driven to achieve profitability goals. Of course, for this circle to be virtuous, the customer relationship(s) must be profitable.Trying to maintain the loyalty of unprofitable customers is not a viable business strategy. It is, therefore, important that marketers can assess the profitability of each customer (or customer segment), and either improve or terminate relationships that are not profitable. This means each customer's 'relationship costs' must be understood and compared to their 'relationship revenue'. Customer lifetime value (CLV) is the most commonly used metric here, as it is generally accepted as a representation of exactly how much each customer is worth in monetary terms, and therefore a determinant of exactly how much a service provider should be willing to spend to acquire or retain that customer.CLV models make several simplifying assumptions and often involve the following inputs:•    Churn rate represents the percentage of customers who end their relationship with a company in a given period;•    Retention rate is calculated by subtracting the churn rate percentage from 100;•    Period/horizon equates to the units of time into which a customer relationship can be divided for analysis. A year is the most commonly used period for this purpose. Customer lifetime value is a multi-period calculation, often projecting three to seven years into the future. In practice, analysis beyond this point is viewed as too speculative to be reliable. 
The model horizon is the number of periods used in the calculation;•    Periodic revenue is the amount of revenue collected from a customer in a given period (though this is often extended across multiple periods into the future to understand lifetime value), such as usage revenue, revenues anticipated from cross and upselling, and often some weighting for referrals by a loyal customer to others; •    Retention cost describes the amount of money the service provider must spend, in a given period, to retain an existing customer. Again, this is often forecast across multiple periods. Retention costs include customer support, billing, promotional incentives and so on;•    Discount rate means the cost of capital used to discount future revenue from a customer. Discounting is an advanced method used in more sophisticated CLV calculations;•    Profit margin is the projected profit as a percentage of revenue for the period. This may be reflected as a percentage of gross or net profit. Again, this is generally projected across the model horizon to understand lifetime value.A strong focus on managing these inputs can help service providers realize stronger customer relationships and profits, but there are some obstacles to overcome in achieving accurate calculations of CLV, such as the complexity of allocating costs across the customer base. There are many costs that serve all customers which must be properly allocated across the base, and often a simple proportional allocation across the whole base or a segment may not accurately reflect the true cost of serving that customer;  This is made worse by the fragmentation of customer information, which is likely to be across a variety of product or operations groups, and may be difficult to aggregate due to different representations.In addition, there is the complexity of account relationships and structures to take into consideration. Complex account structures may not be understood or properly represented. For example, a profitable customer may have a separate account for a second home or another family member, which may appear to be unprofitable. If the service provider cannot relate the two accounts, CLV is not properly represented and any resultant cancellation of the apparently unprofitable account may result in the customer churning from the profitable one.In summary, if service providers are to realize strong customer relationships and their attendant profits, there must be a very strong focus on data management. This needs to be coupled with analytics that help business managers and those who work in customer-facing functions offer highly personalized solutions to customers, while maintaining profitability for the service provider. It's clear that acquiring new customers is expensive. Advertising costs, campaign management expenses, promotional service pricing and discounting, and equipment subsidies make a serious dent in a new customer's profitability. That is especially true given the rising subsidies for Smartphone users, which service providers hope will result in greater profits from profits from data services profitability in future.  The situation is made worse by falling prices and greater competition in mature markets.Customer acquisition through industry consolidation isn't cheap either. A North American service provider spent about $2,000 per subscriber in its acquisition of a smaller company earlier this year. 
    While this has allowed it to leapfrog to become the largest mobile service provider in the country, it required a total investment of more than $28 billion (including assumption of the acquiree's debt). While many operating cost synergies clearly made this deal more attractive to the acquiring company, this is certainly an expensive way to acquire customers: the cost per subscriber in this case is not out of line with the prices others have paid for acquisitions. While growth by acquisition certainly increases overall revenues, it often creates tremendous challenges for profitability. Organic growth through increased customer loyalty and retention is a more effective driver of profit, as well as a stronger predictor of future profitability. Service providers, especially those in mature markets, are increasingly recognizing this and taking steps toward creating a more personalized, flexible and satisfying experience for their customers. In summary, the clearest path to profitability for companies in virtually all industries is through customer retention and maximization of lifetime value. Service providers would do well to recognize this and focus attention on profitable customer relationships.
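    Putting the inputs listed above together, one common simplified form of the lifetime-value calculation (an illustrative model, not the only formulation) discounts each period's expected margin by both the retention probability and the cost of capital:
      CLV = sum over t = 1..T of (periodic revenue_t - retention cost_t) x profit margin x (retention rate)^t / (1 + discount rate)^t
    For example, with annual revenue of $600, retention cost of $100, a 40 percent margin, an 80 percent retention rate and a 10 percent discount rate over a five-year horizon, the yearly contributions are roughly $145, $106, $77, $56 and $41, for a CLV of about $425.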

    Read the article

  • 8 Mac System Features You Can Access in Recovery Mode

    - by Chris Hoffman
    A Mac’s Recovery Mode is for more than just reinstalling Mac OS X. You’ll find many other useful troubleshooting utilities here — you can use these even if your Mac can’t boot normally. To access Recovery Mode, restart your Mac and press and hold the Command + R keys during the boot-up process. This is one of several hidden startup options on a Mac.
    Reinstall Mac OS X
    Most people know Recovery Mode as the place you go to reinstall OS X on your Mac. Recovery Mode will download the OS X installer files from the Internet if you don’t have them locally, so they don’t take up space on your disk and you’ll never have to hunt for an operating system disc. Better yet, it will download up-to-date installation files so you don’t have to spend hours installing operating system updates later. Microsoft could learn a lot from Apple here.
    Restore From a Time Machine Backup
    Instead of reinstalling OS X, you can choose to restore your Mac from a Time Machine backup. This is like restoring a system image on another operating system. You’ll need an external disk containing a backup image created on the current computer to do this.
    Browse the Web
    The Get Help Online link opens the Safari web browser to Apple’s documentation site. It’s not limited to Apple’s website, though — you can navigate to any website you like. This feature allows you to access and use a browser on your Mac even if it isn’t booting properly. It’s ideal for looking up troubleshooting information.
    Manage Your Disks
    The Disk Utility option opens the same Disk Utility you can access from within Mac OS X. It allows you to partition disks, format them, scan disks for problems, wipe drives, and set up drives in a RAID configuration. If you need to edit partitions from outside your operating system, you can just boot into the recovery environment — you don’t have to download a special partitioning tool and boot into it.
    Choose the Default Startup Disk
    Click the Apple menu on the bar at the top of your screen and select Startup Disk to access the Choose Startup Disk tool. Use this tool to choose your computer’s default startup disk and reboot into another operating system. For example, it’s useful if you have Windows installed alongside Mac OS X with Boot Camp.
    Add or Remove an EFI Firmware Password
    You can also add a firmware password to your Mac. This works like a BIOS password or UEFI password on a Windows or Linux PC. Click the Utilities menu on the bar at the top of your screen and select Firmware Password Utility to open this tool. Use the tool to turn on a firmware password, which will prevent your computer from starting up from a different hard disk, CD, DVD, or USB drive without the password you provide. This prevents people from booting up your Mac with an unauthorized operating system. If you’ve already enabled a firmware password, you can remove it from here.
    Use Network Tools to Troubleshoot Your Connection
    Select Utilities > Network Utility to open a network diagnostic tool. This utility provides a graphical way to view your network connection information. You can also use the netstat, ping, lookup, traceroute, whois, finger, and port scan utilities from here. These can be helpful to troubleshoot Internet connection problems. For example, the ping command can demonstrate whether you can communicate with a remote host and show you if you’re experiencing packet loss, while the traceroute command can show you where a connection is failing if you can’t connect to a remote server.
    Open a Terminal
    If you’d like to get your hands dirty, you can select Utilities > Terminal to open a terminal from here. This terminal allows you to do more advanced troubleshooting. Mac OS X uses the bash shell, just as typical Linux distributions do. Most people will just need to use the Reinstall Mac OS X option here, but there are many other tools you can benefit from. If the Recovery Mode files on your Mac are damaged or unavailable, your Mac will automatically download them from Apple so you can use the full recovery environment.
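    As a small illustration of what the Recovery Terminal makes possible without any extra tools, basic disk checks can be run directly from it; the volume name below is the default one and may differ on your Mac.

      diskutil list                                   # list every disk and partition
      diskutil verifyVolume "/Volumes/Macintosh HD"   # check the startup volume for errors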

    Read the article

  • No, iCloud Isn’t Backing Them All Up: How to Manage Photos on Your iPhone or iPad

    - by Chris Hoffman
    Are the photos you take with your iPhone or iPad backed up in case you lose your device? If you’re just relying on iCloud to manage your important memories, your photos may not be backed up at all. Apple’s iCloud has a photo-syncing feature in the form of “Photo Stream,” but Photo Stream doesn’t actually perform any long-term backups of your photos. iCloud’s Photo Backup Limitations Assuming you’ve set up iCloud on your iPhone or iPad, your device is using a feature called “Photo Stream” to automatically upload the photos you take to your iCloud storage and sync them across your devices. Unfortunately, there are some big limitations here. 1000 Photos: Photo Stream only backs up the latest 1000 photos. Do you have 1500 photos in your Camera Roll folder on your phone? If so, only the latest 1000 photos are stored in your iCloud account online. If you don’t have those photos backed up elsewhere, you’ll lose them when you lose your phone. If you have 1000 photos and take one more, the oldest photo will be removed from your iCloud Photo Stream. 30 Days: Apple also states that photos in your Photo Stream will be automatically deleted after 30 days “to give your devices plenty of time to connect and download them.” Some people report photos aren’t deleted after 30 days, but it’s clear you shouldn’t rely on iCloud for more than 30 days of storage. iCloud Storage Limits: Apple only gives you 5 GB of iCloud storage space for free, and this is shared between backups, documents, and all other iCloud data. This 5 GB can fill up pretty quickly. If your iCloud storage is full and you haven’t purchased any more storage more from Apple, your photos aren’t being backed up. Videos Aren’t Included: Photo Stream doesn’t include videos, so any videos you take aren’t automatically backed up. It’s clear that iCloud’s Photo Stream isn’t designed as a long-term way to store your photos, just a convenient way to access recent photos on all your devices before you back them up for real. iCloud’s Photo Stream is Designed for Desktop Backups If you have a Mac, you can launch iPhoto and enable the Automatic Import option under Photo Stream in its preferences pane. Assuming your Mac is on and connected to the Internet, iPhoto will automatically download photos from your photo stream and make local backups of them on your hard drive. You’ll then have to back up your photos manually so you don’t lose them if your Mac’s hard drive ever fails. If you have a Windows PC, you can install the iCloud Control Panel, which will create a Photo Stream folder on your PC. Your photos will be automatically downloaded to this folder and stored in it. You’ll want to back up your photos so you don’t lose them if your PC’s hard drive ever fails. Photo Stream is clearly designed to be used along with a desktop application. Photo Stream temporarily backs up your photos to iCloud so iPhoto or iCloud Control Panel can download them to your Mac or PC and make a local backup before they’re deleted. You could also use iTunes to sync your photos from your device to your PC or Mac, but we don’t really recommend it — you should never have to use iTunes. How to Actually Back Up All Your Photos Online So Photo Stream is actually pretty inconvenient — or, at least, it’s just a way to temporarily sync photos between your devices without storing them long-term. But what if you actually want to automatically back up your photos online without them being deleted automatically? 
The solution here is a third-party app that does this for you, offering the automatic photo uploads with long-term storage. There are several good services with apps in the App Store: Dropbox: Dropbox’s Camera Upload feature allows you to automatically upload the photos — and videos — you take to your Dropbox account. They’ll be easily accessible anywhere there’s a Dropbox app and you can get much more free Dropbox storage than you can iCloud storage. Dropbox will never automatically delete your old photos. Google+: Google+ offers photo and video backups with its Auto Upload feature, too. Photos will be stored in your Google+ Photos — formerly Picasa Web Albums — and will be marked as private by default so no one else can view them. Full-size photos will count against your free 15 GB of Google account storage space, but you can also choose to upload an unlimited amount of photos at a smaller resolution. Flickr: The Flickr app is no longer a mess. Flickr offers an Auto Upload feature for uploading full-size photos you take and free Flickr accounts offer a massive 1 TB of storage for you to store your photos. The massive amount of free storage alone makes Flickr worth a look. Use any of these services and you’ll get an online, automatic photo backup solution you can rely on. You’ll get a good chunk of free space, your photos will never be automatically deleted, and you can easily access them from any device. You won’t have to worry about storing local copies of your photos and backing them up manually. Apple should fix this mess and offer a better solution for long-term photo backup, especially considering the limitations aren’t immediately obvious to users. Until they do, third-party apps are ready to step in and take their place. You can also automatically back up your photos to the web on Android with Google+’s Auto Upload or Dropbox’s Camera Upload. Image Credit: Simon Yeo on Flickr     

    Read the article

  • iPack -The iOS Application Packager

    - by user13277780
    iOS applications are distributed in .ipa archive files. These files are regular zip files which contain application resources and executable-s. To protect them from unauthorized modifications and to provide identification of their sources, the content of the archives is signed. The signature is included in the application executable of an.ipa archive and protects the executable file itself and the associated resource files. Apple provides native Mac OS tools for signing iOS executable-s (which are actually generic Mach-O code signing tools), but these tools are not generally available on other platforms. To provide a multi-platform development environment for JavaFX based iOS applications, we ported iOS signing and packaging to Java and created a dedicated ipack tool for it. The iPack tool can be used as a last step of creating .ipa package on various operating systems. Prototype has been tested by creating a final distributable for JavaFX application that runs on iPad, all done on Windows 7. Source Code The source code of iPac tool is in OpenJFX project repository. You can find it in: <openjfx root>/rt/tools/ios/Maven/ipack To build the iPack tool use: rt/tools/ios/Maven/ipack$ mvn package After building, you can run the tool: java -jar <path to ipack.jar> <arguments>  Signing keystore The tool uses a java key store to read the signing certificate and the associated private key. To prepare such keystore users can use keytool from JDK. One possible scenario is to import an existing private key and the certificate from a key store used on Mac OS: To list the content of an existing key store and identify the source alias: keytool -list -keystore <src keystore>.p12 -storetype pkcs12 -storepass <src keystore password> To create Java key store and import the private key with its certificate to the keys store: keytool -importkeystore \ -destkeystore <dst keystore> -deststorepass <dst keystore password> \ -srckeystore <src keystore>.p12 -srcstorepass <src keystore password> -srcstoretype pkcs12 \ -srcalias <src alias> -destalias <dst alias> -destkeypass <dst key password> Another scenario would be to generate a private / public key pair directly in a Java key store and create a certificate request from it. After sending the request to Apple one can then import the certificate response back to the Java key store and complete the signing certificate entry. In both scenarios the resulting alias in the Java key store will contain only a single (leaf) certificate. This can be verified with the following command: keytool -list -v -keystore <ipack keystore> -storepass <keystore password> When looking at the Certificate chain length entry, the number next to it is 1. When an executable file is signed on Mac OS, the resulting signature (in CMS format) includes the whole certificate chain up to the Apple Root CA. The ipack tool includes only the chain which is stored under the alias specified on the command line. So to have the whole chain in the signature we need to replace the single certificate entry under the alias with the corresponding full certificate chain. To do that we need first to create the chain in a separate file. It is easy to create such chain when working with certificates in Base-64 encoded PEM format. A certificate chain can be created by concatenating PEM certificates, which should form the chain, into a single file. 
For iOS signing we need the following certificates in our chain: Apple Root CA Apple Worldwide Developer Relations CA Our signing leaf certificate To convert a certificate from the binary DER format (.der, .cer) to PEM format: keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert -file <certificate>.cer keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert -rfc -file <certificate>.pem To export the signing certificate into PEM format: keytool -exportcert -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -rfc -file SigningCert.pem After constructing a chain from AppleIncRootCertificate.pem, AppleWWDRCA.pem andSigningCert.pem, it can be imported back into the keystore with: keytool -importcert -noprompt -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -keypass <key password> -file SigningCertChain.pem To summarize, the following example shows the full certificate chain replacement process: keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert1 -file AppleIncRootCertificate.cer keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert1 -rfc -file AppleIncRootCertificate.pem keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert2 -file AppleWWDRCA.cer keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert2 -rfc -file AppleWWDRCA.pem keytool -exportcert -keystore ipack.ks -storepass keystorepwd -alias mycert -rfc -file SigningCert.pem cat SigningCert.pem AppleWWDRCA.pem AppleIncRootCertificate.pem >SigningCertChain.pem keytool -importcert -noprompt -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -file SigningCertChain.pem keytool -list -v -keystore ipack.ks -storepass keystorepwd Usage When the ipack tool is started with no arguments it prints the following usage information: -appname MyApplication -appid com.myorg.MyApplication     Usage: ipack <archive> <signing opts> <application opts> [ <application opts> ... ] Signing options: -keystore <keystore> keystore to use for signing -storepass <password> keystore password -alias <alias> alias for the signing certificate chain and the associated private key -keypass <password> password for the private key Application options: -basedir <directory> base directory from which to derive relative paths -appdir <directory> directory with the application executable and resources -appname <file> name of the application executable -appid <id> application identifier Example: ipack MyApplication.ipa -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -basedir mysources/MyApplication/dist -appdir Payload/MyApplication.app -appname MyApplication -appid com.myorg.MyApplication    
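    To confirm that the chain replacement worked, the same keytool -list -v check from earlier can be repeated; after importing SigningCertChain.pem the Certificate chain length reported for the alias should be 3 (signing leaf, Apple WWDR intermediate, Apple root) instead of the original 1.

      keytool -list -v -keystore ipack.ks -storepass keystorepwd -alias mycert | grep "Certificate chain length"
      # Expected after the import: Certificate chain length: 3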

    Read the article

  • How do I install PHP with JSON and OAuth on Mac Snow Leopard?

    - by meilas
    I want to use the Dropbox API via this library, http://code.google.com/p/dropbox-php/. I installed MAMP, and then I tried sudo pecl install oauth but I got the following. >>>> downloading oauth-1.0.0.tgz ... Starting to download oauth-1.0.0.tgz (42,834 bytes) ............done: 42,834 bytes 6 source files, building running: phpize Configuring for: PHP Api Version: 20090626 Zend Module Api No: 20090626 Zend Extension Api No: 220090626 building in /var/tmp/pear-build-root/oauth-1.0.0 running: /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/configure checking for grep that handles long lines and -e... /usr/bin/grep checking for egrep... /usr/bin/grep -E checking for a sed that does not truncate output... /opt/local/bin/gsed checking for cc... cc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether cc accepts -g... yes checking for cc option to accept ISO C89... none needed checking how to run the C preprocessor... cc -E checking for icc... no checking for suncc... no checking whether cc understands -c and -o together... yes checking for system library directory... lib checking if compiler supports -R... no checking if compiler supports -Wl,-rpath,... yes checking build system type... i686-apple-darwin10.4.0 checking host system type... i686-apple-darwin10.4.0 checking target system type... i686-apple-darwin10.4.0 checking for PHP prefix... /usr checking for PHP includes... -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib checking for PHP extension directory... /usr/lib/php/extensions/no-debug-non-zts-20090626 checking for PHP installed headers prefix... /usr/include/php checking if debug is enabled... no checking if zts is enabled... no checking for re2c... no configure: WARNING: You will need re2c 0.13.4 or later if you want to regenerate PHP parsers. checking for gawk... gawk checking for oauth support... yes, shared checking for cURL in default path... found in /usr checking for ld used by cc... /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld checking if the linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) is GNU ld... no checking for /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld option to reload object files... -r checking for BSD-compatible nm... /usr/bin/nm checking whether ln -s works... yes checking how to recognise dependent libraries... pass_all checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking the maximum length of command line arguments... 196608 checking command to parse /usr/bin/nm output from cc object... rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory ok checking for objdir... .libs checking for ar... ar checking for ranlib... ranlib checking for strip... 
strip rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory checking if cc static flag works... rm: conftest.dSYM: is a directory yes checking if cc supports -fno-rtti -fno-exceptions... rm: conftest.dSYM: is a directory no checking for cc option to produce PIC... -fno-common checking if cc PIC flag -fno-common works... rm: conftest.dSYM: is a directory yes checking if cc supports -c -o file.o... rm: conftest.dSYM: is a directory yes checking whether the cc linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) supports shared libraries... yes checking dynamic linker characteristics... darwin10.4.0 dyld checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no >>>> creating libtool appending configuration tag "CXX" to libtool configure: creating ./config.status config.status: creating config.h running: make /bin/sh /private/var/tmp/pear-build-root/oauth-1.0.0/libtool --mode=compile cc -I. -I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth -DPHP_ATOM_INC -I/private/var/tmp/pear-build-root/oauth-1.0.0/include -I/private/var/tmp/pear-build-root/oauth-1.0.0/main -I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -Wall -g -c /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c -o oauth.lo mkdir .libs cc -I. "-I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth" -DPHP_ATOM_INC -I/private/var/tmp/pear-build-root/oauth-1.0.0/include -I/private/var/tmp/pear-build-root/oauth-1.0.0/main "-I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth" -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -Wall -g -c "/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c" -fno-common -DPIC -o .libs/oauth.o In file included from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/php_oauth.h:47, from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c:14: /usr/include/php/ext/pcre/php_pcre.h:29:18: error: pcre.h: No such file or directory In file included from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/php_oauth.h:47, from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c:14: /usr/include/php/ext/pcre/php_pcre.h:37: error: expected '=', ',', ';', 'asm' or '__attribute__' before '*' token /usr/include/php/ext/pcre/php_pcre.h:38: error: expected '=', ',', ';', 'asm' or '__attribute__' before '*' token /usr/include/php/ext/pcre/php_pcre.h:44: error: expected specifier-qualifier-list before 'pcre' make: *** [oauth.lo] Error 1 ERROR: `make' failed </block>
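    The build dies because Snow Leopard ships PHP headers that include pcre.h without shipping pcre.h itself. A hedged workaround, suggested by the /opt/local paths already visible in the configure output (MacPorts appears to be installed), is to borrow the PCRE header from MacPorts, place it where php_pcre.h expects it, and re-run the build; the copy destination is exactly the directory named in the error.

      sudo port install pcre
      sudo cp /opt/local/include/pcre.h /usr/include/php/ext/pcre/
      sudo pecl install oauth
      # The missing piece is only the header file; the PCRE library itself is not
      # needed for the oauth extension to compile.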

    Read the article

  • Azure WNS to Win8 - Push Notifications for Metro Apps

    - by JoshReuben
    Background The Windows Azure Toolkit for Windows 8 allows you to build a Windows Azure Cloud Service that can send Push Notifications to registered Metro apps via Windows Notification Service (WNS). Some configuration is required - you need to: Register the Metro app for Windows Live Application Management Provide Package SID & Client Secret to WNS Modify the Azure Cloud App cscfg file and the Metro app package.appxmanifest file to contain matching Metro package name, SID and client secret. The Mechanism: These notifications take the form of XAML Tile, Toast, Raw or Badge UI notifications. The core engine is provided via the WNS nuget recipe, which exposes an API for constructing payloads and posting notifications to WNS. An application receives push notifications by requesting a notification channel from WNS, which returns a channel URI that the application then registers with a cloud service. In the cloud service, A WnsAccessTokenProvider authenticates with WNS by providing its credentials, the package SID and secret key, and receives in return an access token that the provider caches and can reuse for multiple notification requests. The cloud service constructs a notification request by filling out a template class that contains the information that will be sent with the notification, including text and image references. Using the channel URI of a registered client, the cloud service can then send a notification whenever it has an update for the user. The package contains the NotificationSendUtils class for submitting notifications. The Windows Azure Toolkit for Windows 8 (WAT) provides the PNWorker sample pair of solutions - The Azure server side contains a WebRole & a WorkerRole. The WebRole allows submission of new push notifications into an Azure Queue which the WorkerRole extracts and processes. Further background resources: http://watwindows8.codeplex.com/ - Windows Azure Toolkit for Windows 8 http://watwindows8.codeplex.com/wikipage?title=Push%20Notification%20Worker%20Sample - WAT WNS sample setup http://watwindows8.codeplex.com/wikipage?title=Using%20the%20Windows%208%20Cloud%20Application%20Services%20Application – using Windows 8 with Cloud Application Services A bit of Configuration Register the Metro apps for Windows Live Application Management From the current app manifest of your metro app Publish tab, copy the Package Display Name and the Publisher From: https://manage.dev.live.com/Build/ Package name: <-- we need to change this Client secret: keep this Package Security Identifier (SID): keep this Verify the app here: https://manage.dev.live.com/Applications/Index - so this step is done "If you wish to send push notifications in your application, provide your Package Security Identifier (SID) and client secret to WNS." Provide Package SID & Client Secret to WNS http://msdn.microsoft.com/en-us/library/windows/apps/hh465407.aspx - How to authenticate with WNS https://appdev.microsoft.com/StorePortals/en-us/Account/Signup/PurchaseSubscription - register app with dashboard - need registration code or register a new account & pay $170 shekels http://msdn.microsoft.com/en-us/library/windows/apps/hh868184.aspx - Registering for a Windows Store developer account http://msdn.microsoft.com/en-us/library/windows/apps/hh868187.aspx - Picking a Microsoft account for the Windows Store The WNS Nuget Recipe The WNS Recipe is a nuget package that provides an API for authenticating against WNS, constructing payloads and posting notifications to WNS. 
After installing the WnsRecipe NuGet package, a WnsRecipe assembly is added to project references. To send notifications using WNS, first register the application at the Windows Push Notifications & Live Connect portal to obtain the Package Security Identifier (SID) and a secret key that your cloud service uses to authenticate with WNS.

An application receives push notifications by requesting a notification channel from WNS, which returns a channel URI that the application then registers with a cloud service. In the cloud service, the WnsAccessTokenProvider authenticates with WNS by providing its credentials, the package SID and secret key, and receives in return an access token that the provider caches and can reuse for multiple notification requests. The cloud service constructs a notification request by filling out a template class that contains the information that will be sent with the notification, including text and image references. Using the channel URI of a registered client, the cloud service can then send a notification whenever it has an update for the user.

    var provider = new WnsAccessTokenProvider(clientId, clientSecret);
    var notification = new ToastNotification(provider)
    {
        ToastType = ToastType.ToastText02,
        Text = new List<string> { "blah" }
    };
    notification.Send(channelUri);

The WNS Recipe is instrumented to write trace information via a trace listener - either through configuration or programmatically from Application_Start():

    WnsDiagnostics.Enable();
    WnsDiagnostics.TraceSource.Listeners.Add(new DiagnosticMonitorTraceListener());
    WnsDiagnostics.TraceSource.Switch.Level = SourceLevels.Verbose;

The WAT PNWorker Sample

The Azure server side contains a WebRole & a WorkerRole. The WebRole allows submission of new push notifications into an Azure Queue which the WorkerRole extracts and processes.

Overview of Push Notification Worker Sample

The toolkit includes a sample application based on the same solution structure as the one created by the Windows 8 Cloud Application Services project template. The sample demonstrates how to off-load the job of sending Windows Push Notifications using a Windows Azure worker role. You can find the source code in the Samples\PNWorker folder. This folder contains a full version of the sample application showing how to use Windows Push Notifications using ASP.NET Membership as the authentication mechanism. The sample contains two different solution files:
- WATWindows.Azure.sln: This solution must be opened with Visual Studio 2010 and contains the projects related to the Windows Azure web and worker roles.
- WATWindows.Client.sln: This solution must be opened with Visual Studio 11 and contains the Windows Metro style application project.
Only Visual Studio 2010 supports Windows Azure cloud projects, so you currently need to use this edition to launch the server application. This will change in a future release of the Windows Azure tools when support for Visual Studio 11 is enabled.

Important: Setting up the PNWorker Sample

Before running the PNWorker sample, you need to register the application and configure it:
1. Register the app: go to the Windows Live Application Management site for Metro style apps at https://manage.dev.live.com/build and sign in with your Windows Live ID. In the Windows Push Notifications & Live Connect page, enter the following information:
   Package Display Name: PNWorker.Sample
   Publisher: CN=127.0.0.1, O=TESTING ONLY, OU=Windows Azure DevFabric
2. Once you register the application, make a note of the values shown in the portal for Client Secret, Package Name and Package SID.
3. Configure the app: double-click the SetupSample.cmd file located inside the Samples\PNWorker folder to launch a tool that will guide you through the process of configuring the sample. Setup runs a PowerShell script that requires administration privileges in order to execute on your machine. When prompted, enter the Client Secret, Package Name, and Package Security Identifier you obtained previously and wait until the tool finishes configuring your sample.

Running the PNWorker Sample

To run this sample, you must run both the client and the server application projects.
1. Open Visual Studio 2010 as an administrator. Open the WATWindows.Azure.sln solution. Set the start-up project of the solution as the cloud project. Run the app in the dev fabric to test.
2. Open Visual Studio 11 and open the WATWindows.Client.sln solution. Run the Metro client application. In the client application, click Reopen channel and send to server -> the application opens the channel and registers it with the cloud application, & the Output area shows the channel URI.
3. Refresh the WebRole's Push Notifications page to see the UI list the newly registered client.
4. Send notifications to the client application by clicking the Send Notification button.

Setup

3 command files + 1 PowerShell script: SetupSample.cmd -> SetupWPNS.vbs -> SetupWPNS.cmd -> SetupWPNS.UpdateWPNSCredentialsInServiceConfiguration.ps1. This chain appears to set:
- PackageName - from the manifest
- Client Id / package security id (SID) - from registration
- Client Secret - from registration
The following configs are modified:
- WATWindows\ServiceConfiguration.Cloud.cscfg
- WATWindows\ServiceConfiguration.Local.cscfg
- WATWindows.Client\package.appxmanifest

WatWindows.Notifications

A class library - it references the following WNS DLL: C:\WorkDev\CountdownValue\AzureToolkits\WATWindows8\Samples\PNWorker\packages\WnsRecipe.0.0.3.0\lib\net40\WnsRecipe.dll

NotificationJobRequest - a DataContract for triggering notifications:

    using System.Runtime.Serialization;
    using Microsoft.Windows.Samples.Notifications;

    [DataContract]
    [KnownType(typeof(WnsAccessTokenProvider))]
    public class NotificationJobRequest
    {
        [DataMember] public bool ProcessAsync { get; set; }
        [DataMember] public string Payload { get; set; }
        [DataMember] public string ChannelUrl { get; set; }
        [DataMember] public NotificationType NotificationType { get; set; }
        [DataMember] public IAccessTokenProvider AccessTokenProvider { get; set; }
        [DataMember] public NotificationSendOptions NotificationSendOptions { get; set; }
    }

Investigated these types:
- WnsAccessTokenProvider - a DataContract that contains the client Id and client secret
- NotificationType - an enum that can be: Tile, Toast, Badge, Raw
- IAccessTokenProvider - get or reset the access token
- NotificationSendOptions - SecondsTTL, NotificationPriority (enum), isCache, isRequestForStatus, Tag

There is also a NotificationJobSerializer class, which basically wraps DataContractSerializer serialization / deserialization of NotificationJobRequest.

The WNSNotificationJobProcessor class

This class wraps the NotificationSendUtils API - it periodically extracts NotificationJobRequest objects from a CloudQueue and submits them to WNS.
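Before moving on to the processor's code, here is a hedged sketch of the producer side - what the WebRole's QueuePushMessage helper (referenced later) presumably boils down to: build a NotificationJobRequest, serialize it, and drop it on the Azure queue. The field names follow the DataContract above; the helper signature, the serializer method name and the field wiring are assumptions, since the actual implementation isn't reproduced in this post.

    // Assumed sketch of the WebRole's QueuePushMessage helper.
    // Requires the WnsRecipe types plus Microsoft.WindowsAzure.StorageClient for CloudQueue / CloudQueueMessage.
    private void QueuePushMessage(string payload, string channelUrl,
        NotificationType notificationType, NotificationSendOptions options)
    {
        var job = new NotificationJobRequest
        {
            ChannelUrl = channelUrl,
            Payload = payload,
            NotificationType = notificationType,
            AccessTokenProvider = this.tokenProvider,   // WnsAccessTokenProvider, shown later in the controller
            NotificationSendOptions = options,
            ProcessAsync = false
        };

        // Serializer method name is an assumption; the class wraps DataContractSerializer.
        string message = this.notificationJobSerializer.Serialize(job);
        this.pushNotificationRequestsQueue.AddMessage(new CloudQueueMessage(message));
    }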
The ProcessJobMessageRequest method – this is the punchline: it will deserialize a CloudQueueMessage into a NotificationJobRequest & send pass its contents to NotificationUtils to SendAsynchronously / SendSynchronously, (and then dequeue the message).     public override void ProcessJobMessageRequest(CloudQueueMessage notificationJobMessageRequest)         { Trace.WriteLine("Processing a new Notification Job Request", "Information"); NotificationJobRequest pushNotificationJob =                 NotificationJobSerializer.Deserialize(notificationJobMessageRequest.AsString); if (pushNotificationJob != null)             { if (pushNotificationJob.ProcessAsync)                 { Trace.WriteLine("Sending the notification asynchronously", "Information"); NotificationSendUtils.SendAsynchronously( new Uri(pushNotificationJob.ChannelUrl),                         pushNotificationJob.AccessTokenProvider,                         pushNotificationJob.Payload,                         result => this.ProcessSendResult(pushNotificationJob, result),                         result => this.ProcessSendResultError(pushNotificationJob, result),                         pushNotificationJob.NotificationType,                         pushNotificationJob.NotificationSendOptions);                 } else                 { Trace.WriteLine("Sending the notification synchronously", "Information"); NotificationSendResult result = NotificationSendUtils.Send( new Uri(pushNotificationJob.ChannelUrl),                         pushNotificationJob.AccessTokenProvider,                         pushNotificationJob.Payload,                         pushNotificationJob.NotificationType,                         pushNotificationJob.NotificationSendOptions); this.ProcessSendResult(pushNotificationJob, result);                 }             } else             { Trace.WriteLine("Could not deserialize the notification job", "Error");             } this.queue.DeleteMessage(notificationJobMessageRequest);         } Investigation of NotificationSendUtils class - This is the engine – it exposes Send and a SendAsyncronously overloads that take the following params from the NotificationJobRequest: Channel Uri AccessTokenProvider Payload NotificationType NotificationSendOptions WebRole WebRole is a large MVC project – it references WatWindows.Notifications as well as the following WNS DLL: \AzureToolkits\WATWindows8\Samples\PNWorker\packages\WnsRecipe.0.0.3.0\lib\net40\NotificationsExtensions.dll Controllers\PushNotificationController.cs Notification related namespaces:     using Notifications;     using NotificationsExtensions;     using NotificationsExtensions.BadgeContent;     using NotificationsExtensions.RawContent;     using NotificationsExtensions.TileContent;     using NotificationsExtensions.ToastContent;     using Windows.Samples.Notifications; TokenProvider – initialized from the Azure RoleEnvironment:   IAccessTokenProvider tokenProvider = new WnsAccessTokenProvider(         RoleEnvironment.GetConfigurationSettingValue("WNSPackageSID"),         RoleEnvironment.GetConfigurationSettingValue("WNSClientSecret")); SendNotification method – calls QueuePushMessage method to create and serialize a NotificationJobRequest and enqueue it in a CloudQueue [HttpPost]         public ActionResult SendNotification(             [ModelBinder(typeof(NotificationTemplateModelBinder))] INotificationContent notification,             string channelUrl,             NotificationPriority priority = NotificationPriority.Normal)         {             var payload = 
notification.GetContent();             var options = new NotificationSendOptions()             {                 Priority = priority             };             var notificationType =                 notification is IBadgeNotificationContent ? NotificationType.Badge :                 notification is IRawNotificationContent ? NotificationType.Raw :                 notification is ITileNotificationContent ? NotificationType.Tile :                 NotificationType.Toast;             this.QueuePushMessage(payload, channelUrl, notificationType, options);             object response = new             {                 Status = "Queued for delivery to WNS"             };             return this.Json(response);         } GetSendTemplate method: Create the cshtml partial rendering based on the notification type     [HttpPost]         public ActionResult GetSendTemplate(NotificationTemplateViewModel templateOptions)         {             PartialViewResult result = null;             switch (templateOptions.NotificationType)             {                 case "Badge":                     templateOptions.BadgeGlyphValueContent = Enum.GetNames(typeof( GlyphValue));                     ViewBag.ViewData = templateOptions;                     result = PartialView("_" + templateOptions.NotificationTemplateType);                     break;                 case "Raw":                     ViewBag.ViewData = templateOptions;                     result = PartialView("_Raw");                     break;                 case "Toast":                     templateOptions.TileImages = this.blobClient.GetAllBlobsInContainer(ConfigReader.GetConfigValue("TileImagesContainer")).OrderBy(i => i.FileName).ToList();                     templateOptions.ToastAudioContent = Enum.GetNames(typeof( ToastAudioContent));                     templateOptions.Priorities = Enum.GetNames(typeof( NotificationPriority));                     ViewBag.ViewData = templateOptions;                     result = PartialView("_" + templateOptions.NotificationTemplateType);                     break;                 case "Tile":                     templateOptions.TileImages = this.blobClient.GetAllBlobsInContainer(ConfigReader.GetConfigValue("TileImagesContainer")).OrderBy(i => i.FileName).ToList();                     ViewBag.ViewData = templateOptions;                     result = PartialView("_" + templateOptions.NotificationTemplateType);                     break;             }             return result;         } Investigated these types: ToastAudioContent – an enum of different Win8 sound effects for toast notifications GlyphValue – an enum of different Win8 icons for badge notifications · Infrastructure\NotificationTemplateModelBinder.cs WNS Namespace references     using NotificationsExtensions.BadgeContent;     using NotificationsExtensions.RawContent;     using NotificationsExtensions.TileContent;     using NotificationsExtensions.ToastContent; Various NotificationFactory derived types can server as bindable models in MVC for creating INotificationContent types. Default values are also set for IWideTileNotificationContent & IToastNotificationContent. 
Type factoryType = null;             switch (notificationType)             {                 case "Badge":                     factoryType = typeof(BadgeContentFactory);                     break;                 case "Tile":                     factoryType = typeof(TileContentFactory);                     break;                 case "Toast":                     factoryType = typeof(ToastContentFactory);                     break;                 case "Raw":                     factoryType = typeof(RawContentFactory);                     break;             } Investigated these types: BadgeContentFactory – CreateBadgeGlyph, CreateBadgeNumeric (???) TileContentFactory – many notification content creation methods , apparently one for every tile layout type ToastContentFactory – many notification content creation methods , apparently one for every toast layout type RawContentFactory – passing strings WorkerRole WNS Namespace references using Notifications; using Notifications.WNS; using Windows.Samples.Notifications; OnStart() Method – on Worker Role startup, initialize the NotificationJobSerializer, the CloudQueue, and the WNSNotificationJobProcessor _notificationJobSerializer = new NotificationJobSerializer(); _cloudQueueClient = this.account.CreateCloudQueueClient(); _pushNotificationRequestsQueue = _cloudQueueClient.GetQueueReference(ConfigReader.GetConfigValue("RequestQueueName")); _processor = new WNSNotificationJobProcessor(_notificationJobSerializer, _pushNotificationRequestsQueue); Run() Method – poll the Azure Queue for NotificationJobRequest messages & process them:   while (true)             { Trace.WriteLine("Checking for Messages", "Information"); try                 { Parallel.ForEach( this.pushNotificationRequestsQueue.GetMessages(this.batchSize), this.processor.ProcessJobMessageRequest);                 } catch (Exception e)                 { Trace.WriteLine(e.ToString(), "Error");                 } Trace.WriteLine(string.Format("Sleeping for {0} seconds", this.pollIntervalMiliseconds / 1000)); Thread.Sleep(this.pollIntervalMiliseconds);                                            } How I learned to appreciate Win8 There is really only one application architecture for Windows 8 apps: Metro client side and Azure backend – and that is a good thing. With WNS, tier integration is so automated that you don’t even have to leverage a HTTP push API such as SignalR. This is a pretty powerful development paradigm, and has changed the way I look at Windows 8 for RAD business apps. When I originally looked at Win8 and the WinRT API, my first opinion on Win8 dev was as follows – GOOD:WinRT, WRL, C++/CX, WinJS, XAML (& ease of Direct3D integration); BAD: low projected market penetration,.NET lobotomized (Only 8% of .NET 4.5 classes can be used in Win8 non-desktop apps - http://bit.ly/HRuJr7); UGLY:Metro pascal tiles! Perhaps my 80s teenage years gave me a punk reactionary sense of revulsion towards the Partridge Family 70s style that Metro UX seems to have appropriated: On second thought though, it simplifies UI dev to a single paradigm (although UX guys will need to change career) – you will not find an easier app dev environment. Speculation: If LightSwitch is going to support HTML5 client app generation, then its a safe guess to say that vnext will support Win8 Metro XAML - a much easier port from Silverlight XAML. 
Given the VS2012 LightSwitch integration as a thumbs up from the powers that be at MS, and given that Win8 C#/XAML Metro apps tend towards a streamlined 'golden straight-jacket' cookie cutter app dev style with an Azure back-end supporting Win8 push notifications... --> it's easy to extrapolate that LightSwitch vnext could well be the Win8 Metro XAML to Azure RAD tool of choice! The hook is already there - :) Why else have the space next to the HTML Client box? This high level of application development abstraction will facilitate rapid cookie-cutter architecture-infrastructure frameworks for wrapping any app. This will allow me to avoid too much XAML code-monkeying around & focus on my area of interest: Technical Computing.

    Read the article

  • MySQL query, 2 similar servers, 2 minute difference in execution times

    - by mr12086
    I had a similar question on stack overflow, but it seems to be more server/mysql setup related than coding. The queries below all execute instantly on our development server where as they can take upto 2 minutes 20 seconds. The query execution time seems to be affected by home ambiguous the LIKE string's are. If they closely match a country that has few matches it will take less time, and if you use something like 'ge' for germany - it will take longer to execute. But this doesn't always work out like that, at times its quite erratic. Sending data appears to be the culprit but why and what does that mean. Also memory on production looks to be quite low (free memory)? Production: Intel Quad Xeon E3-1220 3.1GHz 4GB DDR3 2x 1TB SATA in RAID1 Network speed 100Mb Ubuntu Development Intel Core i3-2100, 2C/4T, 3.10GHz 500 GB SATA - No RAID 4GB DDR3 UPDATE 2 : mysqltuner output: [prod] -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.61-0ubuntu0.10.04.1 [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 103M (Tables: 180) [--] Data in InnoDB tables: 491M (Tables: 19) [!!] Total fragmented tables: 38 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 77d 4h 6m 1s (53M q [7.968 qps], 14M conn, TX: 87B, RX: 12B) [--] Reads / Writes: 98% / 2% [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads) [OK] Maximum possible memory usage: 463.8M (11% of installed RAM) [OK] Slow queries: 0% (12K/53M) [OK] Highest usage of available connections: 22% (34/151) [OK] Key buffer size / total MyISAM indexes: 16.0M/10.6M [OK] Key buffer hit rate: 98.7% (162M cached / 2M reads) [OK] Query cache efficiency: 20.7% (7M cached / 36M selects) [!!] Query cache prunes per day: 3934 [OK] Sorts requiring temporary tables: 1% (3K temp sorts / 230K sorts) [!!] Joins performed without indexes: 71068 [OK] Temporary tables created on disk: 24% (3M on disk / 13M total) [OK] Thread cache hit rate: 99% (690 created / 14M connections) [!!] Table cache hit rate: 0% (64 open / 85M opened) [OK] Open file limit used: 12% (128/1K) [OK] Table locks acquired immediately: 99% (16M immediate / 16M locks) [!!] InnoDB data size / buffer pool: 491.9M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Enable the slow query log to troubleshoot bad queries Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Variables to adjust: query_cache_size (> 16M) join_buffer_size (> 128.0K, or always use indexes with joins) table_cache (> 64) innodb_buffer_pool_size (>= 491M) [dev] -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.62-0ubuntu0.11.10.1 [!!] 
Switch to 64-bit OS - MySQL cannot currently use all of your RAM -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 185M (Tables: 632) [--] Data in InnoDB tables: 967M (Tables: 38) [!!] Total fragmented tables: 73 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 1d 2h 26m 9s (5K q [0.058 qps], 1K conn, TX: 4M, RX: 1M) [--] Reads / Writes: 99% / 1% [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads) [OK] Maximum possible memory usage: 463.8M (11% of installed RAM) [OK] Slow queries: 0% (0/5K) [OK] Highest usage of available connections: 1% (2/151) [OK] Key buffer size / total MyISAM indexes: 16.0M/18.6M [OK] Key buffer hit rate: 99.9% (60K cached / 36 reads) [OK] Query cache efficiency: 44.5% (1K cached / 2K selects) [OK] Query cache prunes per day: 0 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 44 sorts) [OK] Temporary tables created on disk: 24% (162 on disk / 666 total) [OK] Thread cache hit rate: 99% (2 created / 1K connections) [!!] Table cache hit rate: 1% (64 open / 4K opened) [OK] Open file limit used: 8% (88/1K) [OK] Table locks acquired immediately: 100% (1K immediate / 1K locks) [!!] InnoDB data size / buffer pool: 967.7M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Enable the slow query log to troubleshoot bad queries Increase table_cache gradually to avoid file descriptor limits Variables to adjust: table_cache (> 64) innodb_buffer_pool_size (>= 967M) UPDATE 1: When testing the queries listed here there is usually no more than one other query taking place, and usually none. Because production is actually handling apache requests that development gets very few of as it's only myself and 1 other who accesses it - could the 4GB of RAM be getting exhausted by using the single machine for both apache and mysql server? Production: sudo hdparm -tT /dev/sda /dev/sda: Timing cached reads: 24872 MB in 2.00 seconds = 12450.72 MB/sec Timing buffered disk reads: 368 MB in 3.00 seconds = 122.49 MB/sec sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 24786 MB in 2.00 seconds = 12407.22 MB/sec Timing buffered disk reads: 350 MB in 3.00 seconds = 116.53 MB/sec Server version(mysql + ubuntu versions): 5.1.61-0ubuntu0.10.04.1 Development: sudo hdparm -tT /dev/sda /dev/sda: Timing cached reads: 10632 MB in 2.00 seconds = 5319.40 MB/sec Timing buffered disk reads: 400 MB in 3.01 seconds = 132.85 MB/sec Server version(mysql + ubuntu versions): 5.1.62-0ubuntu0.11.10.1 ORIGINAL DATA : This query is NOT the query in question but is related so ill post it. 
SELECT f.form_question_has_answer_id FROM form_question_has_answer f INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29') AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%') AND f.form_question_has_answer_form_id = '174' And the explain plan for the above query is, run on both dev and production produce the same plan. +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ | 1 | SIMPLE | p2 | const | PRIMARY | PRIMARY | 4 | const | 1 | Using index | | 1 | SIMPLE | f | ref | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | const | 796 | Using where | | 1 | SIMPLE | u | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using index | | 1 | SIMPLE | p | ref | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where | | 1 | SIMPLE | f2 | ref | form_project_id | form_project_id | 4 | const | 15 | Using where | | 1 | SIMPLE | c | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ This query takes 2 minutes ~20 seconds to execute. 
The query that is ACTUALLY being run on the server is this one: SELECT COUNT(*) AS num_results FROM (SELECT f.form_question_has_answer_id FROM form_question_has_answer f INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29') AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%') AND f.form_question_has_answer_form_id = '174' GROUP BY f.form_question_has_answer_id;) dctrn_count_query; With explain plans (again same on dev and production): +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ | 1 | PRIMARY | NULL | NULL | NULL | NULL | NULL | NULL | NULL | Select tables optimized away | | 2 | DERIVED | p2 | const | PRIMARY | PRIMARY | 4 | | 1 | Using index | | 2 | DERIVED | f | ref | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | | 797 | Using where | | 2 | DERIVED | p | ref | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id,project_company_has_user_garbage_collection | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where | | 2 | DERIVED | f2 | ref | form_project_id | form_project_id | 4 | | 15 | Using where | | 2 | DERIVED | c | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where | | 2 | DERIVED | u | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_user_id | 1 | Using where; Using index | +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ On the production server the information I have is as follows. 
Upon execution: +-------------+ | num_results | +-------------+ | 3 | +-------------+ 1 row in set (2 min 14.28 sec) Show profile: +--------------------------------+------------+ | Status | Duration | +--------------------------------+------------+ | starting | 0.000016 | | checking query cache for query | 0.000057 | | Opening tables | 0.004388 | | System lock | 0.000003 | | Table lock | 0.000036 | | init | 0.000030 | | optimizing | 0.000016 | | statistics | 0.000111 | | preparing | 0.000022 | | executing | 0.000004 | | Sorting result | 0.000002 | | Sending data | 136.213836 | | end | 0.000007 | | query end | 0.000002 | | freeing items | 0.004273 | | storing result in query cache | 0.000010 | | logging slow query | 0.000001 | | logging slow query | 0.000002 | | cleaning up | 0.000002 | +--------------------------------+------------+ On development the results are as follows. +-------------+ | num_results | +-------------+ | 3 | +-------------+ 1 row in set (0.08 sec) Again the profile for this query: +--------------------------------+----------+ | Status | Duration | +--------------------------------+----------+ | starting | 0.000022 | | checking query cache for query | 0.000148 | | Opening tables | 0.000025 | | System lock | 0.000008 | | Table lock | 0.000101 | | optimizing | 0.000035 | | statistics | 0.001019 | | preparing | 0.000047 | | executing | 0.000008 | | Sorting result | 0.000005 | | Sending data | 0.086565 | | init | 0.000015 | | optimizing | 0.000006 | | executing | 0.000020 | | end | 0.000004 | | query end | 0.000004 | | freeing items | 0.000028 | | storing result in query cache | 0.000005 | | removing tmp table | 0.000008 | | closing tables | 0.000008 | | logging slow query | 0.000002 | | cleaning up | 0.000005 | +--------------------------------+----------+ If i remove user and/or project innerjoins the query is reduced to 30s. Last bit of information I have: Mysqlserver and Apache are on the same box, there is only one box for production. Production output from top: before & after. top - 15:43:25 up 78 days, 12:11, 4 users, load average: 1.42, 0.99, 0.78 Tasks: 162 total, 2 running, 160 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 50.4%sy, 0.0%ni, 49.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3772580k used, 265288k free, 243704k buffers Swap: 3905528k total, 265384k used, 3640144k free, 1207944k cached top - 15:44:31 up 78 days, 12:13, 4 users, load average: 1.94, 1.23, 0.87 Tasks: 160 total, 2 running, 157 sleeping, 0 stopped, 1 zombie Cpu(s): 0.2%us, 50.6%sy, 0.0%ni, 49.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3834300k used, 203568k free, 243736k buffers Swap: 3905528k total, 265384k used, 3640144k free, 1207804k cached But this isn't a good representation of production's normal status so here is a grab of it from today outside of executing the queries. top - 11:04:58 up 79 days, 7:33, 4 users, load average: 0.39, 0.58, 0.76 Tasks: 156 total, 1 running, 155 sleeping, 0 stopped, 0 zombie Cpu(s): 3.3%us, 2.8%sy, 0.0%ni, 93.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3676136k used, 361732k free, 271480k buffers Swap: 3905528k total, 268736k used, 3636792k free, 1063432k cached Development: This one doesn't change during or after. 
top - 15:47:07 up 110 days, 22:11, 7 users, load average: 0.17, 0.07, 0.06 Tasks: 210 total, 2 running, 208 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4111972k total, 1821100k used, 2290872k free, 238860k buffers Swap: 4183036k total, 66472k used, 4116564k free, 921072k cached

    Read the article

  • Load balancing with multiple gateways

    - by ttouch
    I have to different ISPs, each on each own network. The main connects via ethernet and the secondary via wifi. The two networks have no relation at all. I just connect to them simultaneously. The reason I want to load balance between them is to achieve higher Internet speeds. Note: I have no advanced network hardware. Just my pc and the two routers that I have no access... main network: if: eth0 gw: 192.168.178.1 my ip: 192.168.178.95 speed: 400 kbit/s secondary network: if: wlan0 gw: 192.168.1.1 my ip: 192.168.1.95 speed: 300 kbit/s A diagram to explain the situation: http://i.imgur.com/NZdsv.jpg I'm on Arch Linux x64. I use netcfg to configure the interfaces Configs: # /etc/network.d/main CONNECTION='ethernet' DESCRIPTION='A basic static ethernet connection using iproute' INTERFACE='eth0' IP='static' ADDR='192.168.178.95' # /etc/network.d/second CONNECTION='wireless' DESCRIPTION='A simple WEP encrypted wireless connection' INTERFACE='wlan0' SECURITY='wep' ESSID='wifi_essid' KEY='the_password' IP="static" ADDR='192.168.1.95' And I use iptables to load balance, rules: #!/bin/bash /usr/sbin/ip route flush table ISP1 2>/dev/null /usr/sbin/ip rule del fwmark 101 table ISP1 2>/dev/null /usr/sbin/ip route add table ISP1 192.168.178.0/24 dev eth0 proto kernel scope link src 192.168.178.95 metric 202 /usr/sbin/ip route add table ISP1 default via 192.168.178.1 dev eth0 /usr/sbin/ip rule add fwmark 101 table ISP1 /usr/sbin/ip route flush table ISP2 2>/dev/null /usr/sbin/ip rule del fwmark 102 table ISP2 2>/dev/null /usr/sbin/ip route add table ISP2 192.168.1.0/24 dev wlan0 proto kernel scope link src 192.168.1.95 metric 202 /usr/sbin/ip route add table ISP2 default via 192.168.1.1 dev wlan0 /usr/sbin/ip rule add fwmark 102 table ISP2 /usr/sbin/iptables -t mangle -F /usr/sbin/iptables -t mangle -X /usr/sbin/iptables -t mangle -N MARK-gw1 /usr/sbin/iptables -t mangle -A MARK-gw1 -m comment --comment 'send via 192.168.178.1' -j MARK --set-mark 101 /usr/sbin/iptables -t mangle -A MARK-gw1 -j CONNMARK --save-mark /usr/sbin/iptables -t mangle -A MARK-gw1 -j RETURN /usr/sbin/iptables -t mangle -N MARK-gw2 /usr/sbin/iptables -t mangle -A MARK-gw2 -m comment --comment 'send via 192.168.1.1' -j MARK --set-mark 102 /usr/sbin/iptables -t mangle -A MARK-gw2 -j CONNMARK --save-mark /usr/sbin/iptables -t mangle -A MARK-gw2 -j RETURN /usr/sbin/iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark /usr/sbin/iptables -t mangle -A PREROUTING -m comment --comment "this stream is already marked; escape early" -m mark ! 
--mark 0 -j ACCEPT /usr/sbin/iptables -t mangle -A PREROUTING -m comment --comment 'prevent asynchronous routing' -i eth0 -m conntrack --ctstate NEW -j MARK-gw1 /usr/sbin/iptables -t mangle -A PREROUTING -m comment --comment 'prevent asynchronous routing' -i wlan0 -m conntrack --ctstate NEW -j MARK-gw2 /usr/sbin/iptables -t mangle -N DEF_POL /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'default balancing' -p tcp -m conntrack --ctstate ESTABLISHED,RELATED -j CONNMARK --restore-mark /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'default balancing' -p udp -m conntrack --ctstate ESTABLISHED,RELATED -j CONNMARK --restore-mark /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw1 tcp' -p tcp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 0 -j MARK-gw1 /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw1 tcp' -p tcp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 0 -j ACCEPT /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw2 tcp' -p tcp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 1 -j MARK-gw2 /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw2 tcp' -p tcp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 1 -j ACCEPT /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw1 udp' -p udp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 0 -j MARK-gw1 /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw1 udp' -p udp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 0 -j ACCEPT /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw2 udp' -p udp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 1 -j MARK-gw2 /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw2 udp' -p udp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 1 -j ACCEPT /usr/sbin/iptables -t mangle -A PREROUTING -j DEF_POL /usr/sbin/iptables -t nat -A POSTROUTING -m comment --comment 'snat outbound eth0' -o eth0 -s 192.168.0.0/16 -m mark --mark 101 -j SNAT --to-source 192.168.178.95 /usr/sbin/iptables -t nat -A POSTROUTING -m comment --comment 'snat outbound wlan0' -o wlan0 -s 192.168.0.0/16 -m mark --mark 102 -j SNAT --to-source 192.168.1.95 /usr/sbin/ip route flush cache (this script was made by fukawi2, I don't know how to use iptables) but I have no Internet connection... output of iptables -t mangle -nvL Chain PREROUTING (policy ACCEPT 1254K packets, 1519M bytes) pkts bytes target prot opt in out source destination 1278K 1535M CONNMARK all -- * * 0.0.0.0/0 0.0.0.0/0 CONNMARK restore 21532 15M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 /* this stream is already marked; escape early */ mark match ! 
0x0 582 72579 MARK-gw1 all -- eth0 * 0.0.0.0/0 0.0.0.0/0 /* prevent asynchronous routing */ ctstate NEW 2376 696K MARK-gw2 all -- wlan0 * 0.0.0.0/0 0.0.0.0/0 /* prevent asynchronous routing */ ctstate NEW 1257K 1520M DEF_POL all -- * * 0.0.0.0/0 0.0.0.0/0 Chain INPUT (policy ACCEPT 1276K packets, 1535M bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 870K packets, 97M bytes) pkts bytes target prot opt in out source destination Chain POSTROUTING (policy ACCEPT 870K packets, 97M bytes) pkts bytes target prot opt in out source destination Chain DEF_POL (1 references) pkts bytes target prot opt in out source destination 1236K 1517M CONNMARK tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default balancing */ ctstate RELATED,ESTABLISHED CONNMARK restore 15163 2041K CONNMARK udp -- * * 0.0.0.0/0 0.0.0.0/0 /* default balancing */ ctstate RELATED,ESTABLISHED CONNMARK restore 555 33176 MARK-gw1 tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw1 tcp */ ctstate NEW statistic mode nth every 2 555 33176 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw1 tcp */ ctstate NEW statistic mode nth every 2 277 16516 MARK-gw2 tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw2 tcp */ ctstate NEW statistic mode nth every 2 packet 1 277 16516 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw2 tcp */ ctstate NEW statistic mode nth every 2 packet 1 1442 384K MARK-gw1 udp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw1 udp */ ctstate NEW statistic mode nth every 2 1442 384K ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw1 udp */ ctstate NEW statistic mode nth every 2 720 189K MARK-gw2 udp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw2 udp */ ctstate NEW statistic mode nth every 2 packet 1 720 189K ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw2 udp */ ctstate NEW statistic mode nth every 2 packet 1 Chain MARK-gw1 (3 references) pkts bytes target prot opt in out source destination 2579 490K MARK all -- * * 0.0.0.0/0 0.0.0.0/0 /* send via 192.168.178.1 */ MARK set 0x65 2579 490K CONNMARK all -- * * 0.0.0.0/0 0.0.0.0/0 CONNMARK save 2579 490K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 Chain MARK-gw2 (3 references) pkts bytes target prot opt in out source destination 3373 901K MARK all -- * * 0.0.0.0/0 0.0.0.0/0 /* send via 192.168.1.1 */ MARK set 0x66 3373 901K CONNMARK all -- * * 0.0.0.0/0 0.0.0.0/0 CONNMARK save 3373 901K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0

    Read the article

  • Another sound not working post

    - by Thomas Smart
    Tried all the other "sound not working" posts i think, lost count. purge/reinstall alsa and pulse, reboot, add user to audio group, various lines in the alsa config file such as "options snd-hda-intel model=" then tried different options like generic, auto, basic, default, etc. tried pulseaudio -k && sudo alsa force-reload a few times, with and without rebooting. Hardware: 16gb ram, core I7-4790, Intel Haswell mboard with onboard sound and graphics Multimedia: Audio Adapter: HDA-Intel-HDA Intel HDMI OS: Ubuntu server 14.04 with ubuntu-desktop installed. GUI sound settings lists only the dummy sound card alsamixer -c 0 ¦ Card: HDA Intel HDMI F1: Help ¦ ¦ Chip: Intel Haswell HDMI F2: System information ¦ ¦ View: F3:[Playback] F4: Capture F5: All F6: Select sound card ¦ ¦ Item: S/PDIF ¦ ¦ +--+ ¦ ¦ ¦OO¦ ¦ ¦ +--+ ¦ ¦ < S/PDIF > ¦ aplay -l **** List of PLAYBACK Hardware Devices **** card 0: HDMI [HDA Intel HDMI], device 3: HDMI 0 [HDMI 0] Subdevices: 1/1 Subdevice #0: subdevice #0 aplay -L default Playback/recording through the PulseAudio sound server null Discard all samples (playback) or generate zero samples (capture) pulse PulseAudio Sound Server hdmi:CARD=HDMI,DEV=0 HDA Intel HDMI, HDMI 0 HDMI Audio Output dmix:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 Direct sample mixing device dsnoop:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 Direct sample snooping device hw:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 Direct hardware device without any conversions plughw:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 Hardware device with all software conversions cat /proc/asound/cards 0 [HDMI ]: HDA-Intel - HDA Intel HDMI HDA Intel HDMI at 0xf7d14000 irq 46 cat /proc/asound/devices 1: : sequencer 2: [ 0- 3]: digital audio playback 3: [ 0- 0]: hardware dependent 4: [ 0] : control 33: : timer mplayer -ao alsa:device=hdmi /usr/share/sounds/ubuntu/stereo/system-ready.ogg MPlayer 1.1-4.8 (C) 2000-2012 MPlayer Team mplayer: could not connect to socket mplayer: No such file or directory Failed to open LIRC support. You will not be able to use your remote control. Playing /usr/share/sounds/ubuntu/stereo/system-ready.ogg. libavformat version 54.20.4 (external) Mismatching header version 54.20.3 libavformat file format detected. 
[lavf] stream 0: audio (vorbis), -aid 0 Load subtitles in /usr/share/sounds/ubuntu/stereo/ ========================================================================== Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders libavcodec version 54.35.0 (external) AUDIO: 44100 Hz, 1 ch, floatle, 80.0 kbit/5.67% (ratio: 10000->176400) Selected audio codec: [ffvorbis] afm: ffmpeg (FFmpeg Vorbis) ========================================================================== [AO_ALSA] alsa-lib: confmisc.c:768:(parse_card) cannot find card '1' [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_card_driver returned error: No such file or directory [AO_ALSA] alsa-lib: confmisc.c:392:(snd_func_concat) error evaluating strings [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory [AO_ALSA] alsa-lib: confmisc.c:1251:(snd_func_refer) error evaluating name [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory [AO_ALSA] alsa-lib: conf.c:4727:(snd_config_expand) Evaluate error: No such file or directory [AO_ALSA] alsa-lib: pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM hdmi [AO_ALSA] Playback open error: No such file or directory Failed to initialize audio driver 'alsa:device=hdmi' Could not open/initialize audio device -> no sound. Audio: no sound Video: no video Exiting... (End of file) mplayer -ao alsa:device=hw=0.3 /usr/share/sounds/ubuntu/stereo/system-ready.ogg MPlayer 1.1-4.8 (C) 2000-2012 MPlayer Team mplayer: could not connect to socket mplayer: No such file or directory Failed to open LIRC support. You will not be able to use your remote control. Playing /usr/share/sounds/ubuntu/stereo/system-ready.ogg. libavformat version 54.20.4 (external) Mismatching header version 54.20.3 libavformat file format detected. [lavf] stream 0: audio (vorbis), -aid 0 Load subtitles in /usr/share/sounds/ubuntu/stereo/ ========================================================================== Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders libavcodec version 54.35.0 (external) AUDIO: 44100 Hz, 1 ch, floatle, 80.0 kbit/5.67% (ratio: 10000->176400) Selected audio codec: [ffvorbis] afm: ffmpeg (FFmpeg Vorbis) ========================================================================== [AO_ALSA] Format floatle is not supported by hardware, trying default. AO: [alsa] 44100Hz 2ch s16le (2 bytes per sample) Video: no video Starting playback... A: 0.4 (00.4) of 0.8 (00.7) 0.1% Exiting... (End of file) Thank you for your time and help :)

    Read the article

  • Integrated webcam in lenovo t410 not working with 12.04

    - by kristianp
    I have a Lenovo T410 with an inbuilt webcam and I haven't been able to get the webcam working. I tried skype, cheese, both just give me a black window. The microphone works fine with skype, by the way. Can anyone provide any clues please? The webcam is enabled in the bios, but there is no light indicating the webcam is on (not sure if there should be, though). I tried this on Kubuntu 11.10 and have upgraded to 12.04 with the same results. The Fn-F6 keyboard combination doens't seem to do anything either. EDIT: I got the webcam replaced, it looks like it was a hardware problem, because it works fine now. Thanks guys. $ ls /dev/v4l/* /dev/v4l/by-id: usb-Chicony_Electronics_Co.__Ltd._Integrated_Camera-video-index0 /dev/v4l/by-path: pci-0000:00:1a.0-usb-0:1.6:1.0-video-index0 And lsusb: $ lsusb Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 003: ID 147e:2016 Upek Biometric Touchchip/Touchstrip Fingerprint Sensor Bus 001 Device 004: ID 0a5c:217f Broadcom Corp. Bluetooth Controller Bus 001 Device 005: ID 17ef:480f Lenovo Integrated Webcam [R5U877] Bus 002 Device 003: ID 05c6:9204 Qualcomm, Inc. Bus 002 Device 004: ID 17ef:1003 Lenovo Integrated Smart Card Reader Here is the output from guvcview, minus lots of lines describing the available capture formats. It says "unable to start with minimum setup. Please reconnect your camera.". guvcview 1.5.3 ALSA lib pcm_dmix.c:1018:(snd_pcm_dmix_open) unable to open slave ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5) ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5) ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5) ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5) ALSA lib pcm_dmix.c:957:(snd_pcm_dmix_open) The dmix plugin supports only playback stream ALSA lib pcm_dmix.c:1018:(snd_pcm_dmix_open) unable to open slave Cannot connect to server socket err = No such file or directory Cannot connect to server socket jack server is not running or cannot be started video device: /dev/video0 Init. Integrated Camera (location: usb-0000:00:1a.0-1.6) { pixelformat = 'YUYV', description = 'YUV 4:2:2 (YUYV)' } { discrete: width = 640, height = 480 } Time interval between frame: 1/30, .... { discrete: width = 1600, height = 1200 } Time interval between frame: 1/15, vid:17ef pid:480f driver:uvcvideo checking format: 1196444237 libv4l2: error setting pixformat: Device or resource busy VIDIOC_S_FORMAT - Unable to set format: Device or resource busy Init v4L2 failed !! Init video returned -2 trying minimum setup ... video device: /dev/video0 Init. Integrated Camera (location: usb-0000:00:1a.0-1.6) { pixelformat = 'YUYV', description = 'YUV 4:2:2 (YUYV)' } { discrete: width = 640, height = 480 } .... 
vid:17ef pid:480f driver:uvcvideo checking format: 1448695129 libv4l2: error setting pixformat: Device or resource busy VIDIOC_S_FORMAT - Unable to set format: Device or resource busy Init v4L2 failed !! ERROR: Minimum Setup Failed. Exiting... VIDIOC_REQBUFS - Failed to delete buffers: Invalid argument (errno 22) cleaned allocations - 100% Closing portaudio ...OK Terminated.

    Read the article

  • Set up tunnel to HE.net and now only ipv6.google.com works, but other sites ping fine.

    - by AndrejaKo
    I'm setting up IPv6 using my router which is running OpenWRT, version Backfire 10.03.1-rc4. I made a tunnel using Hurricane Electric's tunnel broker and set it up on the router and I'm using RADVD to hand out IPv6 addresses. My problem is that on computers on the network, I can only access ipv6.google.com using a browser, but other sites seem to be loading forever and won't open in any browser. I can ping and traceroute to them fine, but can't open them with a browser. I can open any site normally with a browser from the router. Stopping firewall service on the router doesn't help, so it's probably not a firewall issue. All AAAA records resolve fine, so it's probably not a DNS issue. Computers on the network get their IPv6 addresses fine, so it's probably not a radvd issue. Similar setup worked fine for SixXs, but I'm having problems with my PoP there, so I decided to move to HE. Here are some traceroutes: From a client computer: Tracing route to ipv6.he.net [2001:470:0:64::2] over a maximum of 30 hops: 1 <1 ms 1 ms 1 ms 2001:470:1f0b:de5::1 2 62 ms 63 ms 62 ms andrejako-1.tunnel.tserv6.fra1.ipv6.he.net [2001:470:1f0a:de5::1] 3 60 ms 60 ms 63 ms gige-g2-4.core1.fra1.he.net [2001:470:0:69::1] 4 63 ms 68 ms 68 ms 10gigabitethernet1-4.core1.ams1.he.net [2001:470:0:47::1] 5 84 ms 74 ms 76 ms 10gigabitethernet1-4.core1.lon1.he.net [2001:470:0:3f::1] 6 146 ms 147 ms 151 ms 10gigabitethernet4-4.core1.nyc4.he.net [2001:470:0:128::1] 7 200 ms 198 ms 202 ms 10gigabitethernet5-3.core1.lax1.he.net [2001:470:0:10e::1] 8 219 ms * 210 ms 10gigabitethernet2-2.core1.fmt2.he.net [2001:470:0:18d::1] 9 221 ms 338 ms 209 ms gige-g4-18.core1.fmt1.he.net [2001:470:0:2d::1] 10 206 ms 210 ms 207 ms ipv6.he.net [2001:470:0:64::2] Trace complete. and another from a cliet computer Tracing route to whatismyipv6.com [2001:4870:a24f:2::90] over a maximum of 30 hops: 1 7 ms 1 ms 1 ms 2001:470:1f0b:de5::1 2 69 ms 70 ms 63 ms AndrejaKo-1.tunnel.tserv6.fra1.ipv6.he.net [2001:470:1f0a:de5::1] 3 57 ms 65 ms 58 ms gige-g2-4.core1.fra1.he.net [2001:470:0:69::1] 4 73 ms 74 ms 75 ms 10gigabitethernet1-4.core1.ams1.he.net [2001:470:0:47::1] 5 71 ms 74 ms 76 ms 10gigabitethernet1-4.core1.lon1.he.net [2001:470:0:3f::1] 6 141 ms 149 ms 148 ms 10gigabitethernet2-3.core1.nyc4.he.net [2001:470:0:3e::1] 7 141 ms 147 ms 143 ms 10gigabitethernet1-2.core1.nyc1.he.net [2001:470:0:37::2] 8 144 ms 145 ms 142 ms 2001:504:1::a500:4323:1 9 226 ms 225 ms 218 ms 2001:4870:a240::2 10 220 ms 224 ms 219 ms 2001:4870:a240::2 11 219 ms 218 ms 220 ms 2001:4870:a24f::2 12 221 ms 222 ms 220 ms www.whatismyipv6.com [2001:4870:a24f:2::90] Trace complete. 
Here's some firewall info on the router: root@OpenWrt:/# iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 syn_flood tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 input_rule all -- 0.0.0.0/0 0.0.0.0/0 input all -- 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy DROP) target prot opt source destination zone_wan_MSSFIX all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED forwarding_rule all -- 0.0.0.0/0 0.0.0.0/0 forward all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 output_rule all -- 0.0.0.0/0 0.0.0.0/0 output all -- 0.0.0.0/0 0.0.0.0/0 Chain forward (1 references) target prot opt source destination zone_lan_forward all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_forward all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_forward all -- 0.0.0.0/0 0.0.0.0/0 Chain forwarding_lan (1 references) target prot opt source destination Chain forwarding_rule (1 references) target prot opt source destination nat_reflection_fwd all -- 0.0.0.0/0 0.0.0.0/0 Chain forwarding_wan (1 references) target prot opt source destination Chain input (1 references) target prot opt source destination zone_lan all -- 0.0.0.0/0 0.0.0.0/0 zone_wan all -- 0.0.0.0/0 0.0.0.0/0 zone_wan all -- 0.0.0.0/0 0.0.0.0/0 Chain input_lan (1 references) target prot opt source destination Chain input_rule (1 references) target prot opt source destination Chain input_wan (1 references) target prot opt source destination Chain nat_reflection_fwd (1 references) target prot opt source destination ACCEPT tcp -- 192.168.1.0/24 192.168.1.2 tcp dpt:80 Chain output (1 references) target prot opt source destination zone_lan_ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 Chain output_rule (1 references) target prot opt source destination Chain reject (7 references) target prot opt source destination REJECT tcp -- 0.0.0.0/0 0.0.0.0/0 reject-with tcp-reset REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable Chain syn_flood (1 references) target prot opt source destination RETURN tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 limit: avg 25/sec burst 50 DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan (1 references) target prot opt source destination input_lan all -- 0.0.0.0/0 0.0.0.0/0 zone_lan_ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan_ACCEPT (2 references) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan_DROP (0 references) target prot opt source destination DROP all -- 0.0.0.0/0 0.0.0.0/0 DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan_MSSFIX (0 references) target prot opt source destination TCPMSS tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU Chain zone_lan_REJECT (1 references) target prot opt source destination reject all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan_forward (1 references) target prot opt source destination zone_wan_ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 forwarding_lan all -- 0.0.0.0/0 0.0.0.0/0 zone_lan_REJECT all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan (2 references) target prot opt source destination ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:68 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 8 ACCEPT 41 -- 0.0.0.0/0 0.0.0.0/0 input_wan all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_REJECT all 
-- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan_ACCEPT (2 references) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan_DROP (0 references) target prot opt source destination DROP all -- 0.0.0.0/0 0.0.0.0/0 DROP all -- 0.0.0.0/0 0.0.0.0/0 DROP all -- 0.0.0.0/0 0.0.0.0/0 DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan_MSSFIX (1 references) target prot opt source destination TCPMSS tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU TCPMSS tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU Chain zone_wan_REJECT (2 references) target prot opt source destination reject all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan_forward (2 references) target prot opt source destination ACCEPT tcp -- 0.0.0.0/0 192.168.1.2 forwarding_wan all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_REJECT all -- 0.0.0.0/0 0.0.0.0/0 Here's some routing info: root@OpenWrt:/# ip -f inet6 route 2001:470:1f0a:de5::/64 via :: dev 6in4-henet proto kernel metric 256 mtu 1280 advmss 1220 hoplimit 0 2001:470:1f0b:de5::/64 dev br-lan proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 dev eth0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 dev br-lan proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 dev eth0.1 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 dev eth0.2 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 via :: dev 6in4-henet proto kernel metric 256 mtu 1280 advmss 1220 hoplimit 0 default dev 6in4-henet metric 1024 mtu 1280 advmss 1220 hoplimit 0 I have computers running windows 7 SP1 and openSUSE 11.3 and all of them have same problem. I also made a thread about this on HE's forum, but it seems that people there are out of ideas what to do.

    Read the article

  • Session Sharing with another User on *NIX and Windows

    - by Giri Mandalika
Oracle Solaris

Since Solaris is not widely known for its graphical interface, let's just focus on sharing a terminal session in read-only mode with another user on the same system. Here is an example.

    % finger
    Login       Name            TTY      Idle    When       Where
    root     Super-User         pts/1            Sat 16:57  dhcp-amer-vpn-rmdc-a
    sunperf     ???             pts/2      4     Sat 16:41  pitcher.sfbay.sun.com

In this example, two users, root and sunperf, are connected to the same system from two different terminals, pts/1 and pts/2 respectively. If the root user wants to show the sunperf user something - what s/he is doing in her/his terminal, for example - it can be accomplished with the following command.

    script -a /dev/null | tee -a <target_terminal>

eg.,

    # script -a /dev/null | tee -a /dev/pts/2
    Script started, file is /dev/null
    #
    # uptime
     5:04pm up 1 day(s), 2:56, 2 users, load average: 0.81, 0.81, 0.81
    #
    # isainfo -v
    64-bit sparcv9 applications
        crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc
    32-bit sparc applications
        crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc v8plus div32 mul32
    #
    # exit
    Script done, file is /dev/null

After the script .. | tee .. command, the sunperf user should be able to see the root user's stdin and stdout contents in her/his own terminal until the script session exits in the root user's terminal. Since this kind of sharing is based on capturing and redirecting the contents to the target terminal, the users on the receiving end won't be able to see whatever is being edited on the initiator's terminal [using editors such as vi]. Also it is not possible to share the session with any connected user on the system unless the initiator has the necessary permissions and privileges.

The script utility records everything printed in a terminal session, while the tee utility replicates the contents of the screen capture on to the standard output of the target terminal. The tee utility does not buffer the output - so, the screen capture from the initiator's terminal appears almost right away in the target terminal.

Though I never tested, this technique may work on all *NIX and Linux flavors with little or no changes. Also there might be other ways to accomplish this. [Thanks to Sujeet for sharing this tip]

Microsoft Windows

Most Windows users may rely on VNC services to share a desktop session. Another way to share the desktop session is to use the Remote Desktop Connection (RDC) client. Here are the steps.
1. Connect to the target Windows system using the Remote Desktop Connection client
2. Launch Windows Task Manager
3. Navigate to the "Users" tab
4. Find the user session that you want to connect to and have full control over as the other user who is currently holding that session
5. Select the user name in Windows Task Manager, right click and choose the option "Remote Control"
6. A window pops up on the other user's session with the message "<USER> is requesting to control your session remotely. Do you accept the request?"
Once the other user says "Yes", you will be granted access to that session. From then on, both users should be able to see the same screen and even control the session from their respective workstations.

    Read the article

< Previous Page | 122 123 124 125 126 127 128 129 130 131 132 133  | Next Page >