Search Results

Search found 7606 results on 305 pages for 'tmax dev'.


  • E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution)

    - by B Jo
    I have been trying to upgrade to Ubuntu 13.04 for almost a month now. I am fairly new to Linux and to software in general. My /boot is full: bijo@bijo-AMILO-Xi-2428:~$ df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/ubuntu-root 228G 7.7G 208G 4% / udev 1001M 4.0K 1001M 1% /dev tmpfs 404M 836K 403M 1% /run none 5.0M 0 5.0M 0% /run/lock none 1008M 156K 1008M 1% /run/shm none 100M 48K 100M 1% /run/user /dev/sda1 228M 222M 0 100% /boot I tried sudo apt-get purge $( dpkg --list | grep -P -o "linux-image-\d\S+" | grep -v $(uname -r | grep -P -o ".+\d") ) but I got this reply: E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). In fact I'm going round and round. Can someone guide me through this, please? Thanks in advance for your time.

    Read the article
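
    One common way out of the situation above, sketched under the assumption that the only real problem is a full /boot: free space by hand for a kernel that is no longer in use (never the one `uname -r` reports), then let apt finish its interrupted work. The kernel version below is a placeholder.

        # See which kernel is running and which images are installed
        uname -r
        dpkg --list 'linux-image-*' | grep '^ii'

        # Free space in /boot by removing the files of ONE old kernel
        # (hypothetical version string - substitute one from the list above)
        sudo rm /boot/*-3.5.0-17-generic

        # With space available again, let apt repair the broken state
        sudo apt-get -f install
        sudo apt-get autoremove --purge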

  • Can't access my partitions after installing Ubuntu

    - by Manaf Al-Sarraf
    I installed Ubuntu 13.04, replacing Windows 8. After the installation completed I could no longer access my partitions, and I receive a message like this: Error mounting /dev/sda5 at /media/manaf/01CD6C0800CEE670: Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sda5" "/media/manaf/01CD6C0800CEE670"' exited with non-zero exit status 14: The disk contains an unclean file system (0, 0). Metadata kept in Windows cache, refused to mount. Failed to mount '/dev/sda5': Operation not permitted The NTFS partition is in an unsafe state. Please resume and shutdown Windows fully (no hibernation or fast restarting), or mount the volume read-only with the 'ro' mount option." Thanks for helping :)

    Read the article
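
    The message above usually means Windows 8's Fast Startup left the NTFS volume marked dirty. A minimal sketch of the two usual options, assuming /dev/sda5 is the partition in question: mount it read-only for now, or clear the dirty flag with ntfsfix (accepting that metadata cached by Windows may be lost).

        # Safe: read-only access while the volume is in an unclean state
        sudo mkdir -p /mnt/windows
        sudo mount -t ntfs-3g -o ro /dev/sda5 /mnt/windows

        # Alternative (use with care): clear the "unclean" flag
        # sudo ntfsfix /dev/sda5

    The cleaner fix is to boot Windows, disable Fast Startup, and shut it down fully before mounting the partition from Ubuntu.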

  • How can I use a CanoScan N640Pex scanner over a USB/parallel cable?

    - by detly
    I have an old CanoScan N640Pex flatbed scanner and a USB-to-parallel port cable through which I can connect it to my PC. Unfortunately neither Simple Scan nor XSane detects the scanner. I'm running Ubuntu 12.04 (plus updates and backports) with kernel 3.2.0-31-generic. dmesg tells me this when I plug the cable in: [256411.641910] usb 7-1: new full-speed USB device number 10 using uhci_hcd [256411.872392] usblp0: USB Bidirectional printer dev 10 if 0 alt 1 proto 2 vid 0x067B pid 0x2305 [256411.872417] usbcore: registered new interface driver usblp lsusb shows this device for my cable: Bus 007 Device 010: ID 067b:2305 Prolific Technology, Inc. PL2305 Parallel Port The device node created is /dev/usb/lp0 crw-rw---- 1 root lp 180, 0 Sep 9 17:46 /dev/usb/lp0 There is no extra information from any of these commands when I attach the scanner to the cable and power it on, though. I suspect I might need to change something in /etc/sane.d/canon_pp.conf, but I have no idea what to put for the ieee1284 line, since there is pretty much zero documentation for that parameter. So how can I get it to work?

    Read the article
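
    A few diagnostics that may help with the question above (a sketch, not a guaranteed fix): check whether SANE sees any scanner at all, and whether the kernel exposes a real parallel port. USB-to-parallel printer cables typically show up as a usblp printer device rather than as /dev/parportN, and the canon_pp backend needs a real ieee1284 parallel port, so this class of adapter may simply not be usable here.

        # Does SANE detect anything on the bus?
        sudo sane-find-scanner -v
        scanimage -L

        # Does the kernel expose a parallel port the canon_pp backend could use?
        ls -l /dev/parport* 2>/dev/null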

  • grub2 update-grub puts wrong UUID in grub.cfg for system with separate /, /boot and /home partitions

    - by keepitsimpleengineer
    It is putting in the UUID for the boot partition and not the / (root) partition. It's grub2 (1.99-21ubuntu3.1) on Xubuntu 12.04. UPDATE: I ran Boot Info Script 0.60; the output is too big for Ask Ubuntu, but is here. The system that is not booting is on /dev/sdh. The booted disk, Xubuntu 12.04, is /dev/sdb. The /etc/fstab from sdh2 is at line 1073. The incorrect UUIDs are at lines 945 and 954. The "blkid" output is at line 318. It is putting the UUID for "boot" rather than "root" into grub.cfg, at line 937. I have noticed that the relationship between physical drives and names in /dev varies depending on which system is booted.

    Read the article
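
    A sketch of one way to verify and repair the UUID mismatch described above, assuming the non-booting install lives on /dev/sdh (the partition numbers below are placeholders): compare blkid against the generated grub.cfg, then regenerate it from inside that system via chroot.

        # Compare the real UUIDs with what the non-booting install's grub.cfg references
        sudo blkid

        # Mount the non-booting system and chroot into it (placeholder partitions)
        sudo mount /dev/sdh2 /mnt              # its / partition
        sudo mount /dev/sdh1 /mnt/boot         # its separate /boot partition
        for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done

        # Regenerate grub.cfg and reinstall GRUB to that disk
        sudo chroot /mnt update-grub
        sudo chroot /mnt grub-install /dev/sdh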

  • Ubuntu 14.04 doesn't boot after upgrade from 12.04 installed inside Windows 8.1

    - by AdiC
    I have Ubuntu 12.04 installed like an app on Windows 8.1 (Ubuntu 12.04 can be installed like an app inside Windows 8.1, and it can be removed from Control Panel when you don't need it any more). Normally, to choose which OS to boot when you start the laptop, you can pick between Windows 8.1 and Ubuntu after the Windows logo appears at start-up, and that was fine until I made this upgrade. Now when I choose Ubuntu the laptop tries to boot it but, after the full-coloured screen is shown, the screen goes black and these messages appear: mount: mounting /dev/loop0/ on /root failed : Invalid argument mount: mounting /dev on /root/dev failed: No such file or directory mount: mounting /sys on /root/sys failed: No such file or directory mount: mounting /proc on /root/proc failed: No such file or directory Target filesystem doesn't have requested /sbin/init No init found. Try passing init = bootarg. BusyBox v1.21.1 (Ubuntu 1:1:21.0-1ubuntu1) built-in shell (ash) Enter 'help' for a list of built-in commands (initramfs) _ I don't know what to do after this screen appears. Please help!

    Read the article

  • Problem reinstalling GRUB2

    - by Caustic
    I tried to upgrade from 12.04 to 12.10. The install seemed to go well, but upon reboot I get the grub recovery console. I tried to repair it using Boot Repair, which usually works a treat; however, this time it says it is scanning my HDD and never finishes. When I try to reinstall grub using the old grub-install <dir> <dev> --recheck --debug &> gruboutput.txt I get this Pastebin link. I think it should be installing to /dev/mapper/isw_gbaaebhfe_Volume0, even though I specified /dev/mapper/isw_gbaaebhfe_Volume3. Any help would be appreciated.

    Read the article
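
    A sketch of the usual chroot route for reinstalling GRUB onto a dmraid/fakeRAID set like the one above; the partition that holds / is a placeholder and must be replaced with the real one.

        # From a live session - substitute the partition that actually holds /
        ROOT_PART=/dev/mapper/isw_gbaaebhfe_Volume0p1   # hypothetical
        RAID_DEV=/dev/mapper/isw_gbaaebhfe_Volume0

        sudo mount "$ROOT_PART" /mnt
        for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
        sudo chroot /mnt grub-install --recheck "$RAID_DEV"
        sudo chroot /mnt update-grub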

  • Startup applications 14.04

    - by gpalthrow
    I'm trying to mount 3 partitions by default when I log in. I used to do that with Startup Applications in 12.04. However, there seems to be a nasty bug in 14.04 where entries wrongly flagged as duplicate commands are removed (apparently only the command itself is taken into account, even if the arguments differ). I tried using one single command instead of 3, putting all the devices on one line (something like /usr/bin/udisks --mount /dev/sdb1; /usr/bin/udisks --mount /dev/sdb2; /usr/bin/udisks --mount /dev/sdb3; ), but that does not seem to work either. What can I do to have the 3 partitions mounted by default (with their labels)?

    Read the article
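
    One way to sidestep the duplicate-command bug described above: wrap the three mounts in a single script and point one Startup Applications entry at it. A minimal sketch, reusing the udisks calls from the question (the script path is arbitrary).

        #!/bin/bash
        # ~/bin/mount-data-partitions.sh - mount all three partitions in one command,
        # so Startup Applications only ever sees a single entry
        /usr/bin/udisks --mount /dev/sdb1
        /usr/bin/udisks --mount /dev/sdb2
        /usr/bin/udisks --mount /dev/sdb3

    Make the script executable with chmod +x and add just that one command as the startup entry.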

  • System checks for disk drive error every time it boots

    - by Starx
    When the disk space for my Ubuntu installation partition was getting low, I used GParted from a live CD to increase its capacity by deleting another partition and merging it into the Ubuntu partition. Ever since, a disk check for errors runs on my partitions at the boot screen, every time. What seems to be causing this and how do I fix it? Update: Here is my boot.log in case it provides some insight: fsck from util-linux 2.19.1 fsck from util-linux 2.19.1 /dev/sda1 was not cleanly unmounted, check forced. ubuntu: clean, 501325/1310720 files, 2958455/5242880 blocks /dev/sda1: 241/51272 files (3.3% non-contiguous), 73541/102400 blocks mountall: fsck /boot [358] terminated with status 1 Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox ... /dev/sda1 is a separate grub partition for my dual-boot setup.

    Read the article
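
    The boot.log above says /dev/sda1 "was not cleanly unmounted", which forces a check on every boot. A sketch of how to clear and inspect that state from a live CD/USB with the partitions unmounted; /dev/sdaX is a placeholder for the resized root partition.

        # Force a full check once so the filesystems are marked clean
        sudo fsck -f /dev/sda1
        sudo fsck -f /dev/sdaX          # placeholder: the resized Ubuntu partition

        # See whether periodic checks are also due (mount count / interval)
        sudo tune2fs -l /dev/sda1 | grep -iE 'mount count|check'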

  • X11 Development Libraries

    - by user2592799
    I am new to Ubuntu. I am using Ubuntu 13.10 and trying to install NS-2. During the installation I ran into the following error: X11/Xlib.h: No such file or directory Then I tried to install the headers with sudo apt-get install libx11-dev but this time I got the following error: Reading package lists... Done Building dependency tree Reading state information... Done Package libx11-dev is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'libx11-dev' has no installation candidate I have no idea how to deal with it. Please help. Thanks in advance.

    Read the article
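
    "No installation candidate" for a package as standard as libx11-dev usually points at stale or incomplete package lists rather than a genuinely missing package. A minimal sketch, assuming the default Ubuntu repositories:

        # Refresh the package lists, then retry
        sudo apt-get update
        sudo apt-get install libx11-dev

        # If it still fails, check that the 'main' component is enabled
        grep -n '^deb' /etc/apt/sources.list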

  • How can I handle "NTFS partition is in unsafe state"?

    - by user211040
    Error mounting /dev/sda3 at /media/franklcohen/OS: Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sda3" "/media/franklcohen/OS"' exited with non-zero exit status 14: Windows is hibernated, refused to mount. Failed to mount '/dev/sda3': Operation not permitted The NTFS partition is in an unsafe state. Please resume and shutdown Windows fully (no hibernation or fast restarting), or mount the volume read-only with the 'ro' mount option. I get this error even though I have disabled fast startup in Windows 8. What can I do? I have shut my computer down fully from Windows four times and disabled fast startup there. I'm using Ubuntu 13.10. Please help, thanks.

    Read the article
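
    With Windows genuinely hibernated, ntfs-3g refuses read-write mounts by design; read-only ('ro') always works. If the hibernated Windows session is expendable, ntfs-3g can also discard it so the volume mounts read-write. A sketch, assuming /dev/sda3 as above:

        # Discards the hibernation image (the saved Windows session is lost)
        sudo mkdir -p /mnt/windows
        sudo mount -t ntfs-3g -o remove_hiberfile /dev/sda3 /mnt/windows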

  • Is it possible to boot without mounting /home?

    - by Exeleration-G
    I want to backup my /home partition on /dev/sda6 using partclone, a command line utility. To do so, I first have to unmount the partition that I want to backup. Most of the time, this is easy, but /home is used by so many processes that it can't be unmounted without first killing all those processes. So, the thing I'm looking for is a way to boot Ubuntu, without mounting /home, so I can back up the not-mounted /dev/sda6 partition. Is that possible? To be clear, it would be nice if this special boot could be 'one-time-only'. So I'm not looking for ways to change /etc/fstab in such way that /dev/sda6 won't be mounted. That's because that would require me to change /etc/fstab twice each time just to make a backup. I'm aware of the fact that there are other backup solutions available, such as deja-dup. I'd like to use partclone, though.

    Read the article
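
    For a one-off backup it may be enough to stop everything that uses /home from a text console instead of changing /etc/fstab. A sketch, assuming an ext4 /home on /dev/sda6 and LightDM as the display manager:

        # At a text console (Ctrl+Alt+F1), not inside a graphical session:
        sudo service lightdm stop       # stop the display manager
        sudo fuser -vm /home            # list remaining processes holding /home
        sudo fuser -km /home            # kill them
        sudo umount /home

        # Back up the now-unmounted partition with partclone
        sudo partclone.ext4 -c -s /dev/sda6 -o /path/to/home-backup.img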

  • Can't open any of my drives/devices (not USB drives)

    - by Anontu
    I can't open any of my computer's drives (C:, D:, ...). Every time I try to open a drive, the following notice appears (I'm new on Ask Ubuntu, so I can't upload a screenshot; I need 10 reputation to do that): **Unable to access "__ Volume"** Error mounting /dev/sda7 at /media/MyPC/__Volume: Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sda7" "/media/MyPC/__Volume"' exited with non-zero exit status 14: The disk contains an unclean file system (0, 0). Metadata kept in Windows cache, refused to mount. Failed to mount '/dev/sda7': Operation not permitted The NTFS partition is in an unsafe state. Please resume and shutdown Windows fully (no hibernation or fast restarting), or mount the volume read-only with the 'ro' mount option. I have Windows installed side by side with Ubuntu 13.04 on my computer, and I have done the things suggested in the notice (like shutting down Windows properly), but it's not working.

    Read the article

  • External hdd boot entries added to GRUB after upgrade to 13.10

    - by Jason
    Long story short, I was using my laptop's internal HDD as an external drive on my desktop, and I forgot to disconnect it when I upgraded to 13.10. Now GRUB on my desktop has added entries for the Windows and Ubuntu partitions that exist on the laptop's HDD, but I don't want them to be there. Can I safely remove them? My GRUB menu looks like this, if I recall correctly: Ubuntu Advanced Options (or something like this) Memtest Another Memtest Entry Windows 7 (loader) on /dev/sda1 Windows 7 (loader) on /dev/sdb1 Ubuntu 13.04 Advanced Options for Ubuntu 13.04 Where Windows 7 (loader) on /dev/sdb1, Ubuntu 13.04 and Advanced Options for Ubuntu 13.04 are the entries from the external/laptop's HDD.

    Read the article
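
    The stale entries come from os-prober scanning whatever disks are attached when update-grub runs. A minimal sketch of the usual fix, assuming the laptop's HDD is disconnected first:

        # With the external/laptop HDD unplugged, regenerate the menu;
        # os-prober only picks up operating systems it can see right now
        sudo update-grub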

  • /usr/src is eating up all inodes

    - by klingone
    It seems /usr/src (apparently old kernels) has used up all my inodes: Filesystem Inodes IUsed IFree IUse% Mounted on /dev/sda4 489600 489600 0 100% / devtmpfs 219658 539 219119 1% /dev none 219844 474 219370 1% /run none 219844 3 219841 1% /run/lock none 219844 8 219836 1% /run/shm /dev/sda6 5963776 8361 5955415 1% /home I have tried everything to remove/purge the old kernels, without success. dpkg is not working any more; I tried a few manual commands but 12.04 gives me nothing, and apt-get is not possible due to the apparent lack of space on the hard drive, which obviously isn't the real problem. Either way, I cannot install or remove anything! I have read a lot about users with the same problem, but their solutions are not working for me. Anyone out there who could help? Thanks a lot!

    Read the article
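
    Old linux-headers trees under /usr/src contain tens of thousands of small files, which is what exhausts the inodes. A sketch of one way out when dpkg/apt are already stuck, assuming the running kernel must be kept (the version below is a placeholder):

        # What is installed, and what are we running?
        ls /usr/src
        uname -r

        # Remove header trees for kernels no longer in use
        # (hypothetical version - never delete the ones matching `uname -r`)
        sudo rm -rf /usr/src/linux-headers-3.2.0-23*

        # With inodes free again, let the package system clean up properly
        sudo apt-get -f install
        sudo apt-get autoremove --purge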

  • Hide from Google while developing

    - by user210757
    I will be building a (WordPress) web site. While I am developing, other team members will be pushing content. I'd like to keep it hidden from Google while it is under development. It will be hosted on GoDaddy. I have thought of not pointing the domain name at it until it goes live and using "preview DNS", or buying a static IP during development, or hosting the dev site in a sub-directory ("/dev/") until it is ready and then moving it up a level. If it lives in the dev directory I'd add an .htaccess file or robots.txt to prevent crawling. Is any of this a bad idea? Will Google penalize any of this - for example, index the site by IP and then associate that with the domain later on? Any better ideas?

    Read the article
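
    A sketch of the robots.txt-plus-password approach mentioned above (paths and the username are placeholders): robots.txt politely asks crawlers to stay away, and HTTP basic auth ensures nothing gets indexed even if a link leaks.

        # Ask crawlers not to index anything
        printf 'User-agent: *\nDisallow: /\n' > /path/to/dev-site/robots.txt

        # Enforce it with basic auth (requires Apache with .htaccess overrides enabled)
        htpasswd -c /path/to/.htpasswd devuser
        printf 'AuthType Basic\nAuthName "Development site"\nAuthUserFile /path/to/.htpasswd\nRequire valid-user\n' \
            > /path/to/dev-site/.htaccess

    Password protection generally avoids any lasting SEO effect, since Google never sees the content in the first place.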

  • Update failing. Not enough space on /tmp

    - by KodeSeeker
    I'm not able to run Update Manager; I get an error saying that there is not enough free space in the /tmp directory. I've practically cleaned out the /tmp directory, but the error persists. Any help would be appreciated. Here's df -h: /dev/loop0 13G 11G 952M 92% / udev 2.0G 4.0K 2.0G 1% /dev tmpfs 785M 920K 784M 1% /run none 5.0M 0 5.0M 0% /run/lock none 2.0G 584K 2.0G 1% /run/shm /dev/sda6 20G 14G 6.4G 68% /host overflow 1.0M 16K 1008K 2% /tmp Thanks.

    Read the article
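
    The telling line in the df output above is "overflow 1.0M ... /tmp": when / is nearly full at boot, Ubuntu mounts a tiny emergency tmpfs on /tmp, and that 1 MB is what Update Manager runs out of. A sketch of the usual remedy, assuming space can be freed on /:

        # Free space on the root filesystem first
        sudo apt-get clean
        sudo apt-get autoremove --purge

        # Then drop the emergency tmpfs so the real /tmp (on /) is used again
        sudo umount /tmp    # or simply reboot once space has been freed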

  • Azure WNS to Win8 - Push Notifications for Metro Apps

    - by JoshReuben
    Background The Windows Azure Toolkit for Windows 8 allows you to build a Windows Azure Cloud Service that can send Push Notifications to registered Metro apps via Windows Notification Service (WNS). Some configuration is required - you need to: Register the Metro app for Windows Live Application Management Provide Package SID & Client Secret to WNS Modify the Azure Cloud App cscfg file and the Metro app package.appxmanifest file to contain matching Metro package name, SID and client secret. The Mechanism: These notifications take the form of XAML Tile, Toast, Raw or Badge UI notifications. The core engine is provided via the WNS nuget recipe, which exposes an API for constructing payloads and posting notifications to WNS. An application receives push notifications by requesting a notification channel from WNS, which returns a channel URI that the application then registers with a cloud service. In the cloud service, A WnsAccessTokenProvider authenticates with WNS by providing its credentials, the package SID and secret key, and receives in return an access token that the provider caches and can reuse for multiple notification requests. The cloud service constructs a notification request by filling out a template class that contains the information that will be sent with the notification, including text and image references. Using the channel URI of a registered client, the cloud service can then send a notification whenever it has an update for the user. The package contains the NotificationSendUtils class for submitting notifications. The Windows Azure Toolkit for Windows 8 (WAT) provides the PNWorker sample pair of solutions - The Azure server side contains a WebRole & a WorkerRole. The WebRole allows submission of new push notifications into an Azure Queue which the WorkerRole extracts and processes. Further background resources: http://watwindows8.codeplex.com/ - Windows Azure Toolkit for Windows 8 http://watwindows8.codeplex.com/wikipage?title=Push%20Notification%20Worker%20Sample - WAT WNS sample setup http://watwindows8.codeplex.com/wikipage?title=Using%20the%20Windows%208%20Cloud%20Application%20Services%20Application – using Windows 8 with Cloud Application Services A bit of Configuration Register the Metro apps for Windows Live Application Management From the current app manifest of your metro app Publish tab, copy the Package Display Name and the Publisher From: https://manage.dev.live.com/Build/ Package name: <-- we need to change this Client secret: keep this Package Security Identifier (SID): keep this Verify the app here: https://manage.dev.live.com/Applications/Index - so this step is done "If you wish to send push notifications in your application, provide your Package Security Identifier (SID) and client secret to WNS." Provide Package SID & Client Secret to WNS http://msdn.microsoft.com/en-us/library/windows/apps/hh465407.aspx - How to authenticate with WNS https://appdev.microsoft.com/StorePortals/en-us/Account/Signup/PurchaseSubscription - register app with dashboard - need registration code or register a new account & pay $170 shekels http://msdn.microsoft.com/en-us/library/windows/apps/hh868184.aspx - Registering for a Windows Store developer account http://msdn.microsoft.com/en-us/library/windows/apps/hh868187.aspx - Picking a Microsoft account for the Windows Store The WNS Nuget Recipe The WNS Recipe is a nuget package that provides an API for authenticating against WNS, constructing payloads and posting notifications to WNS. 
After installing this package, a WnsRecipe assembly is added to project references. To send notifications using WNS, first register the application at the Windows Push Notifications & Live Connect portal to obtain Package Security Identifier (SID) and a secret key that your cloud service uses to authenticate with WNS. An application receives push notifications by requesting a notification channel from WNS, which returns a channel URI that the application then registers with a cloud service. In the cloud service, the WnsAccessTokenProvider authenticates with WNS by providing its credentials, the package SID and secret key, and receives in return an access token that the provider caches and can reuse for multiple notification requests. The cloud service constructs a notification request by filling out a template class that contains the information that will be sent with the notification, including text and image references.Using the channel URI of a registered client, the cloud service can then send a notification whenever it has an update for the user. var provider = new WnsAccessTokenProvider(clientId, clientSecret); var notification = new ToastNotification(provider) {     ToastType = ToastType.ToastText02,     Text = new List<string> { "blah"} }; notification.Send(channelUri); the WNS Recipe is instrumented to write trace information via a trace listener – configuratively or programmatically from Application_Start(): WnsDiagnostics.Enable(); WnsDiagnostics.TraceSource.Listeners.Add(new DiagnosticMonitorTraceListener()); WnsDiagnostics.TraceSource.Switch.Level = SourceLevels.Verbose; The WAT PNWorker Sample The Azure server side contains a WebRole & a WorkerRole. The WebRole allows submission of new push notifications into an Azure Queue which the WorkerRole extracts and processes. Overview of Push Notification Worker Sample The toolkit includes a sample application based on the same solution structure as the one created by theWindows 8 Cloud Application Services project template. The sample demonstrates how to off-load the job of sending Windows Push Notifications using a Windows Azure worker role. You can find the source code in theSamples\PNWorker folder. This folder contains a full version of the sample application showing how to use Windows Push Notifications using ASP.NET Membership as the authentication mechanism. The sample contains two different solution files: WATWindows.Azure.sln: This solution must be opened with Visual Studio 2010 and contains the projects related to the Windows Azure web and worker roles. WATWindows.Client.sln: This solution must be opened with Visual Studio 11 and contains the Windows Metro style application project. Only Visual Studio 2010 supports Windows Azure cloud projects so you currently need to use this edition to launch the server application. This will change in a future release of the Windows Azure tools when support for Visual Studio 11 is enabled. Important: Setting up the PNWorker Sample Before running the PNWorker sample, you need to register the application and configure it: 1. Register the app: To register your application, go to the Windows Live Application Management site for Metro style apps at https://manage.dev.live.com/build and sign in with your Windows Live ID. In the Windows Push Notifications & Live Connect page, enter the following information. Package Display Name PNWorker.Sample Publisher CN=127.0.0.1, O=TESTING ONLY, OU=Windows Azure DevFabric 2. 3. 
Once you register the application, make a note of the values shown in the portal for Client Secret,Package Name and Package SID. 4. Configure the app - double-click the SetupSample.cmd file located inside the Samples\PNWorker folder to launch a tool that will guide you through the process of configuring the sample. setup runs a PowerShell script that requires running with administration privileges to allow the scripts to execute in your machine. When prompted, enter the Client Secret, Package Name, and Package Security Identifier you obtained previously and wait until the tool finishes configuring your sample. Running the PNWorker Sample To run this sample, you must run both the client and the server application projects. 1. Open Visual Studio 2010 as an administrator. Open the WATWindows.Azure.sln solution. Set the start-up project of the solution as the cloud project. Run the app in the dev fabric to test. 2. Open Visual Studio 11 and open the WATWindows.Client.sln solution. Run the Metro client application. In the client application, click Reopen channel and send to server. à the application opens the channel and registers it with the cloud application, & the Output area shows the channel URI. 3. Refresh the WebRole's Push Notifications page to see the UI list the newly registered client. 4. Send notifications to the client application by clicking the Send Notification button. Setup 3 command files + 1 powershell script: SetupSample.cmd –> SetupWPNS.vbs –> SetupWPNS.cmd –> SetupWPNS.UpdateWPNSCredentialsInServiceConfiguration.ps1 appears to set PackageName – from manifest Client Id package security id (SID) – from registration Client Secret – from registration The following configs are modified: WATWindows\ServiceConfiguration.Cloud.cscfg WATWindows\ServiceConfiguration.Local.cscfg WATWindows.Client\package.appxmanifest WatWindows.Notifications A class library – it references the following WNS DLL: C:\WorkDev\CountdownValue\AzureToolkits\WATWindows8\Samples\PNWorker\packages\WnsRecipe.0.0.3.0\lib\net40\WnsRecipe.dll NotificationJobRequest A DataContract for triggering notifications:     using System.Runtime.Serialization; using Microsoft.Windows.Samples.Notifications;     [DataContract]     [KnownType(typeof(WnsAccessTokenProvider))] public class NotificationJobRequest     {               [DataMember] public bool ProcessAsync { get; set; }          [DataMember] public string Payload { get; set; }         [DataMember] public string ChannelUrl { get; set; }         [DataMember] public NotificationType NotificationType { get; set; }         [DataMember] public IAccessTokenProvider AccessTokenProvider { get; set; }         [DataMember] public NotificationSendOptions NotificationSendOptions{ get; set; }     } Investigated these types: WnsAccessTokenProvider – a DataContract that contains the client Id and client secret NotificationType – an enum that can be: Tile, Toast, badge, Raw IAccessTokenProvider – get or reset the access token NotificationSendOptions – SecondsTTL, NotificationPriority (enum), isCache, isRequestForStatus, Tag   There is also a NotificationJobSerializer class which basically wraps a DataContractSerializer serialization / deserialization of NotificationJobRequest The WNSNotificationJobProcessor class This class wraps the NotificationSendUtils API – it periodically extracts any NotificationJobRequest objects from a CloudQueue and submits them to WNS. 
The ProcessJobMessageRequest method – this is the punchline: it will deserialize a CloudQueueMessage into a NotificationJobRequest & send pass its contents to NotificationUtils to SendAsynchronously / SendSynchronously, (and then dequeue the message).     public override void ProcessJobMessageRequest(CloudQueueMessage notificationJobMessageRequest)         { Trace.WriteLine("Processing a new Notification Job Request", "Information"); NotificationJobRequest pushNotificationJob =                 NotificationJobSerializer.Deserialize(notificationJobMessageRequest.AsString); if (pushNotificationJob != null)             { if (pushNotificationJob.ProcessAsync)                 { Trace.WriteLine("Sending the notification asynchronously", "Information"); NotificationSendUtils.SendAsynchronously( new Uri(pushNotificationJob.ChannelUrl),                         pushNotificationJob.AccessTokenProvider,                         pushNotificationJob.Payload,                         result => this.ProcessSendResult(pushNotificationJob, result),                         result => this.ProcessSendResultError(pushNotificationJob, result),                         pushNotificationJob.NotificationType,                         pushNotificationJob.NotificationSendOptions);                 } else                 { Trace.WriteLine("Sending the notification synchronously", "Information"); NotificationSendResult result = NotificationSendUtils.Send( new Uri(pushNotificationJob.ChannelUrl),                         pushNotificationJob.AccessTokenProvider,                         pushNotificationJob.Payload,                         pushNotificationJob.NotificationType,                         pushNotificationJob.NotificationSendOptions); this.ProcessSendResult(pushNotificationJob, result);                 }             } else             { Trace.WriteLine("Could not deserialize the notification job", "Error");             } this.queue.DeleteMessage(notificationJobMessageRequest);         } Investigation of NotificationSendUtils class - This is the engine – it exposes Send and a SendAsyncronously overloads that take the following params from the NotificationJobRequest: Channel Uri AccessTokenProvider Payload NotificationType NotificationSendOptions WebRole WebRole is a large MVC project – it references WatWindows.Notifications as well as the following WNS DLL: \AzureToolkits\WATWindows8\Samples\PNWorker\packages\WnsRecipe.0.0.3.0\lib\net40\NotificationsExtensions.dll Controllers\PushNotificationController.cs Notification related namespaces:     using Notifications;     using NotificationsExtensions;     using NotificationsExtensions.BadgeContent;     using NotificationsExtensions.RawContent;     using NotificationsExtensions.TileContent;     using NotificationsExtensions.ToastContent;     using Windows.Samples.Notifications; TokenProvider – initialized from the Azure RoleEnvironment:   IAccessTokenProvider tokenProvider = new WnsAccessTokenProvider(         RoleEnvironment.GetConfigurationSettingValue("WNSPackageSID"),         RoleEnvironment.GetConfigurationSettingValue("WNSClientSecret")); SendNotification method – calls QueuePushMessage method to create and serialize a NotificationJobRequest and enqueue it in a CloudQueue [HttpPost]         public ActionResult SendNotification(             [ModelBinder(typeof(NotificationTemplateModelBinder))] INotificationContent notification,             string channelUrl,             NotificationPriority priority = NotificationPriority.Normal)         {             var payload = 
notification.GetContent();             var options = new NotificationSendOptions()             {                 Priority = priority             };             var notificationType =                 notification is IBadgeNotificationContent ? NotificationType.Badge :                 notification is IRawNotificationContent ? NotificationType.Raw :                 notification is ITileNotificationContent ? NotificationType.Tile :                 NotificationType.Toast;             this.QueuePushMessage(payload, channelUrl, notificationType, options);             object response = new             {                 Status = "Queued for delivery to WNS"             };             return this.Json(response);         } GetSendTemplate method: Create the cshtml partial rendering based on the notification type     [HttpPost]         public ActionResult GetSendTemplate(NotificationTemplateViewModel templateOptions)         {             PartialViewResult result = null;             switch (templateOptions.NotificationType)             {                 case "Badge":                     templateOptions.BadgeGlyphValueContent = Enum.GetNames(typeof( GlyphValue));                     ViewBag.ViewData = templateOptions;                     result = PartialView("_" + templateOptions.NotificationTemplateType);                     break;                 case "Raw":                     ViewBag.ViewData = templateOptions;                     result = PartialView("_Raw");                     break;                 case "Toast":                     templateOptions.TileImages = this.blobClient.GetAllBlobsInContainer(ConfigReader.GetConfigValue("TileImagesContainer")).OrderBy(i => i.FileName).ToList();                     templateOptions.ToastAudioContent = Enum.GetNames(typeof( ToastAudioContent));                     templateOptions.Priorities = Enum.GetNames(typeof( NotificationPriority));                     ViewBag.ViewData = templateOptions;                     result = PartialView("_" + templateOptions.NotificationTemplateType);                     break;                 case "Tile":                     templateOptions.TileImages = this.blobClient.GetAllBlobsInContainer(ConfigReader.GetConfigValue("TileImagesContainer")).OrderBy(i => i.FileName).ToList();                     ViewBag.ViewData = templateOptions;                     result = PartialView("_" + templateOptions.NotificationTemplateType);                     break;             }             return result;         } Investigated these types: ToastAudioContent – an enum of different Win8 sound effects for toast notifications GlyphValue – an enum of different Win8 icons for badge notifications · Infrastructure\NotificationTemplateModelBinder.cs WNS Namespace references     using NotificationsExtensions.BadgeContent;     using NotificationsExtensions.RawContent;     using NotificationsExtensions.TileContent;     using NotificationsExtensions.ToastContent; Various NotificationFactory derived types can server as bindable models in MVC for creating INotificationContent types. Default values are also set for IWideTileNotificationContent & IToastNotificationContent. 
Type factoryType = null;             switch (notificationType)             {                 case "Badge":                     factoryType = typeof(BadgeContentFactory);                     break;                 case "Tile":                     factoryType = typeof(TileContentFactory);                     break;                 case "Toast":                     factoryType = typeof(ToastContentFactory);                     break;                 case "Raw":                     factoryType = typeof(RawContentFactory);                     break;             } Investigated these types: BadgeContentFactory – CreateBadgeGlyph, CreateBadgeNumeric (???) TileContentFactory – many notification content creation methods , apparently one for every tile layout type ToastContentFactory – many notification content creation methods , apparently one for every toast layout type RawContentFactory – passing strings WorkerRole WNS Namespace references using Notifications; using Notifications.WNS; using Windows.Samples.Notifications; OnStart() Method – on Worker Role startup, initialize the NotificationJobSerializer, the CloudQueue, and the WNSNotificationJobProcessor _notificationJobSerializer = new NotificationJobSerializer(); _cloudQueueClient = this.account.CreateCloudQueueClient(); _pushNotificationRequestsQueue = _cloudQueueClient.GetQueueReference(ConfigReader.GetConfigValue("RequestQueueName")); _processor = new WNSNotificationJobProcessor(_notificationJobSerializer, _pushNotificationRequestsQueue); Run() Method – poll the Azure Queue for NotificationJobRequest messages & process them:   while (true)             { Trace.WriteLine("Checking for Messages", "Information"); try                 { Parallel.ForEach( this.pushNotificationRequestsQueue.GetMessages(this.batchSize), this.processor.ProcessJobMessageRequest);                 } catch (Exception e)                 { Trace.WriteLine(e.ToString(), "Error");                 } Trace.WriteLine(string.Format("Sleeping for {0} seconds", this.pollIntervalMiliseconds / 1000)); Thread.Sleep(this.pollIntervalMiliseconds);                                            } How I learned to appreciate Win8 There is really only one application architecture for Windows 8 apps: Metro client side and Azure backend – and that is a good thing. With WNS, tier integration is so automated that you don’t even have to leverage a HTTP push API such as SignalR. This is a pretty powerful development paradigm, and has changed the way I look at Windows 8 for RAD business apps. When I originally looked at Win8 and the WinRT API, my first opinion on Win8 dev was as follows – GOOD:WinRT, WRL, C++/CX, WinJS, XAML (& ease of Direct3D integration); BAD: low projected market penetration,.NET lobotomized (Only 8% of .NET 4.5 classes can be used in Win8 non-desktop apps - http://bit.ly/HRuJr7); UGLY:Metro pascal tiles! Perhaps my 80s teenage years gave me a punk reactionary sense of revulsion towards the Partridge Family 70s style that Metro UX seems to have appropriated: On second thought though, it simplifies UI dev to a single paradigm (although UX guys will need to change career) – you will not find an easier app dev environment. Speculation: If LightSwitch is going to support HTML5 client app generation, then its a safe guess to say that vnext will support Win8 Metro XAML - a much easier port from Silverlight XAML. 
Given the VS2012 LightSwitch integration as a thumbs up from the powers that be at MS, and given that Win8 C#/XAML Metro apps tend towards a streamlined 'golden straight-jacket' cookie cutter app dev style with an Azure back-end supporting Win8 push notifications... --> its easy to extrapolate than LightSwitch vnext could well be the Win8 Metro XAML to Azure RAD tool of choice! The hook is already there - :) Why else have the space next to the HTML Client box? This high level of application development abstraction will facilitate rapid app cookie-cutter architecture-infrastructure frameworks for wrapping any app. This will allow me to avoid too much XAML code-monkeying around & focus on my area of interest: Technical Computing.

    Read the article

  • MySQL query, 2 similar servers, 2 minute difference in execution times

    - by mr12086
    I had a similar question on stack overflow, but it seems to be more server/mysql setup related than coding. The queries below all execute instantly on our development server where as they can take upto 2 minutes 20 seconds. The query execution time seems to be affected by home ambiguous the LIKE string's are. If they closely match a country that has few matches it will take less time, and if you use something like 'ge' for germany - it will take longer to execute. But this doesn't always work out like that, at times its quite erratic. Sending data appears to be the culprit but why and what does that mean. Also memory on production looks to be quite low (free memory)? Production: Intel Quad Xeon E3-1220 3.1GHz 4GB DDR3 2x 1TB SATA in RAID1 Network speed 100Mb Ubuntu Development Intel Core i3-2100, 2C/4T, 3.10GHz 500 GB SATA - No RAID 4GB DDR3 UPDATE 2 : mysqltuner output: [prod] -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.61-0ubuntu0.10.04.1 [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 103M (Tables: 180) [--] Data in InnoDB tables: 491M (Tables: 19) [!!] Total fragmented tables: 38 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 77d 4h 6m 1s (53M q [7.968 qps], 14M conn, TX: 87B, RX: 12B) [--] Reads / Writes: 98% / 2% [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads) [OK] Maximum possible memory usage: 463.8M (11% of installed RAM) [OK] Slow queries: 0% (12K/53M) [OK] Highest usage of available connections: 22% (34/151) [OK] Key buffer size / total MyISAM indexes: 16.0M/10.6M [OK] Key buffer hit rate: 98.7% (162M cached / 2M reads) [OK] Query cache efficiency: 20.7% (7M cached / 36M selects) [!!] Query cache prunes per day: 3934 [OK] Sorts requiring temporary tables: 1% (3K temp sorts / 230K sorts) [!!] Joins performed without indexes: 71068 [OK] Temporary tables created on disk: 24% (3M on disk / 13M total) [OK] Thread cache hit rate: 99% (690 created / 14M connections) [!!] Table cache hit rate: 0% (64 open / 85M opened) [OK] Open file limit used: 12% (128/1K) [OK] Table locks acquired immediately: 99% (16M immediate / 16M locks) [!!] InnoDB data size / buffer pool: 491.9M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Enable the slow query log to troubleshoot bad queries Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Variables to adjust: query_cache_size (> 16M) join_buffer_size (> 128.0K, or always use indexes with joins) table_cache (> 64) innodb_buffer_pool_size (>= 491M) [dev] -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.62-0ubuntu0.11.10.1 [!!] 
Switch to 64-bit OS - MySQL cannot currently use all of your RAM -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 185M (Tables: 632) [--] Data in InnoDB tables: 967M (Tables: 38) [!!] Total fragmented tables: 73 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 1d 2h 26m 9s (5K q [0.058 qps], 1K conn, TX: 4M, RX: 1M) [--] Reads / Writes: 99% / 1% [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads) [OK] Maximum possible memory usage: 463.8M (11% of installed RAM) [OK] Slow queries: 0% (0/5K) [OK] Highest usage of available connections: 1% (2/151) [OK] Key buffer size / total MyISAM indexes: 16.0M/18.6M [OK] Key buffer hit rate: 99.9% (60K cached / 36 reads) [OK] Query cache efficiency: 44.5% (1K cached / 2K selects) [OK] Query cache prunes per day: 0 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 44 sorts) [OK] Temporary tables created on disk: 24% (162 on disk / 666 total) [OK] Thread cache hit rate: 99% (2 created / 1K connections) [!!] Table cache hit rate: 1% (64 open / 4K opened) [OK] Open file limit used: 8% (88/1K) [OK] Table locks acquired immediately: 100% (1K immediate / 1K locks) [!!] InnoDB data size / buffer pool: 967.7M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Enable the slow query log to troubleshoot bad queries Increase table_cache gradually to avoid file descriptor limits Variables to adjust: table_cache (> 64) innodb_buffer_pool_size (>= 967M) UPDATE 1: When testing the queries listed here there is usually no more than one other query taking place, and usually none. Because production is actually handling apache requests that development gets very few of as it's only myself and 1 other who accesses it - could the 4GB of RAM be getting exhausted by using the single machine for both apache and mysql server? Production: sudo hdparm -tT /dev/sda /dev/sda: Timing cached reads: 24872 MB in 2.00 seconds = 12450.72 MB/sec Timing buffered disk reads: 368 MB in 3.00 seconds = 122.49 MB/sec sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 24786 MB in 2.00 seconds = 12407.22 MB/sec Timing buffered disk reads: 350 MB in 3.00 seconds = 116.53 MB/sec Server version(mysql + ubuntu versions): 5.1.61-0ubuntu0.10.04.1 Development: sudo hdparm -tT /dev/sda /dev/sda: Timing cached reads: 10632 MB in 2.00 seconds = 5319.40 MB/sec Timing buffered disk reads: 400 MB in 3.01 seconds = 132.85 MB/sec Server version(mysql + ubuntu versions): 5.1.62-0ubuntu0.11.10.1 ORIGINAL DATA : This query is NOT the query in question but is related so ill post it. 
SELECT f.form_question_has_answer_id FROM form_question_has_answer f INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29') AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%') AND f.form_question_has_answer_form_id = '174' And the explain plan for the above query is, run on both dev and production produce the same plan. +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ | 1 | SIMPLE | p2 | const | PRIMARY | PRIMARY | 4 | const | 1 | Using index | | 1 | SIMPLE | f | ref | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | const | 796 | Using where | | 1 | SIMPLE | u | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using index | | 1 | SIMPLE | p | ref | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where | | 1 | SIMPLE | f2 | ref | form_project_id | form_project_id | 4 | const | 15 | Using where | | 1 | SIMPLE | c | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ This query takes 2 minutes ~20 seconds to execute. 
The query that is ACTUALLY being run on the server is this one: SELECT COUNT(*) AS num_results FROM (SELECT f.form_question_has_answer_id FROM form_question_has_answer f INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29') AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%') AND f.form_question_has_answer_form_id = '174' GROUP BY f.form_question_has_answer_id;) dctrn_count_query; With explain plans (again same on dev and production): +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ | 1 | PRIMARY | NULL | NULL | NULL | NULL | NULL | NULL | NULL | Select tables optimized away | | 2 | DERIVED | p2 | const | PRIMARY | PRIMARY | 4 | | 1 | Using index | | 2 | DERIVED | f | ref | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | | 797 | Using where | | 2 | DERIVED | p | ref | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id,project_company_has_user_garbage_collection | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where | | 2 | DERIVED | f2 | ref | form_project_id | form_project_id | 4 | | 15 | Using where | | 2 | DERIVED | c | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where | | 2 | DERIVED | u | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_user_id | 1 | Using where; Using index | +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ On the production server the information I have is as follows. 
Upon execution: +-------------+ | num_results | +-------------+ | 3 | +-------------+ 1 row in set (2 min 14.28 sec) Show profile: +--------------------------------+------------+ | Status | Duration | +--------------------------------+------------+ | starting | 0.000016 | | checking query cache for query | 0.000057 | | Opening tables | 0.004388 | | System lock | 0.000003 | | Table lock | 0.000036 | | init | 0.000030 | | optimizing | 0.000016 | | statistics | 0.000111 | | preparing | 0.000022 | | executing | 0.000004 | | Sorting result | 0.000002 | | Sending data | 136.213836 | | end | 0.000007 | | query end | 0.000002 | | freeing items | 0.004273 | | storing result in query cache | 0.000010 | | logging slow query | 0.000001 | | logging slow query | 0.000002 | | cleaning up | 0.000002 | +--------------------------------+------------+ On development the results are as follows. +-------------+ | num_results | +-------------+ | 3 | +-------------+ 1 row in set (0.08 sec) Again the profile for this query: +--------------------------------+----------+ | Status | Duration | +--------------------------------+----------+ | starting | 0.000022 | | checking query cache for query | 0.000148 | | Opening tables | 0.000025 | | System lock | 0.000008 | | Table lock | 0.000101 | | optimizing | 0.000035 | | statistics | 0.001019 | | preparing | 0.000047 | | executing | 0.000008 | | Sorting result | 0.000005 | | Sending data | 0.086565 | | init | 0.000015 | | optimizing | 0.000006 | | executing | 0.000020 | | end | 0.000004 | | query end | 0.000004 | | freeing items | 0.000028 | | storing result in query cache | 0.000005 | | removing tmp table | 0.000008 | | closing tables | 0.000008 | | logging slow query | 0.000002 | | cleaning up | 0.000005 | +--------------------------------+----------+ If i remove user and/or project innerjoins the query is reduced to 30s. Last bit of information I have: Mysqlserver and Apache are on the same box, there is only one box for production. Production output from top: before & after. top - 15:43:25 up 78 days, 12:11, 4 users, load average: 1.42, 0.99, 0.78 Tasks: 162 total, 2 running, 160 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 50.4%sy, 0.0%ni, 49.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3772580k used, 265288k free, 243704k buffers Swap: 3905528k total, 265384k used, 3640144k free, 1207944k cached top - 15:44:31 up 78 days, 12:13, 4 users, load average: 1.94, 1.23, 0.87 Tasks: 160 total, 2 running, 157 sleeping, 0 stopped, 1 zombie Cpu(s): 0.2%us, 50.6%sy, 0.0%ni, 49.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3834300k used, 203568k free, 243736k buffers Swap: 3905528k total, 265384k used, 3640144k free, 1207804k cached But this isn't a good representation of production's normal status so here is a grab of it from today outside of executing the queries. top - 11:04:58 up 79 days, 7:33, 4 users, load average: 0.39, 0.58, 0.76 Tasks: 156 total, 1 running, 155 sleeping, 0 stopped, 0 zombie Cpu(s): 3.3%us, 2.8%sy, 0.0%ni, 93.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3676136k used, 361732k free, 271480k buffers Swap: 3905528k total, 268736k used, 3636792k free, 1063432k cached Development: This one doesn't change during or after. 
top - 15:47:07 up 110 days, 22:11, 7 users, load average: 0.17, 0.07, 0.06 Tasks: 210 total, 2 running, 208 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4111972k total, 1821100k used, 2290872k free, 238860k buffers Swap: 4183036k total, 66472k used, 4116564k free, 921072k cached

    Read the article
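
    Not a diagnosis of the two-minute "Sending data" phase above, but both mysqltuner reports flag the same low-hanging fruit: an 8 MB InnoDB buffer pool for roughly 0.5-1 GB of InnoDB data and a tiny table cache. A sketch of applying those suggestions on Ubuntu (values are examples taken from the tuner output; adjust to the 4 GB of RAM and restart MySQL afterwards):

        # Drop the suggested settings into a separate config fragment
        printf '[mysqld]\ninnodb_buffer_pool_size = 512M\ntable_cache = 512\nquery_cache_size = 32M\njoin_buffer_size = 256K\n' \
            | sudo tee /etc/mysql/conf.d/tuning.cnf

        sudo service mysql restart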

  • Load balancing with multiple gateways

    - by ttouch
    I have to different ISPs, each on each own network. The main connects via ethernet and the secondary via wifi. The two networks have no relation at all. I just connect to them simultaneously. The reason I want to load balance between them is to achieve higher Internet speeds. Note: I have no advanced network hardware. Just my pc and the two routers that I have no access... main network: if: eth0 gw: 192.168.178.1 my ip: 192.168.178.95 speed: 400 kbit/s secondary network: if: wlan0 gw: 192.168.1.1 my ip: 192.168.1.95 speed: 300 kbit/s A diagram to explain the situation: http://i.imgur.com/NZdsv.jpg I'm on Arch Linux x64. I use netcfg to configure the interfaces Configs: # /etc/network.d/main CONNECTION='ethernet' DESCRIPTION='A basic static ethernet connection using iproute' INTERFACE='eth0' IP='static' ADDR='192.168.178.95' # /etc/network.d/second CONNECTION='wireless' DESCRIPTION='A simple WEP encrypted wireless connection' INTERFACE='wlan0' SECURITY='wep' ESSID='wifi_essid' KEY='the_password' IP="static" ADDR='192.168.1.95' And I use iptables to load balance, rules: #!/bin/bash /usr/sbin/ip route flush table ISP1 2>/dev/null /usr/sbin/ip rule del fwmark 101 table ISP1 2>/dev/null /usr/sbin/ip route add table ISP1 192.168.178.0/24 dev eth0 proto kernel scope link src 192.168.178.95 metric 202 /usr/sbin/ip route add table ISP1 default via 192.168.178.1 dev eth0 /usr/sbin/ip rule add fwmark 101 table ISP1 /usr/sbin/ip route flush table ISP2 2>/dev/null /usr/sbin/ip rule del fwmark 102 table ISP2 2>/dev/null /usr/sbin/ip route add table ISP2 192.168.1.0/24 dev wlan0 proto kernel scope link src 192.168.1.95 metric 202 /usr/sbin/ip route add table ISP2 default via 192.168.1.1 dev wlan0 /usr/sbin/ip rule add fwmark 102 table ISP2 /usr/sbin/iptables -t mangle -F /usr/sbin/iptables -t mangle -X /usr/sbin/iptables -t mangle -N MARK-gw1 /usr/sbin/iptables -t mangle -A MARK-gw1 -m comment --comment 'send via 192.168.178.1' -j MARK --set-mark 101 /usr/sbin/iptables -t mangle -A MARK-gw1 -j CONNMARK --save-mark /usr/sbin/iptables -t mangle -A MARK-gw1 -j RETURN /usr/sbin/iptables -t mangle -N MARK-gw2 /usr/sbin/iptables -t mangle -A MARK-gw2 -m comment --comment 'send via 192.168.1.1' -j MARK --set-mark 102 /usr/sbin/iptables -t mangle -A MARK-gw2 -j CONNMARK --save-mark /usr/sbin/iptables -t mangle -A MARK-gw2 -j RETURN /usr/sbin/iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark /usr/sbin/iptables -t mangle -A PREROUTING -m comment --comment "this stream is already marked; escape early" -m mark ! 
--mark 0 -j ACCEPT /usr/sbin/iptables -t mangle -A PREROUTING -m comment --comment 'prevent asynchronous routing' -i eth0 -m conntrack --ctstate NEW -j MARK-gw1 /usr/sbin/iptables -t mangle -A PREROUTING -m comment --comment 'prevent asynchronous routing' -i wlan0 -m conntrack --ctstate NEW -j MARK-gw2 /usr/sbin/iptables -t mangle -N DEF_POL /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'default balancing' -p tcp -m conntrack --ctstate ESTABLISHED,RELATED -j CONNMARK --restore-mark /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'default balancing' -p udp -m conntrack --ctstate ESTABLISHED,RELATED -j CONNMARK --restore-mark /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw1 tcp' -p tcp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 0 -j MARK-gw1 /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw1 tcp' -p tcp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 0 -j ACCEPT /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw2 tcp' -p tcp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 1 -j MARK-gw2 /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw2 tcp' -p tcp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 1 -j ACCEPT /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw1 udp' -p udp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 0 -j MARK-gw1 /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw1 udp' -p udp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 0 -j ACCEPT /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw2 udp' -p udp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 1 -j MARK-gw2 /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw2 udp' -p udp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 1 -j ACCEPT /usr/sbin/iptables -t mangle -A PREROUTING -j DEF_POL /usr/sbin/iptables -t nat -A POSTROUTING -m comment --comment 'snat outbound eth0' -o eth0 -s 192.168.0.0/16 -m mark --mark 101 -j SNAT --to-source 192.168.178.95 /usr/sbin/iptables -t nat -A POSTROUTING -m comment --comment 'snat outbound wlan0' -o wlan0 -s 192.168.0.0/16 -m mark --mark 102 -j SNAT --to-source 192.168.1.95 /usr/sbin/ip route flush cache (this script was made by fukawi2, I don't know how to use iptables) but I have no Internet connection... output of iptables -t mangle -nvL Chain PREROUTING (policy ACCEPT 1254K packets, 1519M bytes) pkts bytes target prot opt in out source destination 1278K 1535M CONNMARK all -- * * 0.0.0.0/0 0.0.0.0/0 CONNMARK restore 21532 15M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 /* this stream is already marked; escape early */ mark match ! 
0x0 582 72579 MARK-gw1 all -- eth0 * 0.0.0.0/0 0.0.0.0/0 /* prevent asynchronous routing */ ctstate NEW 2376 696K MARK-gw2 all -- wlan0 * 0.0.0.0/0 0.0.0.0/0 /* prevent asynchronous routing */ ctstate NEW 1257K 1520M DEF_POL all -- * * 0.0.0.0/0 0.0.0.0/0 Chain INPUT (policy ACCEPT 1276K packets, 1535M bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 870K packets, 97M bytes) pkts bytes target prot opt in out source destination Chain POSTROUTING (policy ACCEPT 870K packets, 97M bytes) pkts bytes target prot opt in out source destination Chain DEF_POL (1 references) pkts bytes target prot opt in out source destination 1236K 1517M CONNMARK tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default balancing */ ctstate RELATED,ESTABLISHED CONNMARK restore 15163 2041K CONNMARK udp -- * * 0.0.0.0/0 0.0.0.0/0 /* default balancing */ ctstate RELATED,ESTABLISHED CONNMARK restore 555 33176 MARK-gw1 tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw1 tcp */ ctstate NEW statistic mode nth every 2 555 33176 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw1 tcp */ ctstate NEW statistic mode nth every 2 277 16516 MARK-gw2 tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw2 tcp */ ctstate NEW statistic mode nth every 2 packet 1 277 16516 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw2 tcp */ ctstate NEW statistic mode nth every 2 packet 1 1442 384K MARK-gw1 udp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw1 udp */ ctstate NEW statistic mode nth every 2 1442 384K ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw1 udp */ ctstate NEW statistic mode nth every 2 720 189K MARK-gw2 udp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw2 udp */ ctstate NEW statistic mode nth every 2 packet 1 720 189K ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw2 udp */ ctstate NEW statistic mode nth every 2 packet 1 Chain MARK-gw1 (3 references) pkts bytes target prot opt in out source destination 2579 490K MARK all -- * * 0.0.0.0/0 0.0.0.0/0 /* send via 192.168.178.1 */ MARK set 0x65 2579 490K CONNMARK all -- * * 0.0.0.0/0 0.0.0.0/0 CONNMARK save 2579 490K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 Chain MARK-gw2 (3 references) pkts bytes target prot opt in out source destination 3373 901K MARK all -- * * 0.0.0.0/0 0.0.0.0/0 /* send via 192.168.1.1 */ MARK set 0x66 3373 901K CONNMARK all -- * * 0.0.0.0/0 0.0.0.0/0 CONNMARK save 3373 901K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0

    Read the article

  • Another sound not working post

    - by Thomas Smart
    Tried all the other "sound not working" posts i think, lost count. purge/reinstall alsa and pulse, reboot, add user to audio group, various lines in the alsa config file such as "options snd-hda-intel model=" then tried different options like generic, auto, basic, default, etc. tried pulseaudio -k && sudo alsa force-reload a few times, with and without rebooting. Hardware: 16gb ram, core I7-4790, Intel Haswell mboard with onboard sound and graphics Multimedia: Audio Adapter: HDA-Intel-HDA Intel HDMI OS: Ubuntu server 14.04 with ubuntu-desktop installed. GUI sound settings lists only the dummy sound card alsamixer -c 0 ¦ Card: HDA Intel HDMI F1: Help ¦ ¦ Chip: Intel Haswell HDMI F2: System information ¦ ¦ View: F3:[Playback] F4: Capture F5: All F6: Select sound card ¦ ¦ Item: S/PDIF ¦ ¦ +--+ ¦ ¦ ¦OO¦ ¦ ¦ +--+ ¦ ¦ < S/PDIF > ¦ aplay -l **** List of PLAYBACK Hardware Devices **** card 0: HDMI [HDA Intel HDMI], device 3: HDMI 0 [HDMI 0] Subdevices: 1/1 Subdevice #0: subdevice #0 aplay -L default Playback/recording through the PulseAudio sound server null Discard all samples (playback) or generate zero samples (capture) pulse PulseAudio Sound Server hdmi:CARD=HDMI,DEV=0 HDA Intel HDMI, HDMI 0 HDMI Audio Output dmix:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 Direct sample mixing device dsnoop:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 Direct sample snooping device hw:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 Direct hardware device without any conversions plughw:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 Hardware device with all software conversions cat /proc/asound/cards 0 [HDMI ]: HDA-Intel - HDA Intel HDMI HDA Intel HDMI at 0xf7d14000 irq 46 cat /proc/asound/devices 1: : sequencer 2: [ 0- 3]: digital audio playback 3: [ 0- 0]: hardware dependent 4: [ 0] : control 33: : timer mplayer -ao alsa:device=hdmi /usr/share/sounds/ubuntu/stereo/system-ready.ogg MPlayer 1.1-4.8 (C) 2000-2012 MPlayer Team mplayer: could not connect to socket mplayer: No such file or directory Failed to open LIRC support. You will not be able to use your remote control. Playing /usr/share/sounds/ubuntu/stereo/system-ready.ogg. libavformat version 54.20.4 (external) Mismatching header version 54.20.3 libavformat file format detected. 
[lavf] stream 0: audio (vorbis), -aid 0 Load subtitles in /usr/share/sounds/ubuntu/stereo/ ========================================================================== Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders libavcodec version 54.35.0 (external) AUDIO: 44100 Hz, 1 ch, floatle, 80.0 kbit/5.67% (ratio: 10000->176400) Selected audio codec: [ffvorbis] afm: ffmpeg (FFmpeg Vorbis) ========================================================================== [AO_ALSA] alsa-lib: confmisc.c:768:(parse_card) cannot find card '1' [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_card_driver returned error: No such file or directory [AO_ALSA] alsa-lib: confmisc.c:392:(snd_func_concat) error evaluating strings [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory [AO_ALSA] alsa-lib: confmisc.c:1251:(snd_func_refer) error evaluating name [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory [AO_ALSA] alsa-lib: conf.c:4727:(snd_config_expand) Evaluate error: No such file or directory [AO_ALSA] alsa-lib: pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM hdmi [AO_ALSA] Playback open error: No such file or directory Failed to initialize audio driver 'alsa:device=hdmi' Could not open/initialize audio device -> no sound. Audio: no sound Video: no video Exiting... (End of file) mplayer -ao alsa:device=hw=0.3 /usr/share/sounds/ubuntu/stereo/system-ready.ogg MPlayer 1.1-4.8 (C) 2000-2012 MPlayer Team mplayer: could not connect to socket mplayer: No such file or directory Failed to open LIRC support. You will not be able to use your remote control. Playing /usr/share/sounds/ubuntu/stereo/system-ready.ogg. libavformat version 54.20.4 (external) Mismatching header version 54.20.3 libavformat file format detected. [lavf] stream 0: audio (vorbis), -aid 0 Load subtitles in /usr/share/sounds/ubuntu/stereo/ ========================================================================== Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders libavcodec version 54.35.0 (external) AUDIO: 44100 Hz, 1 ch, floatle, 80.0 kbit/5.67% (ratio: 10000->176400) Selected audio codec: [ffvorbis] afm: ffmpeg (FFmpeg Vorbis) ========================================================================== [AO_ALSA] Format floatle is not supported by hardware, trying default. AO: [alsa] 44100Hz 2ch s16le (2 bytes per sample) Video: no video Starting playback... A: 0.4 (00.4) of 0.8 (00.7) 0.1% Exiting... (End of file) Thank you for your time and help :)
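    One thing stands out in the output above: the only playback device ALSA detects is the HDMI output at card 0, device 3, and the second mplayer run (device=hw=0.3) did reach the playback stage. A minimal sketch, assuming sound is supposed to come out over HDMI (no analog codec shows up at all), is to point ALSA's default PCM at that HDMI device via /etc/asound.conf or ~/.asoundrc:
        pcm.!default {
            type plug
            slave.pcm "hw:0,3"
        }
        ctl.!default {
            type hw
            card 0
        }
    After restarting PulseAudio (pulseaudio -k) or logging out and back in, something like speaker-test -c 2 -D default should show whether audio reaches the HDMI sink; if output is expected from analog jacks instead, this will not help, because ALSA simply is not detecting an analog codec on this board.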

    Read the article

  • Integrated webcam in lenovo t410 not working with 12.04

    - by kristianp
    I have a Lenovo T410 with an inbuilt webcam and I haven't been able to get the webcam working. I tried Skype and Cheese; both just give me a black window. The microphone works fine with Skype, by the way. Can anyone provide any clues, please? The webcam is enabled in the BIOS, but there is no light indicating the webcam is on (not sure if there should be, though). I tried this on Kubuntu 11.10 and have upgraded to 12.04 with the same results. The Fn-F6 keyboard combination doesn't seem to do anything either. EDIT: I got the webcam replaced, it looks like it was a hardware problem, because it works fine now. Thanks guys. $ ls /dev/v4l/* /dev/v4l/by-id: usb-Chicony_Electronics_Co.__Ltd._Integrated_Camera-video-index0 /dev/v4l/by-path: pci-0000:00:1a.0-usb-0:1.6:1.0-video-index0 And lsusb: $ lsusb Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 003: ID 147e:2016 Upek Biometric Touchchip/Touchstrip Fingerprint Sensor Bus 001 Device 004: ID 0a5c:217f Broadcom Corp. Bluetooth Controller Bus 001 Device 005: ID 17ef:480f Lenovo Integrated Webcam [R5U877] Bus 002 Device 003: ID 05c6:9204 Qualcomm, Inc. Bus 002 Device 004: ID 17ef:1003 Lenovo Integrated Smart Card Reader Here is the output from guvcview, minus lots of lines describing the available capture formats. It says "unable to start with minimum setup. Please reconnect your camera.". guvcview 1.5.3 ALSA lib pcm_dmix.c:1018:(snd_pcm_dmix_open) unable to open slave ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5) ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5) ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5) ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5) ALSA lib pcm_dmix.c:957:(snd_pcm_dmix_open) The dmix plugin supports only playback stream ALSA lib pcm_dmix.c:1018:(snd_pcm_dmix_open) unable to open slave Cannot connect to server socket err = No such file or directory Cannot connect to server socket jack server is not running or cannot be started video device: /dev/video0 Init. Integrated Camera (location: usb-0000:00:1a.0-1.6) { pixelformat = 'YUYV', description = 'YUV 4:2:2 (YUYV)' } { discrete: width = 640, height = 480 } Time interval between frame: 1/30, .... { discrete: width = 1600, height = 1200 } Time interval between frame: 1/15, vid:17ef pid:480f driver:uvcvideo checking format: 1196444237 libv4l2: error setting pixformat: Device or resource busy VIDIOC_S_FORMAT - Unable to set format: Device or resource busy Init v4L2 failed !! Init video returned -2 trying minimum setup ... video device: /dev/video0 Init. Integrated Camera (location: usb-0000:00:1a.0-1.6) { pixelformat = 'YUYV', description = 'YUV 4:2:2 (YUYV)' } { discrete: width = 640, height = 480 } ....
vid:17ef pid:480f driver:uvcvideo checking format: 1448695129 libv4l2: error setting pixformat: Device or resource busy VIDIOC_S_FORMAT - Unable to set format: Device or resource busy Init v4L2 failed !! ERROR: Minimum Setup Failed. Exiting... VIDIOC_REQBUFS - Failed to delete buffers: Invalid argument (errno 22) cleaned allocations - 100% Closing portaudio ...OK Terminated.
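    (The EDIT above says the camera turned out to be a hardware fault, so this is only a note for anyone else who hits the same errors: "Device or resource busy" from libv4l2/guvcview usually means another program still has /dev/video0 open. A quick check, as a sketch:)
        fuser -v /dev/video0          # or: lsof /dev/video0
        # if a process shows up (e.g. a Skype instance left running), close it and retry cheese/guvcview
        sudo modprobe -r uvcvideo && sudo modprobe uvcvideo    # reload the driver
        dmesg | tail                  # look for uvcvideo errors after the reload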

    Read the article

  • Set up tunnel to HE.net and now only ipv6.google.com works, but other sites ping fine.

    - by AndrejaKo
    I'm setting up IPv6 using my router which is running OpenWRT, version Backfire 10.03.1-rc4. I made a tunnel using Hurricane Electric's tunnel broker and set it up on the router and I'm using RADVD to hand out IPv6 addresses. My problem is that on computers on the network, I can only access ipv6.google.com using a browser, but other sites seem to be loading forever and won't open in any browser. I can ping and traceroute to them fine, but can't open them with a browser. I can open any site normally with a browser from the router. Stopping firewall service on the router doesn't help, so it's probably not a firewall issue. All AAAA records resolve fine, so it's probably not a DNS issue. Computers on the network get their IPv6 addresses fine, so it's probably not a radvd issue. Similar setup worked fine for SixXs, but I'm having problems with my PoP there, so I decided to move to HE. Here are some traceroutes: From a client computer: Tracing route to ipv6.he.net [2001:470:0:64::2] over a maximum of 30 hops: 1 <1 ms 1 ms 1 ms 2001:470:1f0b:de5::1 2 62 ms 63 ms 62 ms andrejako-1.tunnel.tserv6.fra1.ipv6.he.net [2001:470:1f0a:de5::1] 3 60 ms 60 ms 63 ms gige-g2-4.core1.fra1.he.net [2001:470:0:69::1] 4 63 ms 68 ms 68 ms 10gigabitethernet1-4.core1.ams1.he.net [2001:470:0:47::1] 5 84 ms 74 ms 76 ms 10gigabitethernet1-4.core1.lon1.he.net [2001:470:0:3f::1] 6 146 ms 147 ms 151 ms 10gigabitethernet4-4.core1.nyc4.he.net [2001:470:0:128::1] 7 200 ms 198 ms 202 ms 10gigabitethernet5-3.core1.lax1.he.net [2001:470:0:10e::1] 8 219 ms * 210 ms 10gigabitethernet2-2.core1.fmt2.he.net [2001:470:0:18d::1] 9 221 ms 338 ms 209 ms gige-g4-18.core1.fmt1.he.net [2001:470:0:2d::1] 10 206 ms 210 ms 207 ms ipv6.he.net [2001:470:0:64::2] Trace complete. and another from a cliet computer Tracing route to whatismyipv6.com [2001:4870:a24f:2::90] over a maximum of 30 hops: 1 7 ms 1 ms 1 ms 2001:470:1f0b:de5::1 2 69 ms 70 ms 63 ms AndrejaKo-1.tunnel.tserv6.fra1.ipv6.he.net [2001:470:1f0a:de5::1] 3 57 ms 65 ms 58 ms gige-g2-4.core1.fra1.he.net [2001:470:0:69::1] 4 73 ms 74 ms 75 ms 10gigabitethernet1-4.core1.ams1.he.net [2001:470:0:47::1] 5 71 ms 74 ms 76 ms 10gigabitethernet1-4.core1.lon1.he.net [2001:470:0:3f::1] 6 141 ms 149 ms 148 ms 10gigabitethernet2-3.core1.nyc4.he.net [2001:470:0:3e::1] 7 141 ms 147 ms 143 ms 10gigabitethernet1-2.core1.nyc1.he.net [2001:470:0:37::2] 8 144 ms 145 ms 142 ms 2001:504:1::a500:4323:1 9 226 ms 225 ms 218 ms 2001:4870:a240::2 10 220 ms 224 ms 219 ms 2001:4870:a240::2 11 219 ms 218 ms 220 ms 2001:4870:a24f::2 12 221 ms 222 ms 220 ms www.whatismyipv6.com [2001:4870:a24f:2::90] Trace complete. 
Here's some firewall info on the router: root@OpenWrt:/# iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 syn_flood tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 input_rule all -- 0.0.0.0/0 0.0.0.0/0 input all -- 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy DROP) target prot opt source destination zone_wan_MSSFIX all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED forwarding_rule all -- 0.0.0.0/0 0.0.0.0/0 forward all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 output_rule all -- 0.0.0.0/0 0.0.0.0/0 output all -- 0.0.0.0/0 0.0.0.0/0 Chain forward (1 references) target prot opt source destination zone_lan_forward all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_forward all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_forward all -- 0.0.0.0/0 0.0.0.0/0 Chain forwarding_lan (1 references) target prot opt source destination Chain forwarding_rule (1 references) target prot opt source destination nat_reflection_fwd all -- 0.0.0.0/0 0.0.0.0/0 Chain forwarding_wan (1 references) target prot opt source destination Chain input (1 references) target prot opt source destination zone_lan all -- 0.0.0.0/0 0.0.0.0/0 zone_wan all -- 0.0.0.0/0 0.0.0.0/0 zone_wan all -- 0.0.0.0/0 0.0.0.0/0 Chain input_lan (1 references) target prot opt source destination Chain input_rule (1 references) target prot opt source destination Chain input_wan (1 references) target prot opt source destination Chain nat_reflection_fwd (1 references) target prot opt source destination ACCEPT tcp -- 192.168.1.0/24 192.168.1.2 tcp dpt:80 Chain output (1 references) target prot opt source destination zone_lan_ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 Chain output_rule (1 references) target prot opt source destination Chain reject (7 references) target prot opt source destination REJECT tcp -- 0.0.0.0/0 0.0.0.0/0 reject-with tcp-reset REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable Chain syn_flood (1 references) target prot opt source destination RETURN tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 limit: avg 25/sec burst 50 DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan (1 references) target prot opt source destination input_lan all -- 0.0.0.0/0 0.0.0.0/0 zone_lan_ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan_ACCEPT (2 references) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan_DROP (0 references) target prot opt source destination DROP all -- 0.0.0.0/0 0.0.0.0/0 DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan_MSSFIX (0 references) target prot opt source destination TCPMSS tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU Chain zone_lan_REJECT (1 references) target prot opt source destination reject all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan_forward (1 references) target prot opt source destination zone_wan_ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 forwarding_lan all -- 0.0.0.0/0 0.0.0.0/0 zone_lan_REJECT all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan (2 references) target prot opt source destination ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:68 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 8 ACCEPT 41 -- 0.0.0.0/0 0.0.0.0/0 input_wan all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_REJECT all 
-- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan_ACCEPT (2 references) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan_DROP (0 references) target prot opt source destination DROP all -- 0.0.0.0/0 0.0.0.0/0 DROP all -- 0.0.0.0/0 0.0.0.0/0 DROP all -- 0.0.0.0/0 0.0.0.0/0 DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan_MSSFIX (1 references) target prot opt source destination TCPMSS tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU TCPMSS tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU Chain zone_wan_REJECT (2 references) target prot opt source destination reject all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan_forward (2 references) target prot opt source destination ACCEPT tcp -- 0.0.0.0/0 192.168.1.2 forwarding_wan all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_REJECT all -- 0.0.0.0/0 0.0.0.0/0 Here's some routing info: root@OpenWrt:/# ip -f inet6 route 2001:470:1f0a:de5::/64 via :: dev 6in4-henet proto kernel metric 256 mtu 1280 advmss 1220 hoplimit 0 2001:470:1f0b:de5::/64 dev br-lan proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 dev eth0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 dev br-lan proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 dev eth0.1 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 dev eth0.2 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 via :: dev 6in4-henet proto kernel metric 256 mtu 1280 advmss 1220 hoplimit 0 default dev 6in4-henet metric 1024 mtu 1280 advmss 1220 hoplimit 0 I have computers running Windows 7 SP1 and openSUSE 11.3 and all of them have the same problem. I also made a thread about this on HE's forum, but it seems that people there are out of ideas about what to do.
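    Given that ping and traceroute to IPv6 hosts succeed while TCP connections stall, a likely suspect with a 1280-byte tunnel MTU is path-MTU/MSS trouble rather than filtering. The zone_wan_MSSFIX chain above clamps MSS only in the IPv4 tables shown; a hedged sketch for the IPv6 side (assuming the tunnel interface is 6in4-henet, as in the routing table, and that no equivalent ip6tables rule already exists) would be:
        ip6tables -t mangle -A FORWARD -o 6in4-henet -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
    Alternatively, radvd can advertise the smaller MTU to the LAN so clients never send oversized packets in the first place, e.g. adding AdvLinkMTU 1280; inside the interface br-lan block of /etc/radvd.conf.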

    Read the article

  • Session Sharing with another User on *NIX and Windows

    - by Giri Mandalika
    Oracle Solaris Since Solaris is not widely known for its graphical interface, let's just focus on sharing a terminal session in read-only mode with another user on the same system. Here is an example. eg., % finger Login Name TTY Idle When Where root Super-User pts/1 Sat 16:57 dhcp-amer-vpn-rmdc-a sunperf ??? pts/2 4 Sat 16:41 pitcher.sfbay.sun.com In this example, two users root and sunperf are connected to the same system from two different terminals pts/1 and pts/2 respectively. If the root user wants to show something to the sunperf user -- what s/he is doing in her/his terminal, for example, it can be accomplished with the following command. script -a /dev/null | tee -a <target_terminal> eg., # script -a /dev/null | tee -a /dev/pts/2 Script started, file is /dev/null # # uptime 5:04pm up 1 day(s), 2:56, 2 users, load average: 0.81, 0.81, 0.81 # # isainfo -v 64-bit sparcv9 applications crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc 32-bit sparc applications crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc v8plus div32 mul32 # # exit Script done, file is /dev/null After the script .. | tee .. command, the sunperf user should be able to see the root user's stdin and stdout contents in her/his own terminal until the script session exits in the root user's terminal. Since this kind of sharing is based on capturing and redirecting the contents to the target terminal, the users on the receiving end won't be able to see whatever is being edited on the initiator's terminal [using editors such as vi]. Also it is not possible to share the session with any connected user on the system unless the initiator has the necessary permissions and privileges. The script utility records everything printed in a terminal session, while the tee utility replicates the contents of the screen capture onto the standard output of the target terminal. The tee utility does not buffer the output, so the screen capture from the initiator's terminal appears almost right away in the target terminal. Though I never tested, this technique may work on all *NIX and Linux flavors with little or no changes. Also there might be other ways to accomplish this. [Thanks to Sujeet for sharing this tip] Microsoft Windows Most Windows users may rely on VNC services to share a desktop session. Another way to share the desktop session is to use the Remote Desktop Connection (RDC) client. Here are the steps. Connect to the target Windows system using the Remote Desktop Connection client Launch Windows Task Manager Navigate to the "Users" tab Find the user session that you want to connect to and have full control over as the other user who is currently holding that session Select the user name in Windows Task Manager, right-click and choose the option "Remote Control" A window pops up on the other user's session with the message "<USER> is requesting to control your session remotely. Do you accept the request?" Once the other user says "Yes", you will be granted access to that session. From then on, both users should be able to see the same screen and even control the session from their respective workstations.
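    For convenience, the Solaris-side technique can be wrapped in a small shell function (an illustrative sketch only; the function name is made up here):
        share_terminal() {
            target="$1"                               # e.g. /dev/pts/2
            [ -w "$target" ] || { echo "cannot write to $target" >&2; return 1; }
            script -a /dev/null | tee -a "$target"    # the same script | tee pipeline described above
        }
        # usage: share_terminal /dev/pts/2
    The write-permission check matters because, as noted above, the initiator needs the necessary permissions on the target terminal for the redirection to work.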

    Read the article

  • X server with nvidia driver crashing: 12.04

    - by Raster
    My X server consistently crashes. It seems like this is happening when the X server is idle. This behaviour is new with 12.04. This is only happening on the second display of a multiseat system. Is there a configuration change I can make to stop this? X.Org X Server 1.11.3 Release Date: 2011-12-16 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.42-26-generic x86_64 Ubuntu Current Operating System: Linux Desktop 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64 Kernel command line: BOOT_IMAGE=/vmlinuz-3.2.0-29-generic root=/dev/mapper/Group1-Root ro ramdisk_size=512000 quiet splash vt.handoff=7 Build Date: 04 August 2012 01:51:23AM xorg-server 2:1.11.4-0ubuntu10.7 (For technical support please see http://www.ubuntu.com/support) Current version of pixman: 0.24.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.1.log", Time: Sun Sep 2 22:37:06 2012 (==) Using config file: "/etc/X11/xorg.conf" (==) Using system config directory "/usr/share/X11/xorg.conf.d" Backtrace: 0: /usr/bin/X (xorg_backtrace+0x26) [0x7fcee86be846] 1: /usr/bin/X (0x7fcee8536000+0x18c6ea) [0x7fcee86c26ea] 2: /lib/x86_64-linux-gnu/libpthread.so.0 (0x7fcee785c000+0xfcb0) [0x7fcee786bcb0] 3: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so (0x7fcee14ba000+0x902a9) [0x7fcee154a2a9] 4: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so (0x7fcee14ba000+0xfd5e7) [0x7fcee15b75e7] 5: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so (0x7fcee14ba000+0x4d6b92) [0x7fcee1990b92] 6: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so (0x7fcee14ba000+0x4d74d5) [0x7fcee19914d5] 7: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so (0x7fcee14ba000+0x4d767d) [0x7fcee199167d] 8: /usr/bin/X (0x7fcee8536000+0x1196ec) [0x7fcee864f6ec] 9: /usr/bin/X (0x7fcee8536000+0xe8ad5) [0x7fcee861ead5] 10: /usr/bin/X (0x7fcee8536000+0xe9d45) [0x7fcee861fd45] /etc/X11/xorg.conf Section "ServerLayout" Identifier "Desktop" Screen 0 "DesktopScreen" 0 0 InputDevice "DesktopMouse" "CorePointer" InputDevice "DesktopKeyboard" "CoreKeyboard" Option "AutoAddDevices" "false" Option "AllowEmptyInput" "true" Option "AutoEnableDevices" "false" EndSection Section "ServerLayout" Identifier "Desktop2" Screen 1 "Desktop2Screen" 0 0 InputDevice "Desktop2Mouse" "CorePointer" InputDevice "Desktop2Keyboard" "CoreKeyboard" Option "AutoAddDevices" "false" Option "AllowEmptyInput" "true" Option "AutoEnableDevices" "false" EndSection Section "Module" Load "dbe" Load "extmod" Load "type1" Load "freetype" Load "glx" EndSection Section "Files" EndSection Section "ServerFlags" Option "AutoAddDevices" "false" Option "AutoEnableDevices" "false" Option "AllowMouseOpenFail" "on" Option "AllowEmptyInput" "on" Option "ZapWarning" "on" Option "HandleSepcialKeys" "off" # Zapping on Option "DRI2" "on" Option "Xinerama" "0" EndSection # Desktop Mouse Section "InputDevice" Identifier "DesktopMouse" Driver "evdev" Option "Device" "/dev/input/event3" Option "Protocol" "auto" Option "GrabDevice" "on" Option "Emulate3Buttons" "no" Option "Buttons" "5" Option "ZAxisMapping" "4 5" Option "SendCoreEvents" "true" EndSection # Desktop2 Mouse Section "InputDevice" Identifier "Desktop2Mouse" Driver "evdev" Option "Device" "/dev/input/event5" Option "Protocol" "auto" 
Option "GrabDevice" "on" Option "Emulate3Buttons" "no" Option "Buttons" "5" Option "ZAxisMapping" "4 5" Option "SendCoreEvents" "true" EndSection Section "InputDevice" Identifier "DesktopKeyboard" Driver "evdev" Option "Device" "/dev/input/event4" Option "XkbRules" "xorg" Option "XkbModel" "105" Option "XkbLayout" "us" Option "Protocol" "Standard" Option "GrabDevice" "on" EndSection Section "InputDevice" Identifier "Desktop2Keyboard" Driver "evdev" Option "Device" "/dev/input/event6" Option "XkbRules" "xorg" Option "XkbModel" "105" Option "XkbLayout" "us" Option "Protocol" "Standard" Option "GrabDevice" "on" EndSection Section "Monitor" Identifier "Desktop2Monitor" VendorName "Acer" ModelName "Acer G235H" HorizSync 30.0 - 83.0 VertRefresh 56.0 - 75.0 Option "DPMS" EndSection Section "Monitor" Identifier "DesktopMonitor" VendorName "Acer" ModelName "Acer H213H" HorizSync 30.0 - 83.0 VertRefresh 56.0 - 75.0 Option "DPMS" EndSection Section "Device" Identifier "EVGACard" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 560 Ti" Option "Coolbits" "1" BusID "PCI:2:0:0" Screen 0 EndSection Section "Device" Identifier "XFXCard" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 9800" Option "Coolbits" "1" BusID "PCI:5:0:0" Screen 0 EndSection Section "Screen" Identifier "DesktopScreen" Device "EVGACard" Monitor "DesktopMonitor" DefaultDepth 24 SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Desktop2Screen" Device "XFXCard" Monitor "Desktop2Monitor" DefaultDepth 24 SubSection "Display" Depth 24 EndSubSection EndSection
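    Since the crashes happen while the second seat is idle, one low-risk thing to try (a workaround sketch, not a fix; it assumes the second seat runs as display :1, which matches the /var/log/Xorg.1.log entry above, so adjust the display number if needed) is to disable blanking and DPMS on that display so the driver's idle power-management path is never exercised:
        xset -display :1 s off
        xset -display :1 s noblank
        xset -display :1 -dpms
    If the crashes stop, that points at the NVIDIA driver's DPMS handling on the second card and makes a driver upgrade (or a bug report against the packaged driver) the sensible next step.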

    Read the article

  • LitJSON's JsonMapper.ToJson error: Max allowed object depth reached while trying to export from type

    - by dev.e.loper
    I have an object that I would like to convert to JSON inside one of the object's methods, using the LitJson library. Like so: protected override void Render(HtmlTextWriter writer) { ... writer.AddAttribute(HtmlTextWriterAttribute.Value, JsonMapper.ToJson(this)); .... } However, JsonMapper.ToJson(this) produces a server error: "Max allowed object depth reached while trying to export from type System.Drawing.Color". My guess is that because it's trying to convert an object inside itself, it's going into some kind of infinite loop. I'm just curious what is actually happening.
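    What is probably happening (an educated guess, not a statement about LitJson internals): JsonMapper.ToJson walks every public property of the object reflectively, and serializing a whole control pulls in its object graph (parents, child controls, fonts, colors and so on) until LitJson's writer hits its maximum nesting depth; the System.Drawing.Color in the message is simply where the limit happened to trip, not necessarily the root cause. A hedged workaround sketch, using LitJson's JsonMapper.RegisterExporter API (verify the exact signature against your LitJson version), is to register a custom exporter so Color is written as a plain string instead of being walked:
        JsonMapper.RegisterExporter<System.Drawing.Color>(
            (System.Drawing.Color c, JsonWriter w) => w.Write(System.Drawing.ColorTranslator.ToHtml(c)));
    A leaner alternative is to serialize a small DTO containing only the fields the client actually needs, rather than passing the whole control to ToJson.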

    Read the article
