Search Results

Search found 27496 results on 1100 pages for 'distributed source ctrl'.

Page 215/1100

  • Xubuntu 14.04 black screen after monitor off/on

    - by Sampo Smolander
    I have an HP desktop machine running Ubuntu 14.04, with the Xubuntu and Gnome desktops also installed. I mostly use the Xubuntu session. In the Xubuntu session, when I power off my monitor and power it back on, the screen stays black and I cannot do any work. With Ctrl+Alt+F6 I can open another virtual terminal, and there the screen works again, but if I go back to the desktop session with Ctrl+Alt+F7, all I have is a black screen. However, if I log in to Unity (the Ubuntu default session) or to Gnome and turn the monitor off and back on, everything works fine. Likewise, if I log out and, while in the LightDM greeter, turn the monitor off and back on, everything works fine. What could be the cause of this, and how can I fix it?
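    What I can try from the virtual terminal, as far as I understand, is to poke the running X session directly. This is only a diagnostic sketch, not a confirmed fix for this machine; the display number :0 and the output name are assumptions and may differ (check the output of xrandr):
        # from the Ctrl+Alt+F6 terminal, talk to the session showing the black screen
        export DISPLAY=:0
        # force DPMS back on and wake the monitor
        xset dpms force on
        # ask X to re-detect the monitor and re-apply a mode
        xrandr --auto
        # if the output name is known (e.g. HDMI-0), re-applying its mode can also help
        # xrandr --output HDMI-0 --auto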

    Read the article

  • Assign keys to commands in Terminal?

    - by NES
    Is there a way to assign special key combinations to snippets of text when working in the terminal? For example, the less command is very useful and I use it a lot to pipe the output of another process through it. The idea would be to set up key combinations, active only in the terminal, that each write out a different command fragment. So pressing Ctrl+L in a terminal window could type | less, or Ctrl+G could stand for | grep. Note: I just mean appending the letters to the command line, not executing the command. It would work in a similar way to tab completion, but more specific.
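    A minimal sketch of the kind of thing I mean, assuming bash (i.e. Readline); the chosen keys are only examples, and note that Ctrl+L normally clears the screen, so rebinding it loses that default:
        # try it in the current shell first: a Readline macro inserts literal text
        bind '"\C-g": " | grep "'
        bind '"\C-l": " | less"'

        # make it permanent for Readline via ~/.inputrc (or keep the bind lines in ~/.bashrc)
        $if Bash
        "\C-g": " | grep "
        "\C-l": " | less"
        $endif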

    Read the article

  • CodePlex Daily Summary for Tuesday, August 05, 2014

    CodePlex Daily Summary for Tuesday, August 05, 2014Popular ReleasesGMare: GMare Beta 1.3: Fixed: Sprites not being read inLib.Web.Mvc & Yet another developer blog: Lib.Web.Mvc 6.4.2: Lib.Web.Mvc is a library which contains some helper classes for ASP.NET MVC such as strongly typed jqGrid helper, XSL transformation HtmlHelper/ActionResult, FileResult with range request support, custom attributes and more. Release contains: Lib.Web.Mvc.dll with xml documentation file Standalone documentation in chm file and change log Library source code Sample application for strongly typed jqGrid helper is available here. Sample application for XSL transformation HtmlHelper/ActionRe...Virto Commerce Enterprise Open Source eCommerce Platform (asp.net mvc): Virto Commerce 1.11: Virto Commerce Community Edition version 1.11. To install the SDK package, please refer to SDK getting started documentation To configure source code package, please refer to Source code getting started documentation This release includes many bug fixes and minor improvements. More details about this release can be found on our blog at http://blog.virtocommerce.com.Json.NET: Json.NET 6.0 Release 4: New feature - Added Merge to LINQ to JSON New feature - Added JValue.CreateNull and JValue.CreateUndefined New feature - Added Windows Phone 8.1 support to .NET 4.0 portable assembly New feature - Added OverrideCreator to JsonObjectContract New feature - Added support for overriding the creation of interfaces and abstract types New feature - Added support for reading UUID BSON binary values as a Guid New feature - Added MetadataPropertyHandling.Ignore New feature - Improv...Aitso-a platform for spatial optimization and based on artificial immune systems: Aitso_0.14.08.01: Aitso0.14.08.01Installer.zipVidCoder: 1.5.24 Beta: Added NL-Means denoiser. Updated HandBrake core to SVN 6254. Added extra error handling to DVD player code to avoid a crash when the player was moved.EasyMR: v1.0: 1 ????????? 2 ????????? 3 ??????? ????????????????????,???????????????,???????????PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.1.5: *Added Send-Keys function to send a sequence of keys to an application window (Thanks to mmashwani) *Added 3 optimization/stability improvements to Execute-Process following MS best practice (Thanks to mmashwani) *Fixed issue where Execute-MSI did not use value from XML file for uninstall but instead ran all uninstalls silently by default *Fixed error on 1641 exit code (should be a success like 3010) *Fixed issue with error handling in Invoke-SCCMTask *Fixed issue with deferral dates where th...Facebook Graph Toolkit: Facebook Graph Toolkit 5.0.493: updated to Graph Api v2.0 updated class definitions according to latest documentation fixed a potential memory leak causing socket exhaustion LINQ to FQL simplified usage performance improvement added missing constructorsCrashReporter.NET : Exception reporting library for C# and VB.NET: CrashReporter.NET 1.4: Added French, Italian, German, Russian and Spanish translation. Added DeveloperMessage field so developers can send value of variables or other details they want along with crash report. Fixed bug where subject of crash report email was affected by translation. Fixed bug where extension is not added in SaveFileDialog while saving crash report.AutoUpdater.NET : Auto update library for VB.NET and C# Developer: AutoUpdater.NET 1.3: Fixed problem in DownloadUpdateDialog where download continues even if you close the dialog. 
Added support for new url field for 64 bit application setup. AutoUpdater.NET will decide which download url to use by looking at the value of IntPtr.Size. Added German translation provided by Rene Kannegiesser. Now developer can handle update logic herself using event suggested by ricorx7. Added italian translation provided by Gianluca Mariani. Fixed bug that prevents Application from exiti...Windows Embedded Board Support Package for BeagleBone: WEC2013 Beaglebone Black 03.02.00: Demo image. Runs on BeagleBone White and Black. On BeagleBone Black works with uSD or eMMC based images. XAML support added. SDK and demo tests included. New SDK refresh (uninstall old version first) IoT M2MQTT client built in. Built and maintained in the U.S.A.!Essence#: Philemon-3 (Alpha Build 19): The Philemon-3 release corrects a problem with the script of the NSIS installation program generator used to build the installer for the previous release. That issue could have been addressed by simply fixing the script and then regenerating the installation program, but a) it was judged to be less work to create a new release based on the current development code, and b) a new release does a better job of communicating the fact that the installation program for the previous release is broken...SEToolbox: SEToolbox 01.041.012 Release 1: Added voxel material textures to read in with mods. Fixed missing texture replacements for mods. Fixed rounding issue in raytrace code. Fixed repair issue with corrupt checkpoint file. Fixed issue with updated SE binaries 01.041.012 using new container configuration.Continuous Affect Rating and Media Annotation: CARMA v6.02: Compiled as v6.02 for MCR 8.3 (2014a) 32-bit Windows version Ratings will now only be saved for full seconds (e.g., a 90.2s video will now yield 90 ratings instead of 91 ratings). This should be more transparent to users and ensure consistency in the number of ratings output by different computers. Separated the XLS and XLSX output formats in the save file drop-down box. This should be more transparent to users than having to add the appropriate extension.Magick.NET: Magick.NET 6.8.9.601: Magick.NET linked with ImageMagick 6.8.9.6 Breaking changes: - Changed arguments for the Map method of MagickImage. - QuantizeSettings uses Riemersma by default.Wallpapr: Wallpapr 1.7: Fixed the "Prohibited" message (Flickr upgraded their API to SSL-only).DbEntry.Net (Leafing Framework): DbEntry.Net 4.2: DbEntry.Net is a lightweight Object Relational Mapping (ORM) database access compnent for .Net 4.0+. It has clearly and easily programing interface for ORM and sql directly, and supoorted Access, Sql Server, MySql, SQLite, Firebird, PostgreSQL and Oracle. It also provide a Ruby On Rails style MVC framework. Asp.Net DataSource and a simple IoC. DbEntry.Net.v4.2.Setup.zip include the setup package. DbEntry.Net.v4.2.Src.zip include source files and unit tests. DbEntry.Net.v4.2.Samples.zip ...Azure Storage Explorer: Azure Storage Explorer 6 Preview 1: Welcome to Azure Storage Explorer 6 Preview 1 This is the first release of the latest Azure Storage Explorer, code-named Phoenix. What's New?Here are some important things to know about version 6: Open Source Now being run as a full open source project. Full source code on CodePlex. Collaboration encouraged! 
Updated Code Base Brand-new code base (WPF/C#/.NET 4.5) Visual Studio 2013 solution (previously VS2010) Uses the Task Parallel Library (TPL) for asynchronous background operat...Wsus Package Publisher: release v1.3.1407.29: Updated WPP to recognize the very latest console version. Some files was missing into the latest release of WPP which lead to crash when trying to make a custom update. Add a workaround to avoid clipboard modification when double-clicking on a label when creating a custom update. Add the ability to publish detectoids. (This feature is still in a BETA phase. Packages relying on these detectoids to determine which computers need to be updated, may apply to all computers).New ProjectsAPS - Folha de Pagamento: Folha de PagamentoCIS470 Metrics Tracking: A data driven application designed to display various project progress metrics and reports based off of data. CIS470 Metrics Tracking DeVry Capstone project.Excel Converter: A great tools to convert MS excel to PDF and vice verse. We plan support both Window and Web version.FreightSoft: A Web Based Shipping and Freight Management SystemGoogle .Net API: Dot Net warp for Google API Services. API Demo, documentation, simple useful tools. Welcome to join us make more google API works on Dot Net World. Google Driver downloader: A free tool to bulk download file from google driver to local box.Omniprogram: OmniprogramOnline OCR: A free to OCR via google APIOnline Resume Parsing Using Aspose.Words for .NET: Online Resume Parsing Using Aspose.Words for .NET. PDF Text Searcher: This application searches for a word from the PDF file name inside the PDF itself.PEIN Framework: A collection of day to day required libraries.Public Components: Public Components, by Danny VarodRole Based View for Microsoft Dynamic CRM 2011 & 2013: Allow admin to configure entity's view based on role.scmactivitytfs: Test project to play with Team Foundation Server.Selfpub.CrmSolution: Tram-pam-pamWindGE: WindGE????DirectX11??????3D???????,????????????????????,???WPF???????。yao's project: a project for personal study

    Read the article

  • Is this way of using Excel 2007 Pivot table for BI scalable ?

    - by Sim
    Hi all,
    Background:
    - We need to consolidate sales data across the country to do analysis.
    - Our Internet connection/IT expertise/IT investment is not very strong, so a full BI solution is out of the question.
    - I tried several SaaS BI solutions (GoodData, ZohoReports) and while they're good, they don't seem to fully support what we need.
    - We're looking at about 2 million records every 2 months.
    My current approach:
    - Our (10) sites currently gather data from all their branches and consolidate it into one Excel file with a pivot table and embedded source data.
    - At HQ, I will ask the 10 sites to send back those Excel files periodically.
    - We will import those Excel files into our MSSQL server.
    - There will be a master Excel file with the same pivot table (as in the site Excel files), whose data source is the MSSQL server.
    More details:
    - For testing, I currently use MSSQL 2008 Express on my laptop.
    - So far, I have imported our transactions for the past 2 months and there are 2 million+ rows in 1 table in MSSQL (we just use 1 table, corresponding to our common pivot table structure). DB size is ~600 MB.
    - The master Excel file, not including the source data, is just under 10 MB. Including the source data increases the size to 60 MB (so I suppose Office 2007 automatically compresses the data?).
    - I tried using the pivot table (drag-and-drop fields) and the performance so far is OK (my laptop specs: C2D T7200, 3 GB RAM, Windows XP).
    So my questions are:
    - If we're looking at a full year of transactions (roughly 15 million rows in MSSQL 2008 Express, 3.6 GB in size), is there any issue with 15 million rows in 1 table in SQL Express? Is there any performance issue with the pivot table at that point? Can it still embed the source data? (I googled but didn't find the maximum size of source data Excel 2007 can embed.)
    - Any other suggestions on how we can do this better? Given that we can't afford a full BI solution, is there any lightweight/budget/SaaS BI that you can recommend?
    Thanks

    Read the article

  • How to open the console in different browsers?

    - by Šime Vidas
    Chrome: Press Ctrl+Shift+I to open the Developer Tools, then click the "Open console" icon in the bottom left corner.
    IE9: Press F12 to open the developer tools, then open the Script tab and click the "Console" button on the right.
    Firefox 4: Press Ctrl+Shift+K to open the Web Console.
    What about Opera 11 and Safari 5?
    Clarification: by console I mean the JavaScript console that lets you input and execute JavaScript code.

    Read the article

  • Oracle Solaris Zones Physical to virtual (P2V)

    - by user939057
    Introduction
    This document describes the process of creating a Solaris 10 image from a physical system and migrating it into a virtualized operating system environment using the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability. Using an example and various scenarios, this paper describes how to take advantage of the Oracle Solaris 10 Zones P2V capability together with other Oracle Solaris features: optimizing performance with Solaris 10 resource management, advanced storage management with Solaris ZFS, and improved operating system visibility with Solaris DTrace. The most common use for this tool is the consolidation of existing systems onto virtualization-enabled platforms. In addition, the P2V capability can be used for other tasks, for example backing up a physical system and moving it into a virtualized operating system environment hosted on a Disaster Recovery (DR) site, or building an Oracle Solaris 10 image repository with various configurations and different software packages in order to reduce provisioning time.
    Oracle Solaris Zones
    Oracle Solaris Zones is a virtualization and partitioning technology supported on Oracle Sun servers powered by SPARC and Intel processors. This technology provides an isolated and secure environment for running applications. A zone is a virtualized operating system environment created within a single instance of the Solaris 10 Operating System. Each virtual system is called a zone and runs a unique and distinct copy of the Solaris 10 operating system.
    Oracle Solaris Zones Physical-to-Virtual (P2V)
    This is a new feature in Solaris 10 9/10. It provides the ability to build a Solaris 10 image from a physical system and migrate it into a virtualized operating system environment. There are three main steps when using this tool:
    1. Image creation on the source system; this image includes the operating system and, optionally, the software we want to include within the image.
    2. Preparing the target system by configuring a new zone that will host the new image.
    3. Image installation on the target system using the image we created in step 1.
    The host where the image is built is referred to as the source system, and the host where the image is installed is referred to as the target system.
    Benefits of Oracle Solaris Zones Physical-to-Virtual (P2V)
    Here are some benefits of this new feature:
    - Simple: an easy build process using Oracle Solaris 10 built-in commands.
    - Robust: based on Oracle Solaris Zones, a robust and well-known virtualization technology.
    - Flexible: supports migration from V-series servers to T-series or M-series systems. For the latest server information, refer to the Sun Servers web page.
    Prerequisites
    The target Oracle Solaris system should be running the latest patch cluster, and the minimum Solaris version on the target system should be Solaris 10 9/10. Refer to the latest Administration Guide for Oracle Solaris for a complete procedure on how to download and install Oracle Solaris. NOTE: If the source system used to build the image runs an older version than the target system, then during the process the operating system will be upgraded to Solaris 10 9/10 (update on attach).
    Creating the Image Used to Distribute the Software
    We will create an image on the source machine.
We can create the image on the local file system and then transfer it to the target machine, or build it into a NFS shared storage andmount the NFS file system from the target machine.Optional  before creating the image we need to complete the software installation that we want to include with the Solaris 10 image.An image is created by using the flarcreate command:Source # flarcreate -S -n s10-system -L cpio /var/tmp/solaris_10_up9.flarThe command does the following:  -S specifies that we skip the disk space check and do not write archive size data to the archive (faster).  -n specifies the image name.  -L specifies the archive format (i.e cpio). Optionally, we can add descriptions to the archive identification section, which can help to identify the archive later.Source # flarcreate -S -n s10-system -e "Oracle Solaris with Oracle DB10.2.0.4" -a "oracle" -L cpio /var/tmp/solaris_10_up9.flarYou can see example of the archive identification section in Appendix A: archive identification section.We can compress the flar image using the gzip command or adding the -c option to the flarcreate commandSource # gzip /var/tmp/solaris_10_up9.flarAn md5 checksum can be created for the image in order to ensure no data tamperingSource # digest -v -a md5 /var/tmp/solaris_10_up9.flar Moving the image into the target system.If we created the image on the local file system, we need to transfer the flar archive from the source machine to the target machine.Source # scp /var/tmp/solaris_10_up9.flar target:/var/tmpConfiguring the Zone on the target systemAfter copying the software to the target machine, we need to configure a new zone in order to host the new image on that zone.To install the new zone on the target machine, first we need to configure the zone (for the full zone creation options see the following link: http://docs.oracle.com/cd/E18752_01/html/817-1592/index.html  )ZFS integrationA flash archive can be created on a system that is running a UFS or a ZFS root file system.NOTE: If you create a Solaris Flash archive of a Solaris 10 system that has a ZFS root, then bydefault, the flar will actually be a ZFS send stream, which can be used to recreate the root pool.This image cannot be used to install a zone. You must create the flar with an explicit cpio or paxarchive when the system has a ZFS root.Use the flarcreate command with the -L archiver option, specifying cpio or pax as themethod to archive the files. 
(For example, see Step 1 in the previous section).Optionally, on the target system you can create the zone root folder on a ZFS file system inorder to benefit from the ZFS features (clones, snapshots, etc...).Target # zpool create zones c2t2d0 Create the zone root folder:Target # chmod 700 /zones Target # zonecfg -z solaris10-up9-zonesolaris10-up9-zone: No such zone configuredUse 'create' to begin configuring a new zone.zonecfg:solaris10-up9-zone> createzonecfg:solaris10-up9-zone> set zonepath=/zoneszonecfg:solaris10-up9-zone> set autoboot=truezonecfg:solaris10-up9-zone> add netzonecfg:solaris10-up9-zone:net> set address=192.168.0.1zonecfg:solaris10-up9-zone:net> set physical=nxge0zonecfg:solaris10-up9-zone:net> endzonecfg:solaris10-up9-zone> verifyzonecfg:solaris10-up9-zone> commitzonecfg:solaris10-up9-zone> exit Installing the Zone on the target system using the imageInstall the configured zone solaris10-up9-zone by using the zoneadm command with the install -a option and the path to the archive.The following example shows how to create an Image and sys-unconfig the zone.Target # zoneadm -z solaris10-up9-zone install -u -a/var/tmp/solaris_10_up9.flarLog File: /var/tmp/solaris10-up9-zone.install_log.AJaGveInstalling: This may take several minutes...The following example shows how we can preserve system identity.Target # zoneadm -z solaris10-up9-zone install -p -a /var/tmp/solaris_10_up9.flar Resource management Some applications are sensitive to the number of CPUs on the target Zone. You need tomatch the number of CPUs on the Zone using the zonecfg command:zonecfg:solaris10-up9-zone>add dedicated-cpuzonecfg:solaris10-up9-zone> set ncpus=16DTrace integrationSome applications might need to be analyzing using DTrace on the target zone, you canadd DTrace support on the zone using the zonecfg command:zonecfg:solaris10-up9-zone>setlimitpriv="default,dtrace_proc,dtrace_user" Exclusive IP stack An Oracle Solaris Container running in Oracle Solaris 10 can have a shared IP stack with the global zone, or it can have an exclusive IP stack (which was released in Oracle Solaris 10 8/07). An exclusive IP stack provides a complete, tunable, manageable and independent networking stack to each zone. A zone with an exclusive IP stack can configure Scalable TCP (STCP), IP routing, IP multipathing, or IPsec. For an example of how to configure an Oracle Solaris zone with an exclusive IP stack, see the following example zonecfg:solaris10-up9-zone set ip-type=exclusivezonecfg:solaris10-up9-zone> add netzonecfg:solaris10-up9-zone> set physical=nxge0 When the installation completes, use the zoneadm list -i -v options to list the installedzones and verify the status.Target # zoneadm list -i -vSee that the new Zone status is installedID NAME STATUS PATH BRAND IP0 global running / native shared- solaris10-up9-zone installed /zones native sharedNow boot the ZoneTarget # zoneadm -z solaris10-up9-zone bootWe need to login into the Zone order to complete the zone set up or insert a sysidcfg file beforebooting the zone for the first time see example for sysidcfg file in Appendix B: sysidcfg filesectionTarget # zlogin -C solaris10-up9-zoneTroubleshootingIf an installation fails, review the log file. On success, the log file is in /var/log inside the zone. Onfailure, the log file is in /var/tmp in the global zone.If a zone installation is interrupted or fails, the zone is left in the incomplete state. 
Use uninstall -F to reset the zone to the configured state.Target # zoneadm -z solaris10-up9-zone uninstall -FTarget # zonecfg -z solaris10-up9-zone delete -FConclusionOracle Solaris Zones P2V tool provides the flexibility to build pre-configuredimages with different software configuration for faster deployment and server consolidation.In this document, I demonstrated how to build and install images and to integrate the images with other Oracle Solaris features like ZFS and DTrace.Appendix A: archive identification sectionWe can use the head -n 20 /var/tmp/solaris_10_up9.flar command in order to access theidentification section that contains the detailed description.Target # head -n 20 /var/tmp/solaris_10_up9.flarFlAsH-aRcHiVe-2.0section_begin=identificationarchive_id=e4469ee97c3f30699d608b20a36011befiles_archived_method=cpiocreation_date=20100901160827creation_master=mdet5140-1content_name=s10-systemcreation_node=mdet5140-1creation_hardware_class=sun4vcreation_platform=SUNW,T5140creation_processor=sparccreation_release=5.10creation_os_name=SunOScreation_os_version=Generic_142909-16files_compressed_method=nonecontent_architectures=sun4vtype=FULLsection_end=identificationsection_begin=predeploymentbegin 755 predeployment.cpio.ZAppendix B: sysidcfg file sectionTarget # cat sysidcfgsystem_locale=Ctimezone=US/Pacificterminal=xtermssecurity_policy=NONEroot_password=HsABA7Dt/0sXXtimeserver=localhostname_service=NONEnetwork_interface=primary {hostname= solaris10-up9-zonenetmask=255.255.255.0protocol_ipv6=nodefault_route=192.168.0.1}name_service=NONEnfs4_domain=dynamicWe need to copy this file before booting the zoneTarget # cp sysidcfg /zones/solaris10-up9-zone/root/etc/
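    For quick reference, here is a condensed sketch of the command flow described above, using the same example names (s10-system, solaris10-up9-zone, nxge0); adapt the archive path, zone name and network settings to your environment:
        # on the source system: create, checksum and copy the flash archive
        flarcreate -S -n s10-system -L cpio /var/tmp/solaris_10_up9.flar
        digest -v -a md5 /var/tmp/solaris_10_up9.flar
        scp /var/tmp/solaris_10_up9.flar target:/var/tmp

        # on the target system: configure a zone to host the image
        zonecfg -z solaris10-up9-zone
            create
            set zonepath=/zones
            set autoboot=true
            add net
                set address=192.168.0.1
                set physical=nxge0
            end
            verify
            commit
            exit

        # install the zone from the archive, then boot and complete setup
        zoneadm -z solaris10-up9-zone install -u -a /var/tmp/solaris_10_up9.flar
        zoneadm -z solaris10-up9-zone boot
        zlogin -C solaris10-up9-zone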

    Read the article

  • How to create a deb package that installs a series of files

    - by fossfreedom
    I would like to create a brand new deb package that installs a series of files. If at all possible, I would like to untar the folder containing these files into a known location as part of the installation. Failing that, some knowledge of how to package the source folders and files would be very useful. The question is: is this possible and, if so, how? Let's give an example: ~/mypluginfolder/ contains the files x and y, plus a subfolder called abc containing another file called z. I want to tar this folder:
    tar -cvf myfiles.tar ~/mypluginfolder
    I presume my Debian package would look something like:
    myfiles.tar.gz
    myfiles+ppafoss_0.1-1/
        myfiles.tar
        DEBIAN: changelog, compat, control, install, rules, source
    Is it possible to somehow untar myfiles.tar into a known folder location, for example /usr/share/rhythmbox/plugins/, so that the final result would be:
    /usr/share/rhythmbox/plugins/mypluginfolder
    /usr/share/rhythmbox/plugins/mypluginfolder/x
    /usr/share/rhythmbox/plugins/mypluginfolder/y
    /usr/share/rhythmbox/plugins/mypluginfolder/abc/z
    Presuming Launchpad needs source, advice is sought as to where I should drop the source folders and files within the deb package structure. This will eventually become a series of individual Launchpad PPA packages. What I'd prefer (but may not be able to achieve) is to keep my packaging to a minimum: create a series of packages from a template and adjust only the bare minimum (changelog etc. plus the tar file and folder structure).
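    A minimal sketch of the packaging layout I have in mind, assuming the files ship unpacked in the source tree rather than as a tar inside the package (a dh-style debian/install file simply copies them into place):
        # debian/install -- copies the plugin files straight to the target location,
        # so no tar/untar step is needed at install time
        mypluginfolder/* usr/share/rhythmbox/plugins/mypluginfolder/

        # debian/rules -- minimal dh rules file (the recipe line must be indented with a tab)
        #!/usr/bin/make -f
        %:
        	dh $@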

    Read the article

  • Pay in the future should make you think in the present

    - by BuckWoody
    Distributed Computing - and more importantly “-as-a-Service” models of computing have a different cost model. This is something that sounds obvious on the surface but it’s often forgotten during the design and coding phase of a project. In on-premises computing, we’re used to purchasing a server and all of the hardware infrastructure and software licenses needed not only for one project, but several. This is an up-front or “sunk” cost that we consume by running code the organization needs to perform its function. Using a direct connection over wires you’ve already paid for, we don’t often have to think about bandwidth, hits on the data store or the amount of compute we use - we just know more is better. In a pay-as-you-go model, however, each of these architecture decisions has a potential cost impact. The amount of data you store, the number of times you access it, and the amount you send back all come with a charge. The offset is that you don’t buy anything at all up-front, so that sunk cost is freed up. And financial professionals know that money now is worth more than money later. Saving that up-front cost allows you to invest it in other things. It’s not just that you’re using things that now cost money - it’s that the design itself in distributed computing has a cost impact. That can be a really good thing, such as when you dynamically add capacity for paying customers. If you can tie back the cost of a series of clicks to what a user will pay to do so, you can set a profit margin that is easy to track. Here’s a case in point: Assume you are using a large instance in Windows Azure to compute some data that you retrieve from a SQL Azure database. If you don’t monitor the path of the application, you may not know what you are really using. Since you’re paying by the size of the instance, it’s best to maximize it all the time. Recently I evaluated just this situation, and found that downsizing the instance and adding another one where needed, adding a caching function to the application, moving part of the data into Windows Azure tables not only increased the speed of the application, but reduced the cost and more closely tied the cost to the profit. The key is this: from the very outset - the design - make sure you include metrics to measure for the cost/performance (sometimes these are the same) for your application. Windows Azure opens up awesome new ways of doing things, so make sure you study distributed systems architecture before you try and force in the application design you have on premises into your new application structure.

    Read the article

  • How can I assign actions to all my mouse buttons?

    - by torbengb
    I have a mouse with lots of buttons, but it's not a mainstream make like Logitech. For Windows, I have a driver that lets me assign actions like close-window (Ctrl+W) or next-tab (Ctrl+Tab), but I don't have a Linux driver. Since Linux is so flexible, I thought perhaps there is a general way to do this, regardless of brand. Update: Based on input from Cyrex, I installed btnx (sudo apt-get install btnx) and ran it; it found several, but not all, of the mouse buttons. Found: left, right, wheel, wheel click, thumb forward, thumb back. Not found: wheel left, wheel right, thumb middle button. Vendor ID is 0x04d9, Model ID is 0xa015.
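    A brand-independent route I am considering, sketched here under the assumption that the extra buttons show up as ordinary X button numbers (the numbers 8 and 9 below are typical for thumb buttons and need to be confirmed with xev on this mouse):
        # find the numeric id of each button (press them inside the xev window)
        xev | grep -i button

        # ~/.xbindkeysrc -- map extra buttons to key presses via xdotool
        "xdotool key ctrl+w"
          b:8
        "xdotool key ctrl+Tab"
          b:9

        # install the tools and start the daemon
        sudo apt-get install xbindkeys xdotool
        xbindkeys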

    Read the article

  • Can't set 1280x1024 with Nvidia Geforce 8400 GS

    - by torbengb
    I've just installed 10.04 LTS using the Windows installer. The system hangs during boot; the splash screen is frozen and it accepts neither Ctrl+Alt+F2 nor Ctrl+Alt+Del, only a hard reset. (I'm a linux noob.) When I edit the default Grub boot option to omit quiet splash, it gets to the point saying * setting sensors limit [OK] _ and there it stays. I can only get to the desktop using the Grub recovery boot option, of course with a lower resolution (800x600) but everything else seems to work fine. As I said, this is a new install. The only thing I've done is to use the Update Manager to get everything up to date, and activate the newest Nvidia driver using the "Hardware drivers" window. I had a similar problem when I installed 9.04 a year ago, and at that time posted this question with an answer that worked - this doesn't work with 10.04. Running nvidia-xconfig to create a new xorg.conf didn't fix it either (while in Recovery boot).
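    One workaround I could still try (not confirmed for this card, just a common experiment when boot hangs with the NVIDIA driver) is booting once with nomodeset and then re-activating the driver; this assumes the stock GRUB 2 setup on 10.04:
        # one-off test: at the GRUB menu press 'e', add "nomodeset" after "quiet splash", then boot with Ctrl+X
        # to make it persistent once it works:
        sudo nano /etc/default/grub
        #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
        sudo update-grub
        # then re-create xorg.conf for the NVIDIA driver and reboot
        sudo nvidia-xconfig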

    Read the article

  • Submitting software to a competition, it becomes their property?

    - by myrkos
    So I'm about to submit a game to a competition, but as I looked through the rules a chunk grabbed my attention: All Entries become the sole and exclusive property of Sponsor and will not be acknowledged or returned. Sponsor shall own all right, title and interest in and to each Entry, including without limitation all results and proceeds thereof and all elements or constituent parts of Entry (including without limitation the Mobile App, the Design Documents, the Video Trailer, the Playable and all illustrations, logos, mechanicals, renderings, characters, graphics, designs, layouts or other material therein) and all copyrights and renewals and extensions of copyrights therein and thereto. Without limitation of the foregoing, each Eligible Entrant shall and hereby does absolutely and irrevocably assign and transfer all of his or her right, title and interest in his or her Entry to Sponsor, and Sponsor shall have the right and may authorize others to use, copy, sublicense, transmit, modify, manipulate, publish, delete, reproduce, perform, distribute, display and otherwise exploit the Entry (and to create and exploit derivative works thereof) in any manner, including without limitation to embody the Entry, in whole or in part, in apps and other works of any kind or nature created, developed, published or distributed by Sponsor and to and register as a trademark in any country in Sponsor’s name any component of the Entry, without such Eligible Entrant reserving any rights or claims with respect thereto. Sponsor shall have the exclusive right, in perpetuity, throughout the Territory to change, adapt, modify, use, combine with other material and otherwise exploit the Entry in all media now known or hereafter devised and in any manner, in its sole and absolute discretion, without the need for any payment or credit to Entrant. So the game will become the sponsor's property; however, they don't ask for source code. So will I still own the rights to the source code, whatever that means? And if it doesn't win said competition, will I be able to publish it myself without their trademarks? I am very new to software legality stuff, so I would appreciate any clarification. Since there's a possibility I won't even own the source, is it possible to make the game core engine open source software with a not-very-restrictive license and include that in the project, so I at least still own the game engine? Or does it not work that way?

    Read the article

  • How can I restart a service designed to run under supervise

    - by Terrence Brannon
    I have written a service designed to run under daemontools supervise. The run script refreshes a source code repository and then serves the docs:
    git pull
    pod_server   # serve up docs on source code via the web
    Because of that, I want the script to restart every 5 minutes. The man page for svc says that it applies all options to the service, so I thought this would work in cron:
    */5 * * * * svc -du etc/pod_server
    but the source code repo does not seem to be refreshed when new pushes arrive.
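    For reference, a minimal sketch of the setup I have in mind; the service directory /etc/service/pod_server and the checkout path are assumptions (cron jobs run from the caller's home directory, so an absolute path to the supervised directory is the safer form), and svc -t (send TERM so supervise restarts the service) is an alternative to -du:
        # /etc/service/pod_server/run  (path assumed; adjust to the real service dir)
        #!/bin/sh
        cd /srv/pod_server            # hypothetical checkout location
        git pull                      # refresh the repository on every (re)start
        exec pod_server               # serve the docs; exec keeps supervise attached

        # crontab entry: restart the service every 5 minutes using an absolute path
        */5 * * * * /usr/bin/svc -t /etc/service/pod_server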

    Read the article

  • Taming Hopping Windows

    - by Roman Schindlauer
    At first glance, hopping windows seem fairly innocuous and obvious. They organize events into windows with a simple periodic definition: the windows have some duration d (e.g. a window covers 5 second time intervals), an interval or period p (e.g. a new window starts every 2 seconds) and an alignment a (e.g. one of those windows starts at 12:00 PM on March 15, 2012 UTC). var wins = xs     .HoppingWindow(TimeSpan.FromSeconds(5),                    TimeSpan.FromSeconds(2),                    new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc)); Logically, there is a window with start time a + np and end time a + np + d for every integer n. That’s a lot of windows. So why doesn’t the following query (always) blow up? var query = wins.Select(win => win.Count()); A few users have asked why StreamInsight doesn’t produce output for empty windows. Primarily it’s because there is an infinite number of empty windows! (Actually, StreamInsight uses DateTimeOffset.MaxValue to approximate “the end of time” and DateTimeOffset.MinValue to approximate “the beginning of time”, so the number of windows is lower in practice.) That was the good news. Now the bad news. Events also have duration. Consider the following simple input: var xs = this.Application                 .DefineEnumerable(() => new[]                     { EdgeEvent.CreateStart(DateTimeOffset.UtcNow, 0) })                 .ToStreamable(AdvanceTimeSettings.IncreasingStartTime); Because the event has no explicit end edge, it lasts until the end of time. So there are lots of non-empty windows if we apply a hopping window to that single event! For this reason, we need to be careful with hopping window queries in StreamInsight. Or we can switch to a custom implementation of hopping windows that doesn’t suffer from this shortcoming. The alternate window implementation produces output only when the input changes. We start by breaking up the timeline into non-overlapping intervals assigned to each window. In figure 1, six hopping windows (“Windows”) are assigned to six intervals (“Assignments”) in the timeline. Next we take input events (“Events”) and alter their lifetimes (“Altered Events”) so that they cover the intervals of the windows they intersect. In figure 1, you can see that the first event e1 intersects windows w1 and w2 so it is adjusted to cover assignments a1 and a2. Finally, we can use snapshot windows (“Snapshots”) to produce output for the hopping windows. Notice however that instead of having six windows generating output, we have only four. The first and second snapshots correspond to the first and second hopping windows. The remaining snapshots however cover two hopping windows each! While in this example we saved only two events, the savings can be more significant when the ratio of event duration to window duration is higher. Figure 1: Timeline The implementation of this strategy is straightforward. We need to set the start times of events to the start time of the interval assigned to the earliest window including the start time. Similarly, we need to modify the end times of events to the end time of the interval assigned to the latest window including the end time. The following snap-to-boundary function that rounds a timestamp value t down to the nearest value t' <= t such that t' is a + np for some integer n will be useful. For convenience, we will represent both DateTime and TimeSpan values using long ticks: static long SnapToBoundary(long t, long a, long p) {     return t - ((t - a) % p) - (t > a ? 
0L : p); } How do we find the earliest window including the start time for an event? It’s the window following the last window that does not include the start time assuming that there are no gaps in the windows (i.e. duration < interval), and limitation of this solution. To find the end time of that antecedent window, we need to know the alignment of window ends: long e = a + (d % p); Using the window end alignment, we are finally ready to describe the start time selector: static long AdjustStartTime(long t, long e, long p) {     return SnapToBoundary(t, e, p) + p; } To find the latest window including the end time for an event, we look for the last window start time (non-inclusive): public static long AdjustEndTime(long t, long a, long d, long p) {     return SnapToBoundary(t - 1, a, p) + p + d; } Bringing it together, we can define the translation from events to ‘altered events’ as in Figure 1: public static IQStreamable<T> SnapToWindowIntervals<T>(IQStreamable<T> source, TimeSpan duration, TimeSpan interval, DateTime alignment) {     if (source == null) throw new ArgumentNullException("source");     // reason about DateTime and TimeSpan in ticks     long d = Math.Min(DateTime.MaxValue.Ticks, duration.Ticks);     long p = Math.Min(DateTime.MaxValue.Ticks, Math.Abs(interval.Ticks));     // set alignment to earliest possible window     var a = alignment.ToUniversalTime().Ticks % p;     // verify constraints of this solution     if (d <= 0L) { throw new ArgumentOutOfRangeException("duration"); }     if (p == 0L || p > d) { throw new ArgumentOutOfRangeException("interval"); }     // find the alignment of window ends     long e = a + (d % p);     return source.AlterEventLifetime(         evt => ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p)),         evt => ToDateTime(AdjustEndTime(evt.EndTime.ToUniversalTime().Ticks, a, d, p)) -             ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p))); } public static DateTime ToDateTime(long ticks) {     // just snap to min or max value rather than under/overflowing     return ticks < DateTime.MinValue.Ticks         ? new DateTime(DateTime.MinValue.Ticks, DateTimeKind.Utc)         : ticks > DateTime.MaxValue.Ticks         ? new DateTime(DateTime.MaxValue.Ticks, DateTimeKind.Utc)         : new DateTime(ticks, DateTimeKind.Utc); } Finally, we can describe our custom hopping window operator: public static IQWindowedStreamable<T> HoppingWindow2<T>(     IQStreamable<T> source,     TimeSpan duration,     TimeSpan interval,     DateTime alignment) {     if (source == null) { throw new ArgumentNullException("source"); }     return SnapToWindowIntervals(source, duration, interval, alignment).SnapshotWindow(); } By switching from HoppingWindow to HoppingWindow2 in the following example, the query returns quickly rather than gobbling resources and ultimately failing! 
public void Main() {     var start = new DateTimeOffset(new DateTime(2012, 6, 28), TimeSpan.Zero);     var duration = TimeSpan.FromSeconds(5);     var interval = TimeSpan.FromSeconds(2);     var alignment = new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc);     var events = this.Application.DefineEnumerable(() => new[]     {         EdgeEvent.CreateStart(start.AddSeconds(0), "e0"),         EdgeEvent.CreateStart(start.AddSeconds(1), "e1"),         EdgeEvent.CreateEnd(start.AddSeconds(1), start.AddSeconds(2), "e1"),         EdgeEvent.CreateStart(start.AddSeconds(3), "e2"),         EdgeEvent.CreateStart(start.AddSeconds(9), "e3"),         EdgeEvent.CreateEnd(start.AddSeconds(3), start.AddSeconds(10), "e2"),         EdgeEvent.CreateEnd(start.AddSeconds(9), start.AddSeconds(10), "e3"),     }).ToStreamable(AdvanceTimeSettings.IncreasingStartTime);     var adjustedEvents = SnapToWindowIntervals(events, duration, interval, alignment);     var query = from win in HoppingWindow2(events, duration, interval, alignment)                 select win.Count();     DisplayResults(adjustedEvents, "Adjusted Events");     DisplayResults(query, "Query"); } As you can see, instead of producing a massive number of windows for the open start edge e0, a single window is emitted from 12:00:15 AM until the end of time: Adjusted Events StartTime EndTime Payload 6/28/2012 12:00:01 AM 12/31/9999 11:59:59 PM e0 6/28/2012 12:00:03 AM 6/28/2012 12:00:07 AM e1 6/28/2012 12:00:05 AM 6/28/2012 12:00:15 AM e2 6/28/2012 12:00:11 AM 6/28/2012 12:00:15 AM e3 Query StartTime EndTime Payload 6/28/2012 12:00:01 AM 6/28/2012 12:00:03 AM 1 6/28/2012 12:00:03 AM 6/28/2012 12:00:05 AM 2 6/28/2012 12:00:05 AM 6/28/2012 12:00:07 AM 3 6/28/2012 12:00:07 AM 6/28/2012 12:00:11 AM 2 6/28/2012 12:00:11 AM 6/28/2012 12:00:15 AM 3 6/28/2012 12:00:15 AM 12/31/9999 11:59:59 PM 1 Regards, The StreamInsight Team

    Read the article

  • Automated testing tool development challenges (for embedded software)

    - by Karthi prime
    My boss wants to put together a proposal for the following tool:
    - An IDE able to build, compile and debug, with JTAG programming for the microcontroller.
    - A test suite that reads the code in the IDE, auto-generates the test cases, and reports in-target unit testing results (obtained by controlling code execution on the microcontroller via the IDE).
    - A no-overhead code coverage tool which interacts with the test suite and the IDE.
    My job is to produce the high-level architecture of this tool so we can proceed further. My current understanding:
    - There are toolchains available from the chip manufacturers which can be combined with an open-source IDE like Eclipse and an open-source burner to build a complete IDE for a microcontroller.
    - Test cases can be auto-generated by parsing the source files and scripting around keywords.
    - The test suite must be able to command the IDE to control execution through breakpoints and read the register contents from the microcontroller; this enables in-target unit testing.
    - No-overhead code coverage should be achieved through no-overhead code instrumentation, so that it can run in the resource-constrained environment of the microcontroller.
    I have the following questions: Is my understanding valid? What challenges will I face during development? What helpful open-source tools exist for this? What is the development time for this software? Thanks

    Read the article

  • WPF DataGrid using a DataGridTemplateColumn rather than a DataGridComboBoxColumn to show selected value at load

    - by T
    My problem was that, using a DataGridComboBoxColumn, I couldn't get it to show the selected value when the DataGrid loaded. Instead, the user had to click in the cells and then, like magic, the currently selected values would appear and it finally looked the way I wanted it to look on load. Here is what I had:
    <DataGridComboBoxColumn MinWidth="150" x:Name="crewColumn" Header="Crew"
        ItemsSource="{Binding JobEdit.Crews, Source={StaticResource Locator}}"
        DisplayMemberPath="Name"
        SelectedItemBinding="{Binding JobEdit.SelectedCrew, Mode=TwoWay, Source={StaticResource Locator}}" />
    Here is what I changed it to. This works great: it displays the selected item when the DataGrid loads and shows the combo box when the user goes into edit mode.
    <DataGridTemplateColumn Header="Crew" MinWidth="150">
        <DataGridTemplateColumn.CellTemplate>
            <DataTemplate>
                <TextBlock Text="{Binding Crew.Name}" />
            </DataTemplate>
        </DataGridTemplateColumn.CellTemplate>
        <DataGridTemplateColumn.CellEditingTemplate>
            <DataTemplate>
                <ComboBox ItemsSource="{Binding JobEdit.Crews, Source={StaticResource Locator}}"
                          DisplayMemberPath="Name"
                          SelectedItem="{Binding JobEdit.SelectedCrew, Mode=TwoWay, Source={StaticResource Locator}}" />
            </DataTemplate>
        </DataGridTemplateColumn.CellEditingTemplate>
    </DataGridTemplateColumn>

    Read the article

  • iptables rule(s) to send openvpn traffic from clients over an sshuttle tunnel?

    - by Sam Martin
    I have an Ubuntu 12.04 box with OpenVPN. The VPN is working as expected -- clients can connect, browse the Web, etc. The OpenVPN server IP is 10.8.0.1 on tun0. On that same box, I can use sshuttle to tunnel into another network to access a Web server on 10.10.0.9. sshuttle does its magic using the following iptables commands: iptables -t nat -N sshuttle-12300 iptables -t nat -F sshuttle-12300 iptables -t nat -I OUTPUT 1 -j sshuttle-12300 iptables -t nat -I PREROUTING 1 -j sshuttle-12300 iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.10.0.0/24 -p tcp --to-ports 12300 -m ttl ! --ttl 42 iptables -t nat -A sshuttle-12300 -j RETURN --dest 127.0.0.0/8 -p tcp Is it possible to forward traffic from OpenVPN clients over the sshuttle tunnel to the remote Web server? I'd ultimately like to be able to set up any complicated tunneling on the server, and have relatively "dumb" clients (iPad, etc.) be able to access the remote servers via OpenVPN. Below is a basic diagram of the scenario: [Edit: added output from the OpenVPN box] $ sudo iptables -nL -v -t nat Chain PREROUTING (policy ACCEPT 1498 packets, 252K bytes) pkts bytes target prot opt in out source destination 1512 253K sshuttle-12300 all -- * * 0.0.0.0/0 0.0.0.0/0 Chain INPUT (policy ACCEPT 322 packets, 58984 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 584 packets, 43241 bytes) pkts bytes target prot opt in out source destination 587 43421 sshuttle-12300 all -- * * 0.0.0.0/0 0.0.0.0/0 Chain POSTROUTING (policy ACCEPT 589 packets, 43595 bytes) pkts bytes target prot opt in out source destination 1175 76298 MASQUERADE all -- * eth0 10.8.0.0/24 0.0.0.0/0 Chain sshuttle-12300 (2 references) pkts bytes target prot opt in out source destination 17 1076 REDIRECT tcp -- * * 0.0.0.0/0 10.10.0.0/24 TTL match TTL != 42 redir ports 12300 0 0 RETURN tcp -- * * 0.0.0.0/0 127.0.0.0/8 $ sudo iptables -nL -v -t filter Chain INPUT (policy ACCEPT 97493 packets, 30M bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 131K 109M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 1370 89160 ACCEPT all -- * * 10.8.0.0/24 0.0.0.0/0 0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable [Edit 2: more OpenVPN server output] $ netstat -r Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface default 192.168.1.1 0.0.0.0 UG 0 0 0 eth0 10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 10.8.0.2 * 255.255.255.255 UH 0 0 0 tun0 192.168.1.0 * 255.255.255.0 U 0 0 0 eth0 [Edit 3: still more debug output] IP forwarding appears to be enabled correctly on the OpenVPN server: # find /proc/sys/net/ipv4/conf/ -name forwarding -ls -execdir cat {} \; 18926 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/all/forwarding 1 18954 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/default/forwarding 1 18978 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/eth0/forwarding 1 19003 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/lo/forwarding 1 19028 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/tun0/forwarding 1 Client routing table: $ netstat -r Routing tables Internet: Destination Gateway Flags Refs Use Netif Expire 0/1 10.8.0.5 UGSc 8 48 tun0 default 192.168.1.1 UGSc 2 1652 en1 10.8.0.1/32 10.8.0.5 UGSc 1 0 tun0 10.8.0.5 10.8.0.6 UHr 13 0 tun0 10.10.0/24 10.8.0.5 UGSc 0 0 tun0 <snip> Traceroute from 
client: $ traceroute 10.10.0.9 traceroute to 10.10.0.9 (10.10.0.9), 64 hops max, 52 byte packets 1 10.8.0.1 (10.8.0.1) 5.403 ms 1.173 ms 1.086 ms 2 192.168.1.1 (192.168.1.1) 4.693 ms 2.110 ms 1.990 ms 3 l100.my-verizon-garbage (client-ext-ip) 7.453 ms 7.089 ms 6.248 ms 4 * * * 5 10.10.0.9 (10.10.0.9) 14.915 ms !N * 6.620 ms !N

    Read the article

  • How to configure extra buttons in Logitech Mouse

    - by Rick
    Can anyone tell me how to configure all the buttons on a Logitech MX 620 mouse (http://www.logitech.com/en-us/support/mice/2987) under Ubuntu 12.04? Specifically, I'd like to make one of them act as just the Ctrl key (for Ctrl-clicking web pages) and another one send Ctrl+W to close tabs. I also normally make the scroll wheel page down for each click (otherwise it hurts my arms to be scrolling so much), and I map tilting the wheel to the left to "page back" and tilting it to the right to "page forward". I've searched for other answers to this and found something related here: http://ubuntuforums.org/showthread.php?t=1789807. But when I posted a follow-up to solve the issue, no one responded; perhaps I made the mistake of posting to a question that had already been marked "solved." I'm not sure how I'm supposed to reopen a question that is pertinent to mine but doesn't quite solve it. Thank you for any help.
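    One driver-independent idea I am looking at is imwheel, which remaps wheel, tilt and thumb events to key presses. A minimal sketch only, assuming the tilt directions arrive as buttons 6/7 and the thumb button as button 8 (worth confirming with xev first):
        sudo apt-get install imwheel

        # ~/.imwheelrc -- apply to all windows:
        # wheel up/down page, tilt left/right act as browser back/forward,
        # first thumb button sends Ctrl+W
        ".*"
        None, Up,     Page_Up
        None, Down,   Page_Down
        None, Left,   Alt_L|Left
        None, Right,  Alt_L|Right
        None, Thumb1, Control_L|w

        # grab the wheel, tilt and thumb buttons, then (re)start imwheel
        imwheel -k -b "4 5 6 7 8 9"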

    Read the article

  • Is Ubuntu MAAS free? Will it remain like that?

    - by Bruno Pereira
    Ubuntu MAAS is very cool, awesome in fact; it looks like a unique tool for several jobs. It looks free, but its documentation already opens with clauses that would scare anyone interested in it:
    - The documentation is copyrighted by Canonical;
    - The documentation must be used only for non-commercial purposes;
    - If the documentation is distributed under the non-commercial clause, you must retain the copyright notice.
    That seems like a lot for a guide on how to install MAAS + Juju + OpenStack, and it scares me a bit. Under what license is Ubuntu MAAS distributed, and what would be the reasoning for copyrighting a guide like that so heavily? Is Ubuntu MAAS free? Will it remain that way?

    Read the article

  • Problems installing 13.10 inside VMWare Player 4

    - by Thomas S.
    In the past I had no problems installing Ubuntu 12.04 inside VMware Player 4.0.4 on my Windows 7 Pro machine. Now I have installed Ubuntu 13.10 and have a couple of problems:
    - The default screen size (even during the installation) is gigantic.
    - I've told Ubuntu to log in automatically; after a reboot the gigantic screen appears without showing anything, mouse clicks do nothing, and Ctrl+Alt+Backspace does not work.
    - Switching to a console using Ctrl+Alt+F1...F6 works, but is incredibly slow.
    - Invoking 'sudo reboot now' does not succeed; it prints
      Killing all remining processes... [fail]
      Restoring resolver state... [ OK ]
      Will now switch to single-user mode
      root@ubuntu1310:~#
      Invoking 'shutdown now' then shows the same error.
    What do I need to do to get Ubuntu 13.10 up and running in VMware Player?

    Read the article

  • How to get feedback from the community on large chunks of code?

    - by MainMa
    Code Review.SE is great when you need feedback on a precise, short piece of code. But where can you get similar feedback on the code itself when:
    - you have thousands of LOC,
    - you don't have colleagues in your workplace ready or willing to review the code¹,
    - you don't have thousands of dollars to spend on a professional review by a third-party developer?²
    Places like CodePlex are a good way to get your project known³, but from what I've seen, the feedback you get on known projects is consumer feedback, i.e. it concerns bugs and feature requests, not the quality of the source code itself. What are the social ways to get the community involved in reviewing a codebase of a certain size for an open source project that doesn't have the scale of Firefox or similar products?
    ¹ Which is the case for most personal and open source projects, or projects done in companies where the practice of regular and complete code review is nonexistent.
    ² Which is, again, the case for most personal and open source projects.
    ³ Even if many projects published on CodePlex never get known, either because nobody cares or because they are not presented very well.

    Read the article

  • How to forward OpenVPN Port to NAT'd XEN domU

    - by John
    I want to install an OpenVPN domU on Xen. Dom0 and the domUs are running Debian Squeeze; all domUs are on a NAT'd private network 10.0.0.1/24. My VPN gateway is on 10.0.0.1 and running. How can I make it accessible on the dom0 public IP? I tried forwarding the port using iptables, but without any success. Here is what I did:
    ~ # iptables -L -n -v
    Chain INPUT (policy ACCEPT 1397 packets, 118K bytes)
    pkts bytes target prot opt in out source destination
    Chain FORWARD (policy ACCEPT 930 packets, 133K bytes)
    pkts bytes target prot opt in out source destination
    0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED PHYSDEV match --physdev-out vif5.0
    0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-in vif5.0 udp spt:68 dpt:67
    0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED PHYSDEV match --physdev-out vif5.0
    0 0 ACCEPT all -- * * 10.0.0.1 0.0.0.0/0 PHYSDEV match --physdev-in vif5.0
    0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED PHYSDEV match --physdev-out vif3.0
    0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-in vif3.0 udp spt:68 dpt:67
    0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED PHYSDEV match --physdev-out vif3.0
    0 0 ACCEPT all -- * * 10.0.0.5 0.0.0.0/0 PHYSDEV match --physdev-in vif3.0
    0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED PHYSDEV match --physdev-out vif2.0
    0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-in vif2.0 udp spt:68 dpt:67
    0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED PHYSDEV match --physdev-out vif2.0
    0 0 ACCEPT all -- * * 10.0.0.2 0.0.0.0/0 PHYSDEV match --physdev-in vif2.0
    147 8236 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80
    13 546 ACCEPT udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:1194
    Chain OUTPUT (policy ACCEPT 1000 packets, 99240 bytes)
    pkts bytes target prot opt in out source destination
    ~ # iptables -L -t nat -n -v
    Chain PREROUTING (policy ACCEPT 324 packets, 23925 bytes)
    pkts bytes target prot opt in out source destination
    139 7824 DNAT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:10.0.0.5:80
    1 42 DNAT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:1194 to:10.0.0.1:1194
    Chain POSTROUTING (policy ACCEPT 92 packets, 5030 bytes)
    pkts bytes target prot opt in out source destination
    863 64983 MASQUERADE all -- * eth0 0.0.0.0/0 0.0.0.0/0
    0 0 MASQUERADE all -- * eth0 0.0.0.0/0 0.0.0.0/0
    0 0 MASQUERADE all -- * eth0 0.0.0.0/0 0.0.0.0/0
    Chain OUTPUT (policy ACCEPT 180 packets, 13953 bytes)
    pkts bytes target prot opt in out source destination
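    For comparison, this is the generic recipe I am trying to follow (a sketch, not a diagnosis of the output above; eth0 and 10.0.0.1 are taken from my setup): enable forwarding, DNAT the public port to the domU, and allow the forwarded traffic and its replies:
        # enable routing between eth0 and the NAT'd network
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # rewrite incoming OpenVPN traffic on the public interface to the domU
        iptables -t nat -A PREROUTING -i eth0 -p udp --dport 1194 -j DNAT --to-destination 10.0.0.1:1194

        # make sure the forwarded packets (and replies) are allowed through
        iptables -A FORWARD -p udp -d 10.0.0.1 --dport 1194 -j ACCEPT
        iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT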

    Read the article

  • Amazon EC2 - Unable to connect to MySQL

    - by alexus
    I'm having an issue connecting from one VM to another:
    # nmap -p3306 ip-XX-XX-XX-XX.ec2.internal
    Starting Nmap 6.40 ( http://nmap.org ) at 2014-06-10 17:50 EDT
    Nmap scan report for ip-XX-XX-XX-XX.ec2.internal (XX.XX.XX.XX)
    Host is up (0.000033s latency).
    PORT STATE SERVICE
    3306/tcp closed mysql
    Nmap done: 1 IP address (1 host up) scanned in 1.05 seconds
    #
    In my Security Group I allowed inbound connectivity for TCP, port range 3306, source 0.0.0.0/0, so theoretically it should work, but in reality it doesn't. I'm running Red Hat Enterprise Linux 7 on both VMs. mariadb.service is running fine on the other VM and I am able to connect to it locally. On the DB host:
    # netstat -anp | grep 3306
    tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 2324/mysqld
    # iptables -L
    Chain INPUT (policy ACCEPT)
    target prot opt source destination
    Chain FORWARD (policy ACCEPT)
    target prot opt source destination
    Chain OUTPUT (policy ACCEPT)
    target prot opt source destination
    #
    Any ideas what else I missed?
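    This is the checklist I have been working through so far; it is only a sketch, the config paths are assumptions for a default RHEL 7 / MariaDB install, and on EC2 the security group has to be the one attached to the target instance ("someuser" below is just a placeholder):
        # on the DB VM: confirm the listener and that no bind-address/skip-networking restricts it
        ss -ltn | grep 3306
        grep -R "bind-address\|skip-networking" /etc/my.cnf /etc/my.cnf.d/ 2>/dev/null

        # RHEL 7 ships firewalld; if it is running, open the port explicitly
        firewall-cmd --state
        firewall-cmd --permanent --add-port=3306/tcp && firewall-cmd --reload

        # from the client VM: retry against the internal name
        mysql -h ip-XX-XX-XX-XX.ec2.internal -u someuser -p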

    Read the article

  • Encrypt shared files on AD Domain.

    - by Walter
    Can I encrypt shared files on a Windows server and allow only authenticated domain users to access those files? The scenario is as follows: I have a software development company, and I would like to protect my source code from being copied by my programmers. One problem is that some programmers use their own laptops to develop the company's software. In this scenario it's impossible to prevent developers from copying the source code to their laptops. So I thought about the following solution, but I don't know if it's possible to implement. The idea is to encrypt the source code so that it is accessible (decrypted) only when developers are logged into the AD domain, i.e. if they are not logged into the AD domain, the source code stays encrypted and is useless. Can this be implemented? What technology should be used?

    Read the article

  • How do you run XBMC on nvidia dual screen and stop it from taking over the keyboard and mouse?

    - by Paul Swartout
    I have set up dual screen under Ubuntu 12.04. I have a GeForce 8500 GT and have used the nVidia control panel to set up dual screen in "Separate screen mode". Here's the resulting xorg.conf # nvidia-settings: X configuration file generated by nvidia-settings # nvidia-settings: version 295.33 (buildd@zirconium) Fri Mar 30 13:38:49 UTC 2012 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 0 0 Screen 1 "Screen1" RightOf "Screen0" InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" Option "Xinerama" "0" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" # HorizSync source: edid, VertRefresh source: edid Identifier "Monitor0" VendorName "Unknown" ModelName "Maxdata/Belinea B1925S1W" HorizSync 31.0 - 83.0 VertRefresh 56.0 - 75.0 Option "DPMS" EndSection Section "Monitor" # HorizSync source: builtin, VertRefresh source: builtin Identifier "Monitor1" VendorName "Unknown" ModelName "CRT-1" HorizSync 28.0 - 55.0 VertRefresh 43.0 - 72.0 Option "DPMS" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce 8500 GT" BusID "PCI:1:0:0" Screen 0 EndSection Section "Device" Identifier "Device1" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce 8500 GT" BusID "PCI:1:0:0" Screen 1 EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "TwinView" "0" Option "metamodes" "CRT-0: nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" # Removed Option "metamodes" "CRT-1: 1280x768 +0+0" Identifier "Screen1" Device "Device1" Monitor "Monitor1" DefaultDepth 24 Option "TwinView" "0" Option "metamodes" "CRT-1: 1360x768_60 +0+0" SubSection "Display" Depth 24 EndSubSection EndSection All well and good and I have a nice blank XWindow displayed on my TV (the 2nd monitor). I then fire up XBMC from a terminal on the PC monitor using DISPLAY=:0.1 xbmc XBMC fires up quite nicely on the TV however I can no longer use the main PC monitor / mouse / keyboard as XBMC on the TV screen seems to have focus. I was hoping to have XBMC running on the TV and let the kids use the MCE remote whilst I get on with my work on the PC monitor. Does anyone have any idea how to overcome this? I'm presuming there's some xorg.conf fun and games needed but I've no idea where to start to be honest.

    Read the article
