Search Results

Search found 18460 results on 739 pages for 'terminal services'.


  • How do I change the default for the "df" command in the Unix terminal from kilobytes to megabytes?

    - by user1656014
    I can't really show the code in Unix, but I can explain it very clearly. In the terminal, when you type "df", you get information on the free disk space, all in kilobyte units; kilobytes are currently the default in Unix. My problem is changing the default from kilobytes to megabytes. After changing the default, I should be able to type "df" and have all the free disk space reported in megabytes.
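
    A minimal sketch of the common approach, assuming a Bourne-style shell and a df that accepts -m (both GNU and BSD df do); the alias makes megabytes the effective default for interactive shells:

        df -m                 # one-off: report in megabyte blocks
        alias df='df -m'      # put this in ~/.bashrc (or ~/.profile) to make it the default
        # with GNU coreutils, an environment variable works too:
        export BLOCK_SIZE=M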

    Read the article

  • How to set PATH variable on Mac OS so that even non-terminal apps see it?

    - by dehmann
    I need to add a directory to my PATH variable on Mac OS. I added it in .bash_profile and .profile, and that works for the terminal. But Emacs (http://emacsformacosx.com) still does not use the new PATH variable. (I'm trying to run latex from emacs, but it's not finding the command in my /usr/local/bin, which I'm trying to add to the PATH ...) I even logged out and back in, but still no luck. Any suggestions?
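
    A hedged sketch of one approach: GUI apps launched from the Finder never read shell dotfiles, so the variable has to be set at the session level. On recent OS X releases launchctl can do this (older releases used ~/.MacOSX/environment.plist instead):

        launchctl setenv PATH "/usr/local/bin:$PATH"   # $PATH expands in the invoking shell
        # relaunch Emacs afterwards so it inherits the new environment

    Within Emacs itself, the exec-path-from-shell package is another commonly used workaround.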

    Read the article

  • What is the best graphical terminal/console for Linux?

    - by bgy
    Well, I'm often tired of the basic functionality of the terminal provided as-is when installing a new distribution. What is the best console in graphical mode? For now, all I want is: tab management; easy copy/paste (^C/^V support); UTF-8 support; availability in both KDE and Gnome environments. Please give reasons rather than just naming your favorite: tell me why, and which features it offers.

    Read the article

  • How to execute a multi-line configure command using the Mac OS X terminal?

    - by skiabox
    I am reading a nice article about upgrading PHP on Mac OS X Mountain Lion. In the Install part of the document the author says that the user must execute a multi-line configure command. What is the easiest way to do this using the Mac OS terminal? Thank you. PS: Yesterday I executed the configure command without the parameters. Could that cause any problems when re-executing the configure command (with parameters this time)?
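
    A minimal sketch: a trailing backslash continues the command onto the next line, so a multi-line configure can be typed, or pasted, exactly as the article prints it (the flags below are illustrative, not the article's actual options):

        ./configure \
            --prefix=/usr/local/php5 \
            --with-config-file-path=/usr/local/php5/etc

    Re-running configure with different flags is normally harmless; if the earlier run generated a Makefile, running make distclean first resets it.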

    Read the article

  • SQL SERVER – Mirroring Configured Without Domain – The server network address TCP://SQLServerName:50

    - by pinaldave
    Regular readers of my blog will remember my friend who called me a few days ago with a very funny SQL problem: SQL SERVER – SSMS Query Command(s) completed successfully without ANY Results. This time it did not take long before he called me up with another interesting problem. The issue he was facing was not that interesting, and also very specific to him; however, he insisted that I share it with all of you.

    Let us understand his situation first. My friend is preparing for the DBA exam Exam 70-450: PRO: Designing, Optimizing and Maintaining a Database Server Infrastructure using Microsoft SQL Server 2008, and for that he was trying to set up mirroring on his laptop. He had installed two different instances of SQL Server on his computer, and every time he started the mirroring, it failed with a common error message:

    The server network address "TCP://SQLServer:5023" cannot be reached or does not exist. Check the network address name and that the ports for the local and remote endpoints are operational. (Microsoft SQL Server, Error: 1418)

    Before he contacted me, he searched online and checked my article written on this mirroring error. He tried all four suggestions, but they did not solve his problem. He called me at a reasonable hour of the late evening (unlike last time, which was midnight!). I tried all seven different suggestions previously proposed in my article myself; however, none of them worked.

    While looking closely at the services, I noticed something very simple: he was running all the instances as 'Network Service'. In fact, his computer was a stand-alone computer. There was no network at all, and no domain or any other advanced network configuration. I changed the service account from 'Network Service' to 'Local System', as his SQL Server was running on his local system and there were no network services involved. This required restarting the services. As this was his development machine and not a production server, we restarted the services on the laptop (do not restart services on a production server without proper planning). After changing the services' 'log on' account to Local System, his attempt to reconfigure the mirroring worked right away.

    Since production servers usually have proper domains configured and advanced network concepts implemented, I had never faced this type of problem before. My friend insisted that I post the solution to his situation, where no domain was configured and setting up mirroring threw an error. According to him, this is bound to help people who, like him, are preparing for certification on a single system.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Certifications, SQL Mirroring

    Read the article

  • Python and Ruby in Tuxedo

    - by Maurice Gamanho
    With the release of SALT 11gR1, you can now develop Python/Ruby services and applications on the Oracle Tuxedo platform. Python functions or Ruby classes can be invoked as Tuxedo services by other Tuxedo services or clients, and, in addition, Python/Ruby applications can invoke existing Tuxedo services. SALT 11gR1 combines the proven scalability, reliability and performance of the Tuxedo runtime infrastructure with the agility provided by these dynamic scripting languages, providing a highly available and almost linearly scalable platform for Python and Ruby application development. Another benefit of developing Python and Ruby applications with Tuxedo is that services are SOA-enabled from inception, by virtue of Tuxedo's comprehensive integration options with J2EE app servers, mainframe applications, Web services, etc. Other interesting features are dynamic re-loading of scripts, where script changes are picked up automatically or when the administrator decides, and server-side typing, where Python functions and Ruby classes are given interfaces by way of the Tuxedo Metadata Repository. More information can be found on the Oracle SALT 11gR1 documentation page. See also SCA Python and Ruby Programming and Python and Ruby Data Type Mapping.

    Read the article

  • AspNetCompatibility in WCF Services – easy to trip up

    - by Rick Strahl
    This isn’t the first time I’ve hit this particular wall: I’m creating a WCF REST service for AJAX callbacks and using the WebScriptServiceHostFactory host factory in the service:

        <%@ ServiceHost Language="C#" Service="WcfAjax.BasicWcfService"
            CodeBehind="BasicWcfService.cs"
            Factory="System.ServiceModel.Activation.WebScriptServiceHostFactory" %>

    to avoid all configuration. Because of the Factory that creates the ASP.NET AJAX compatible format via the custom factory implementation, I can then remove all of the configuration settings that typically get dumped into the web.config file. However, I do want ASP.NET compatibility, so I still leave in:

        <system.serviceModel>
            <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>
        </system.serviceModel>

    in the web.config file. This option gives you access to the HttpContext.Current object and thereby to most of the standard ASP.NET request and response features. This is not recommended as a primary practice, but it can be useful in some scenarios and in backwards compatibility scenarios with ASP.NET AJAX Web Services.

    Now, here’s where things get funky. Assuming you have the setting in web.config, if you now declare a service like this:

        [ServiceContract(Namespace = "DevConnections")]
        #if DEBUG
        [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
        #endif
        public class BasicWcfService

    (or by using an interface that defines the service contract) you’ll find that the service will not work when an AJAX call is made against it. You’ll get a 500 error and a System.ServiceModel.ServiceActivationException system error. Worse, even with IncludeExceptionDetailInFaults enabled, you get absolutely no indication from WCF what the problem is.

    So what’s the problem? The issue is that once you specify aspNetCompatibilityEnabled="true" in the configuration, you *have to* specify the AspNetCompatibilityRequirements attribute with one of the modes that enables, or at least allows for, it. You need either Required or Allowed:

        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]

    Without it the service will simply fail without further warning. It will also fail if you set the attribute value to NotAllowed. The following also causes the service to fail as above:

        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.NotAllowed)]

    This is not totally unreasonable, but it’s a difficult issue to debug, especially since the configuration setting is global: if you have more than one service and one requires traditional ASP.NET access and one doesn’t, then both must have the attribute specified. This is one reason why you’d want to avoid using this functionality unless absolutely necessary. WCF REST provides some basic access to some of the HTTP features after all, although what’s there is severely limited. I also wish that ServiceActivation errors would provide more error information. Getting an activation error without further info on what actually is wrong is pretty worthless, especially when it is a technicality like a mismatched configuration/attribute setting like this.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, WCF, AJAX

    Read the article

  • ROA on top of SOA

    - by Vaibhav Pujari
    I already have a stable Service Oriented Architecture for my application which exposes services as API calls (the verbs). Now I need to build a Resource Oriented Architecture to expose a RESTful API for interacting with the application objects (the nouns). What are the best practices for reusing the existing services:
    - without any persistence inside my new code;
    - without putting unnecessary logic into the REST layer, i.e. it should ideally just leverage the services provided by the SOA API (I want this layer to be as thin as possible);
    - without modifying the existing SOA API;
    - while allowing easy extension of the REST API, i.e. it should be easy to add more resources without changing the (yet to be written) core code. (I want to make resource names and their associated actions configurable so more contributors can easily add resources without needing to understand my module.)
    Any advice or suggestions on how to achieve this?

    Edit: adding more info.

    My stack: my existing stack is in Java, but since I plan to just use the services, I don't think that should affect the design of the new REST code. I am planning to implement the new REST code in PHP.

    How well do the services map to resources? Some services map well, i.e. there are services for creating and updating application objects. But for other application objects there are no direct services available. More importantly, there are actions beyond just create, update, etc. that apply to application objects, and I would like to provide some way for these actions to be exposed through REST. Since these are verbs, how do I deal with them?

    Where exactly do I need help? I would appreciate any help towards a high-level design that accomplishes the task while keeping the framework extensible. For instance, if tomorrow some new services are added to my SOA layer, I want a fresh developer to be able to expose them over REST by simply registering a new resource (in a config file/db) and writing the code that connects it to the SOA calls, just like a plugin.

    Read the article

  • How can I set a per-terminal bell with xterms?

    - by Owen Maresh
    I'm using Natty, with the classic desktop and raw xterms (the latest build, 270, in fact). I've done:
    xset b 100
    pactl upload-sample /usr/share/sounds/ubuntu/stereo/message.ogg bell.ogg
    But I want something more fine-grained than this: I want to say "if the bell originated in one particular pseudoterminal, make a particular sound, but if it originated in some other pseudoterminal, generate some other sound".

    Read the article

  • New e learning course on Business Intelligence

    - by simonsabin
    I just got this from fellow SQL MVP Chris Testa O'Neil: "I am pleased to announce the release of the Author Model eCourseCollection 6233 AE: Implementing and Maintaining Business Intelligence in Microsoft® SQL Server® 2008: Integration Services, Reporting Services and Analysis Services. This 24-hour collection provides you with the skills and knowledge required for implementing and maintaining business intelligence solutions on SQL Server 2008. You will learn about the SQL Server technologies, such as Integration Services, Analysis Services, and Reporting Services. This collection also helps students prepare for Exam 70-448 and can be accessed from: http://www.microsoft.com/learning/elearning/course/6233.mspx"

    Read the article

  • Why are all logins disabled in the virtual terminal after enabling the root account from the desktop?

    - by Mitch
    Just for testing purposes, I went ahead and enabled the root account, using the commands below:
    sudo passwd root
    [sudo] password for abed:
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    Once that was done, I pressed Ctrl+Alt+F1 to get to the first virtual console, and at the login prompt I tried logging in as abed, su and root; every attempt comes back with "login incorrect". Why is that happening, and how can I fix it?

    Read the article

  • Windows Azure Training Kit October 2012 Release

    - by Clint Edmonson
    The Windows Azure Technical Evangelism team have been busy bees lately, and we want to share with you what they've been working on. As you know, we release the Windows Azure Training Kit on a regular cadence, so I'm pleased to announce the Windows Azure Training Kit October 2012 Release. This update of the training kit includes 47 hands-on labs, 24 demos and 38 presentations designed to help you learn how to build applications that use Windows Azure services, including hands-on labs updated for the latest versions of Visual Studio 2012 and Windows 8, plus new demos and presentations.

    Essential Links: Windows Azure Training Kit Download; Windows Azure Training Kit GitHub [Issues]

    Updated Presentations With Speaker Notes. Your voices were heard loud and clear! I am excited to announce that speaker notes have been added to the majority of the content we have available. The updated decks which contain speaker notes are:
    Foundation: SQL Federation, Virtual Machine Overview, Virtual Networks, Windows 8 and Windows Azure Web Sites, Windows Azure Cloud Services, Windows Azure Overview, Windows Azure Service Bus, Deploying Active Directory, Building Apps With IaaS and PaaS, Identity and Access Control, Linux Virtual Machines, Managing Virtual Machines PowerShell, Migrating Apps and Workloads, Scalable Global and Highly Available Apps, Security and Identity, SQL Database, SQL Database Migration, Cloud Service Life Cycle
    DevCamps: Cloud Services, iOS, Android and Windows Azure, Windows 8 and Windows Azure Web Sites, Windows 8 and Windows Azure Mobile Services

    Added Localized Content. Due to the excitement in the community surrounding the Mobile Services launch, it was apparent that we needed to make localized content available to continue to deliver the exciting message around Windows Azure Mobile Services. Localized content is available in the following languages: French, Japanese, German, Chinese (Taiwan), Spanish, Italian, Korean, Portuguese (Brazilian) and Russian.

    Updated Hands-On Labs. To support those who have upgraded to Visual Studio 2012, or those trying out the Visual Studio 2012 Express editions, we have made sure that the content is available and supported (selected labs only) in Visual Studio 2012 Express and up: Windows Azure Traffic Manager, Introduction to Cloud Services, Service Bus Messaging, Introduction to Access Control Service. This adds a significant amount of additional content, so we have revamped the Hands-On Lab Navigation page to include subsections for Visual Studio 2012 Labs, Visual Studio 2010 Labs, Open Source Labs, Scenario Labs and All Labs.

    Added Demos. Demos are available for a number of presentations in the Foundation, DevCamp, ITPro Event and Device + Service DevCamps, and you can browse through them on the respective Demo Navigation page or on GitHub (links provided in the demo listing): HelloASP, Connecting Cloud Services, Service Bus Relay, Windows 8 and Mobile Services, URL Shortener iOS Client, Migrating a Web Farm, Deploying Active Directory, URL Shortener Service (PHP), Geo-Location Service (PHP), Geo-Location Android Client, Getting Started with VMs, Load Balancing, Availability, Deploying Hybrid Apps, Migrate VM, AppController, Geo-Location iOS Client, Scale Up/Down, Using CSUpload, URL Shortener Android Client, Imaging Virtual Machines.

    The Windows Azure Training Kit is open source and available on GitHub, enabling you in the community to report issues, or to fork and either extend the solution or commit bug fixes back to the Training Kit. You can find out more details about the training kit on our GitHub page, including guidelines on how to commit back to the project. Stay tuned to my Twitter feed for Windows Azure and other Microsoft announcements, updates, and links: @clinted

    Read the article

  • 12.04 boots into terminal after first install. How do I boot into the GUI permanently?

    - by Deniz
    As a person with quite limited CLI experience, I congratulate myself on installing Ubuntu on an ancient non-PAE Fujitsu Amilo M1425 through the network with mini.iso. However, upon reboot I'm met with the following:
    Ubuntu 12.04.1 LTS ubuntu-fujitsu tty1
    ubuntu-fujitsu login:
    The login I specified during setup is not accepted (I'm quite sure it's correct). Assuming this screen can be passed, how do I start the GUI and make it the permanent option during boot? This box will go back to a mostly computer-illiterate person, for whom the existence of Ubuntu will be enough of a shock already; I wouldn't want to leave him without a GUI. Other posts here mention the command startx, but I probably need to log in first. So my question is: why won't it accept my login, and how can I make GUI boot permanent? Thanks in advance.
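
    A hedged sketch of the usual route once the login works: the mini.iso installs no desktop at all, so a desktop environment and a display manager have to be installed, after which the machine boots into the GUI by itself (package names assume Ubuntu 12.04; a lighter desktop such as lubuntu-desktop may suit an old non-PAE machine better):

        sudo apt-get update
        sudo apt-get install ubuntu-desktop   # pulls in Unity and the lightdm display manager
        sudo service lightdm start            # start the GUI immediately
        # lightdm then starts automatically on every subsequent boot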

    Read the article

  • Windows Azure Tools for Microsoft Visual Studio 1.2 (June 2010)

    - by Eric Nelson
    Yey – we have a public release of the Windows Azure Tools which fully supports Visual Studio 2010 RTM and the .NET 4 Framework. And the biggy I have been waiting for: IntelliTrace support to debug your cloud-deployed services (requires VS2010 Ultimate). Download today: http://bit.ly/azuretoolsjune
    New for version 1.2:
    - Visual Studio 2010 RTM Support: Full support for Visual Studio 2010 RTM.
    - .NET 4 support: Choose to build services targeting either the .NET 3.5 or .NET 4 framework.
    - Cloud storage explorer: Displays a read-only view of Windows Azure tables and blob containers through Server Explorer.
    - Integrated deployment: Deploy services directly from Visual Studio by selecting 'Publish' from Solution Explorer.
    - Service monitoring: Keep track of the state of your services through the 'compute' node in Server Explorer.
    - IntelliTrace support for services running in the cloud: Adds support for debugging services in the cloud by using the Visual Studio 2010 IntelliTrace feature. This is enabled by using the deployment feature, and logs are retrieved through Server Explorer.
    Related Links: http://ukazure.ning.com for UK fans of Windows Azure; IntelliTrace explained

    Read the article

  • Easing the Journey to the Private Cloud with Oracle Consulting

    - by MichaelM-Oracle
    By Sanjai Marimadaiah, Senior Director, Strategy & Business Development – Cloud Solutions, Oracle Consulting Services

    Business leaders are now leading the charge on how their firms can profit from cloud solutions. Agility and innovation are becoming the primary drivers of the business case for the cloud, even more than the anticipated cost savings. Leaders need to find the right strategy and optimize the use of cloud-based applications across their enterprise-computing infrastructure.

    The Problem – Current State
    With prevalent IT practices, many organizations find that they run multiple IT solutions serving similar business needs. This has led to the proliferation of technology stacks, for example: Oracle 10g on Sun T4 running Solaris 9; Oracle 11g on Exadata running Linux; or Oracle 12c on commodity x86 servers. This variance has a huge impact on an organization's agility and expenses, and requires IT professionals with varied skills as well as ongoing training for different systems and tools. Fortunately there is a practical business strategy to overcome this unneeded redundancy. Thus begins a journey to the right cloud computing solution.

    The Solution – Cloud Services from Oracle Consulting Services (OCS)
    Oracle Consulting Services (OCS) works closely with our clients as trusted advisors to proactively respond to business needs and IT concerns. OCS understands that making the transition to cloud solutions begins with a strategic conversation, based on its deep expertise in successfully completing private cloud service engagements with several companies. For a journey to the cloud, Oracle Consulting Services leads the client through four phases, standardization, consolidation, service delivery, and enterprise cloud, to achieve optimal returns.

    Phase 1 – Standardization
    OCS works with clients to evaluate their business requirements and propose a set of standard solution stacks for various IT solutions. This is an opportune time to evaluate cloud-ready solutions, such as Oracle 12c, Oracle Exadata, and the Oracle Database Appliance (ODA). The OCS consultants, together with the delivery team, then turn to upgrading and migrating existing solution stacks to standardized offerings. OCS has the expertise and tools to complete this stage in a fraction of the time required by other IT services companies. Clients quickly realize cost savings in tools, processes, and the type and number of resources required. This standardization also improves the agility of the IT organizations and their ability to respond to the needs of various business units.

    Phase 2 – Consolidation
    During the consolidation phase, OCS consultants programmatically consolidate hundreds of databases onto a smaller number of servers to improve utilization, reduce floor space, and optimize maintenance costs. Consolidation helps clients realize huge savings in CapEx investments and shrink OpEx costs. The use of engineered systems, such as Oracle Exadata, greatly reduces the client's risk of moving to a new solution stack. OCS recommends that clients pursue Phase 1 (Standardization) and Phase 2 (Consolidation) simultaneously to reduce the overall time, effort, and expense of the cloud journey.

    Phase 3 – Service Delivery
    Once a client is on the path of standardization and consolidation, OCS consultants create service catalogues based on the SLA requirements and the criticality of the solutions. The number and types of service catalogues (Platinum, Gold, Silver, Bronze, etc.) vary from client to client. OCS consultants also implement a variety of value-added cloud solutions, including monitoring, metering, and charge-back solutions. At this stage, clients achieve a high level of understanding of their cloud journey; their IT organizations operate efficiently and are more agile in responding to the needs of business units.

    Phase 4 – Enterprise Cloud
    In the final phase of the cloud journey, the economics of the IT organization change. Business units can request services on demand; applications can be deployed and consumed on a pay-as-you-go model. OCS has the expertise and capabilities to establish the processes, programs, and solutions required for IT organizations to transform how they interact with business units.

    The Promise of Cloud Solutions
    Depending on the size and complexity of their business model, some clients are able to abbreviate some phases of their cloud journey. Cloud solutions are still evolving, and there is a rapid pace of innovation transforming how IT organizations operate. The lesson is clear: cloud solutions hold a lot of promise for business agility. Business leaders can now leverage an additional set of capabilities and services, ramp up their pace of innovation, and, with cloud maturity, compete more effectively in their respective markets. But there are certainly challenges ahead. A skilled consulting services partner can play a pivotal role as a trusted advisor in the successful adoption of cloud solutions. Oracle Consulting Services has the expertise and a portfolio of services to help clients succeed on their journey to the cloud.

    Read the article

  • Why can't I upgrade my kernel via the terminal?

    - by Alvar
    If I type sudo apt-get update && sudo apt-get upgrade, I can only see that the kernel packages are "kept back" and not installed, as the screenshot shows. If I then start the Update Manager, I can install the kernel with no problems at all, as the second screenshot shows. Why is this? The answer: the new kernel is a new package, not an upgrade of an existing one, which is why the upgrade command, which only upgrades existing packages, holds it back. You need the dist-upgrade command to install new packages.
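
    A minimal sketch of the terminal-only route (the metapackage names are the usual Ubuntu ones):

        sudo apt-get update
        sudo apt-get dist-upgrade        # may install new packages, e.g. new kernels
        # or pull in the held-back kernel explicitly:
        sudo apt-get install linux-generic linux-headers-generic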

    Read the article

  • App installed in ~/usr launches from the terminal but not from the Applications menu (or why does setting LD_LIBRARY_PATH in .profile not work as it should)

    - by levesque
    I have built and installed an application under a directory of my choosing, let's say under /home/jim/usr, so files have been put in three or four folders, all under this $HOME/usr folder (e.g., bin, include, lib, share, etc.). I can launch this application from the command line just fine, as I added the proper paths to my environment variables PATH and LD_LIBRARY_PATH in ~/.bashrc. I added the same paths to the ~/.profile file, which, if I'm not mistaken, is supposed to be parsed by Ubuntu. Doesn't work. Nothing. Where can I go from there?

    EDIT: I logged out/in and restarted my computer. Neither changed a thing. The problem seems to come from the fact that, no matter what I do, the LD_LIBRARY_PATH environment variable is not properly passed on by Ubuntu. Using log files, I found that the application I'm trying to run in this example doesn't find one of its dependencies, located in ~/usr/lib. One solution would be to add the /home/jim/usr/lib folder to a file under /etc/ld.so.conf.d/, but I don't have admin rights on this machine. Making a wrapper script like this one works:
    #!/bin/bash
    export LD_LIBRARY_PATH=$HLOC/usr/lib
    application &> $HOME/application_messages.log
    but that would force me to wrap all my home-compiled applications with this script. Any ideas? Why does Ubuntu/Gnome remove the LD_LIBRARY_PATH environment variable from my set variables? Is it because trying to do this is bad practice?

    UPDATE (and solution): As found by Christopher, there is a bug report about this on Launchpad. LD_LIBRARY_PATH is unset after parsing of the ~/.profile file. See the bug report. It seems the only solution for now is to make a wrapper script.
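
    A hedged aside: an alternative that sidesteps LD_LIBRARY_PATH (and the wrapper script) entirely is to bake the search path into the binary as an rpath. This is a sketch assuming a gcc/ld toolchain; the object and library names are illustrative:

        # at link time:
        gcc -o myapp myapp.o -L$HOME/usr/lib -lmydep -Wl,-rpath,$HOME/usr/lib
        # or on an already-built binary, if patchelf is available:
        patchelf --set-rpath $HOME/usr/lib myapp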

    Read the article

  • Session Sharing with another User on *NIX and Windows

    - by Giri Mandalika
    Oracle Solaris

    Since Solaris is not widely known for its graphical interface, let's just focus on sharing a terminal session in read-only mode with another user on the same system. Here is an example:

    % finger
    Login     Name        TTY    Idle  When       Where
    root      Super-User  pts/1        Sat 16:57  dhcp-amer-vpn-rmdc-a
    sunperf   ???         pts/2  4     Sat 16:41  pitcher.sfbay.sun.com

    In this example, two users, root and sunperf, are connected to the same system from two different terminals, pts/1 and pts/2 respectively. If the root user wants to show something to the sunperf user, what s/he is doing in her/his terminal, for example, it can be accomplished with the following command:

    script -a /dev/null | tee -a <target_terminal>

    e.g.,

    # script -a /dev/null | tee -a /dev/pts/2
    Script started, file is /dev/null
    #
    # uptime
    5:04pm up 1 day(s), 2:56, 2 users, load average: 0.81, 0.81, 0.81
    #
    # isainfo -v
    64-bit sparcv9 applications
        crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc
    32-bit sparc applications
        crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc v8plus div32 mul32
    #
    # exit
    Script done, file is /dev/null

    After the script .. | tee .. command, the sunperf user should be able to see the root user's stdin and stdout contents in her/his own terminal, until the script session exits in the root user's terminal. Since this kind of sharing is based on capturing and redirecting the contents to the target terminal, the users on the receiving end won't be able to see whatever is being edited on the initiator's terminal [using editors such as vi]. Also, it is not possible to share the session with any connected user on the system unless the initiator has the necessary permissions and privileges. The script utility records everything printed in a terminal session, while the tee utility replicates the contents of the screen capture onto the standard output of the target terminal. The tee utility does not buffer the output, so the screen capture from the initiator's terminal appears almost right away in the target terminal. Though I never tested it, this technique may work on all *NIX and Linux flavors with little or no change. There might also be other ways to accomplish this. [Thanks to Sujeet for sharing this tip]

    Microsoft Windows

    Most Windows users may rely on VNC services to share a desktop session. Another way to share the desktop session is to use the Remote Desktop Connection (RDC) client. Here are the steps.
    1. Connect to the target Windows system using the Remote Desktop Connection client
    2. Launch Windows Task Manager
    3. Navigate to the "Users" tab
    4. Find the user session that you want to connect to and have full control over, as the other user who is currently holding that session
    5. Select the user name in Windows Task Manager, right-click and choose the option "Remote Control"
    6. A window pops up in the other user's session with the message "<USER> is requesting to control your session remotely. Do you accept the request?"
    Once the other user says "Yes", you will be granted access to that session. From then on, both users should be able to see the same screen and even control the session from their respective workstations.
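
    A hedged Linux variant of the same idea: instead of writing to another user's tty, the util-linux script command can stream to a file in flush mode, which the second user then follows (this assumes the viewer has read access to the file):

        # in the sharing user's terminal:
        script -f /tmp/shared_session.log
        # in the viewing user's terminal:
        tail -f /tmp/shared_session.log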

    Read the article

  • How can I create a zip archive of a whole directory via terminal without hidden files?

    - by moose
    I have a project with lots of hidden folders and files in it. I want to create a zip archive of it, but the archive shouldn't contain any hidden folders or files. If files inside a hidden folder are themselves not hidden, they should also not be included. I know that I can create a zip archive of a directory like this:
    zip -r zipfile.zip directory
    I also know that I can exclude files with the -x option, so I thought this might work:
    zip -r zipfile.zip directory -x .*
    It didn't work. All hidden directories were still in the zip file.
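
    A hedged sketch of the usual fix: quote the patterns so the shell does not expand them first, and add a second pattern so hidden entries below the top level are excluded too:

        zip -r zipfile.zip directory -x ".*" -x "*/.*"

    With Info-ZIP's default wildcard handling, * also spans directory separators, so "*/.*" catches hidden files and folders (and everything beneath them) at any depth.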

    Read the article

  • Why do I see the following errors while updating in the terminal?

    - by Harsh
    Fetched 1,103 kB in 1min 2s (17.6 kB/s)
    Reading package lists... Done
    W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://extras.ubuntu.com precise Release: The following signatures were invalid: BADSIG 16126D3A3E5C1192 Ubuntu Extras Archive Automatic Signing Key
    W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/precise/Release
    W: Some index files failed to download. They have been ignored, or old ones used instead.
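
    A hedged sketch of the usual remedy for BADSIG errors: discard the cached index files and fetch fresh ones, then re-import the key if needed (the key ID below is the one reported in the error above):

        sudo rm -rf /var/lib/apt/lists/*
        sudo apt-get update
        # if the error persists, refresh the signing key:
        sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 16126D3A3E5C1192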

    Read the article

  • How do I throttle a command in a terminal window?

    - by To Do
    I needed to run convert with a lot of images at the same time. The command took quite a while, but that doesn't bother me. The issue is that the command rendered my computer unusable while it was running (for about 15 minutes). So, is it possible to throttle the command by limiting its resources (processor and memory) directly from the command line? This can only work if I add something to the same line before pressing Enter, because once I start the process the computer slows so much that it is impossible, for example, to switch to System Monitor and reduce its priority.

    Edit: top and iotop results. I managed to run top and sudo iotop > iotop.txt while doing one of these convert operations. (The iotop.txt file produced is full of terminal escape codes; the readable parts are reproduced below.)

    Results of top:
      PID USER      PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
    14275 username  20  0 4043m 3.0g 1448 D  7.0 80.4  0:16.45 convert

    Results of iotop:
    Total DISK READ: 1269.04 K/s | Total DISK WRITE: 0.00 B/s
      TID PRIO  USER      DISK READ   DISK WRITE  SWAPIN    IO     COMMAND
     2516 be/4  username  350.08 K/s  0.00 B/s    0.00 %   0.00 %  zeitgeist-datahub
     7394 be/4  username  568.88 K/s  0.00 B/s   77.41 %   0.00 %  --rendere~.530483991
    14275 idle  username  350.08 K/s  0.00 B/s   37.49 %   0.00 %  convert S~f test.pdf
     2048 be/4  root        0.00 B/s  0.00 B/s    0.00 %   0.00 %  [kworker/3:2]
        1 be/4  root        0.00 B/s  0.00 B/s    0.00 %   0.00 %  init

    Furthermore, even after the process ends, the computer does not return to its previous performance. I found a way around this by running sudo swapoff -a followed by sudo swapon -a.
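
    A hedged sketch of the usual knobs, all typed on the same line before Enter (nice lowers CPU priority, ionice -c3 puts disk I/O in the idle class; filenames are illustrative):

        nice -n 19 ionice -c3 convert input-*.png output.pdf
        # ImageMagick can also cap its own memory use via its -limit option:
        nice -n 19 convert -limit memory 1GiB -limit map 2GiB input-*.png output.pdf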

    Read the article
