Search Results

Search found 6301 results on 253 pages for 'dos commands'.


  • Multiple variables for the same command

    - by Lowzenza
    I'm new to cmd and was wondering if there's an easier way of retyping a variable in a command. For example, I have to run two commands for a set of 96 files, and each time I would hit the up arrow key, get my old commands back, and change a variable from 1 to 2, then 2 to 3, and so forth. i.e.: Desktop\InitialProcess_230 Process230input.fasta -output Process230.fasta Then for the next file, which would be InitialProcess_231 and so on, I would change that in the command by scrolling along, removing the 0 and putting in a 1. Doing that for almost 100 files seems like a hassle.
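    A counted loop removes the retyping entirely. A minimal sketch using cmd's for /L, assuming the files are numbered consecutively (the range 230-325 below is illustrative; adjust it to your own set, and double the % signs, i.e. %%i, if you run this from a .bat file):

    for /L %i in (230,1,325) do Desktop\InitialProcess_%i Process%iinput.fasta -output Process%i.fasta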

    Read the article

  • How to run script from root as another user (with user PATH)

    - by Sandra
    I would like to have these commands run as the ss user from root:

    mkdir bin
    cp -r /opt/gitolite .
    gitolite/install -ln
    gitolite setup -pk ss.pub
    mkdir -p .gitolite/hooks/common
    ln -s /opt/pre-receive .gitolite/hooks/common/

    so everything is executed in /home/ss. The 4th line requires $HOME/bin, as you can see from the 3rd line. The only way I can get it to work is by adding su -c "command" ss to each line, which is an ugly hack. This is an extension to my previous question, where I wasn't precise enough. Question: How do I run all these commands as a script in a practical way?
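    One practical pattern (a sketch, not from the original question) is to feed the whole block to a login shell for ss via a here-document, so it runs once in /home/ss with ss's own environment:

    su - ss <<'EOF'
    export PATH="$HOME/bin:$PATH"   # bin is created below, so put it on PATH explicitly
    mkdir bin
    cp -r /opt/gitolite .
    gitolite/install -ln
    gitolite setup -pk ss.pub
    mkdir -p .gitolite/hooks/common
    ln -s /opt/pre-receive .gitolite/hooks/common/
    EOF

    The quoted 'EOF' keeps root's shell from expanding $HOME prematurely; everything between the markers is executed by ss's login shell.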

    Read the article

  • Xubuntu 12.04: Slow Login

    - by naxchange
    I'm running Xubuntu 12.04, and after I ran the update my login started slowing down big time. I've dabbled a little with the programs in Settings - Settings Manager - Session and Startup, tried closing and opening these programs, and then rebooting. Anyway, I narrowed it down to a combination of 2 processes, "Network" (the connection manager) and "Xfce Volume Daemon". Their corresponding terminal commands are:

    ~$ nm-applet
    ~$ xfce4-volumed

    If I disable them and then run them after login is complete, everything works just fine. Is there a way for me to run these commands automatically at startup? I want to maybe write a shell script and save it somewhere, or at least edit the order in which things happen after login, so the desktop can load before these processes. Hope I've been clear enough, and thanks in advance! NAX
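    One way to do exactly that (a sketch; the 10-second delay is an arbitrary guess, tune it to your machine) is a small wrapper script that lets the desktop finish loading before starting the two daemons:

    #!/bin/sh
    # ~/bin/delayed-startup.sh: start the slow services once the desktop is up
    sleep 10
    nm-applet &
    xfce4-volumed &

    Make it executable with chmod +x ~/bin/delayed-startup.sh, remove the original Network and Volume Daemon entries from Session and Startup, and add the script as a new entry under the Application Autostart tab.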

    Read the article

  • How do I fix "bzr: ERROR: Unable to determine your name. "?

    - by Daniel
    I am trying to quickly create my first app and am getting gtk errors when I try to run or create an application. Here is a copy of what I executed and what results I got:

    daniel@laptop:~/PyDevelopment$ quickly create ubuntu-application app001
    Creating project directory app001
    Creating bzr repository and committing
    Launching your newly created project!
    /usr/lib/python2.7/dist-packages/gi/overrides/Gtk.py:391: Warning: g_object_set_property: construct property "type" for object `Window' can't be set after construction
      Gtk.Window.__init__(self, type=type, **kwds)
    /usr/lib/python2.7/dist-packages/gi/overrides/Gtk.py:391: Warning: g_object_set_property: construct property "type" for object `App001Window' can't be set after construction
      Gtk.Window.__init__(self, type=type, **kwds)
    Congrats, your new project is setup! cd /home/daniel/PyDevelopment/app001/ to start hacking.
    daniel@laptop:~/PyDevelopment$ cd app001
    daniel@laptop:~/PyDevelopment/app001$ quickly design
    daniel@laptop:~/PyDevelopment/app001$ quickly rub
    ERROR: No rub command found in template ubuntu-application. Candidate commands are: add, commands, configure, create, debug, design, edit, getstarted, help, license, package, quickly, release, run, save, share, submitubuntu, test, tutorial, upgrade
    daniel@laptop:~/PyDevelopment/app001$ quickly run
    /usr/lib/python2.7/dist-packages/gi/overrides/Gtk.py:391: Warning: g_object_set_property: construct property "type" for object `Window' can't be set after construction
      Gtk.Window.__init__(self, type=type, **kwds)
    /usr/lib/python2.7/dist-packages/gi/overrides/Gtk.py:391: Warning: g_object_set_property: construct property "type" for object `App001Window' can't be set after construction
      Gtk.Window.__init__(self, type=type, **kwds)
    daniel@laptop:~/PyDevelopment/app001$ quickly package
    .......Ubuntu packaging created in debian/
    .......
    ----------------------------------
    Command returned some ERRORS:
    ----------------------------------
    bzr: ERROR: Unable to determine your name.
    ----------------------------------
    ERROR: can't create or update ubuntu package
    ERROR: package command failed
    Aborting
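    The final error is the actual blocker: quickly package commits to the bzr repository, and bzr has no committer identity configured. The usual fix (substitute your own name and email address) is:

    bzr whoami "Daniel <daniel@example.com>"

    After that, quickly package should get past the "Unable to determine your name" error.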

    Read the article

  • Rename Applications and Virtual Directories in IIS7

    - by AngelEyes
    from http://lanitdev.wordpress.com/2010/09/02/rename-applications-and-virtual-directories-in-iis7/

    Rename Applications and Virtual Directories in IIS7
    September 2, 2010, by Brian Grinstead

    Have you ever wondered why the box to change the name or "Alias" on an application or virtual directory is greyed out? I found a way to change the name without recreating all your settings. It uses the built-in administration tool in IIS7, called appcmd.

    Renaming Applications In IIS7

    Open a command prompt and list all of your applications:

    C:\> %systemroot%\system32\inetsrv\appcmd list app

    APP "Default Web Site/OldApplicationName"
    APP "Default Web Site/AnotherApplication"

    Run a command like this to change the "OldApplicationName" path to "NewApplicationName"; afterwards you can use http://localhost/newapplicationname

    C:\> %systemroot%\system32\inetsrv\appcmd set app "Default Web Site/OldApplicationName" -path:/NewApplicationName

    APP object "Default Web Site/OldApplicationName" changed

    Renaming Virtual Directories In IIS7

    Open a command prompt and list all of your virtual directories:

    C:\> %systemroot%\system32\inetsrv\appcmd list vdir

    VDIR "Default Web Site/OldApplicationName/Images" (physicalPath:\\server\images)
    VDIR "Default Web Site/OldApplicationName/Data/Config" (physicalPath:\\server\config)

    We want to rename /Images to /Images2 and /Data/Config to /Data/Config2. Here are the example commands:

    C:\> %systemroot%\system32\inetsrv\appcmd set vdir "Default Web Site/OldApplicationName/Images" -path:/Images2

    VDIR object "Default Web Site/OldApplicationName/Images" changed

    C:\> %systemroot%\system32\inetsrv\appcmd set vdir "Default Web Site/OldApplicationName/Data/Config" -path:/Data/Config2

    VDIR object "Default Web Site/OldApplicationName/Data/Config" changed

    Read the article

  • Re-running SSRS subscription jobs that have failed

    - by Rob Farley
    Sometimes an SSRS subscription fails for some reason. It can be annoying, particularly as the appropriate response can be hard to see immediately. There may be a long list of jobs that failed one morning if a Mail Server is down, and trying to work out a way of running each one again can be painful. It's almost an argument for using shared schedules a lot, but the problem with this is that there are bound to be other things on that shared schedule that you wouldn't want to be re-run. Luckily, there's a table in the ReportServer database called dbo.Subscriptions, which is where the LastStatus of the Subscription is stored. Having found the subscriptions that you're interested in, finding the SQL Agent Jobs that correspond to them can be frustrating. Luckily, the job step's command text contains the subscriptionid, so it's possible to look them up based on that. And of course, once the jobs have been found, they can be executed easily enough. In this example, I produce a list of the commands to run the jobs. I can copy the results out and execute them.

    select 'exec sp_start_job @job_name = ''' + cast(j.name as varchar(40)) + ''''
    from msdb.dbo.sysjobs j
    join msdb.dbo.sysjobsteps js on js.job_id = j.job_id
    join [ReportServer].[dbo].[Subscriptions] s
      on js.command like '%' + cast(s.subscriptionid as varchar(40)) + '%'
    where s.LastStatus like 'Failure sending mail%';

    Another option could be to return the job step commands directly (js.command in this query), but my preference is to run the job that contains the step.

    Read the article

  • Windows Azure Evolution – Preview Developer Portal

    - by Shaun
    With the MEET Windows Azure event on 7th June came many new features and updates to the Windows Azure platform. In the coming posts I will try to cover some of them, and in this first post I would like to take a quick walkthrough of the new preview developer portal.

    History of the Developer Portal

    If you have been working with Windows Azure since 2009 or 2010, you should remember the first version of the developer portal. It was built in HTML with very limited features, and I still remember the impression it left: the layout was not that attractive and the functionality was very limited. In November 2010, along with the SDK 1.3 release, the developer portal took a big jump. To provide more usability and features it was rebuilt on Silverlight, so it ran like a desktop application with many windows, lists, commands and context menus. From 2010 till now many features were added to this portal, such as remote desktop, co-admin, virtual connect, VM role, etc., and the portal itself became more and more complicated. But using Silverlight brought some problems. The first is browser compatibility: as you know, the browsers on most mobile and tablet devices don't allow rich-content plugins such as Flash and Silverlight. This means people cannot open and configure their Azure services from their iPad, iPhone, Windows Phone, etc., even when all they need is to restart a hosted service or view the status of their databases. Another problem is performance: Silverlight provides a rich experience to the users, but also needs more bandwidth. So in this upgrade the preview developer portal goes back to HTML, with JavaScript, as a mobile-friendly, cross-browser, interactive web site.

    Preview Portal vs. Silverlight Portal

    Before I start talking about the new preview portal, I'd better highlight that this is a PREVIEW version. Even though it covers almost all the features of the old one, along with some cool new features I will mention in the coming posts, some things are still being developed and migrated, so sometimes you need to switch back to the old portal. For example, in the preview portal there is no co-admin management function, no remote desktop function, and the SQL database management function takes you back to the old SQL Azure Management Portal. But as Microsoft said, these missing features will be moved into the preview portal over the next few months. Since the public URL of the developer portal, https://windows.azure.com/, has been changed to point to this preview one, to get back you need to click the preview button on top of the page and then the "Take me to the previous portal" link.

    Overview

    There are four parts in the preview portal. At the top is the header, which shows the account you are currently logged in with. If you click on the header it shows the top menu of Windows Azure, from which you can navigate to the Windows Azure home page, the pricing information page, community and account, etc. The navigation bar is on the left-hand side, with the categories listed below.

    ALL ITEMS: All items in your Windows Azure account, including the web sites, services, databases, etc.
    WEB SITES: The web sites in your Windows Azure account. It only shows the web sites you have; the linked resources are shown if you drill down into a web site.
    VIRTUAL MACHINES: The virtual machines that you have deployed to Azure.
    CLOUD SERVICES: All Windows Azure hosted services in your account.
    SQL DATABASES: All SQL databases (SQL Azure) in your account.
    STORAGE: All Windows Azure storage services in your account.
    NETWORKS: The virtual networks (Windows Azure Connect) you have created.

    The available items are listed in the main part of the page based on which category is currently selected. If there's no item, it shows a quick-create link. At the bottom of the page there is the command and information bar; based on what is selected and what the user is doing, it shows the related information and commands. For example, when I was creating a new web site, the information bar told me that my web site was being provisioned, and there were two commands in the command bar. Once it was ready, the command bar showed some commands that I could apply to my new web site. "Web Sites" is a new feature introduced along with this upgrade. It gives us an easier and quicker way to establish a web site from scratch or from some existing library. I will cover it in more detail in the next post. Also, from the command bar you can create a service by clicking the NEW button, which slides the creation panel up.

    Where's My Hosted Services

    Windows Azure Hosted Services have been renamed to Cloud Services. Creating a new service is very easy: just click the NEW button at the bottom of the page and select CLOUD SERVICE and QUICK CREATE. This creates a blank hosted service without a deployment or certificate; it just needs you to specify the service URL and the affinity group/region. The service then shows up in the list. If you click the item, all its information is shown in the main part. Since no package has been deployed to this service yet, we currently cannot see any information about it, but we can upload a package using the command at the bottom. And as you can see, we can manage the configuration, instances and certificates, and we can scale the service up and down (change the VM size) and in and out (increase and decrease the instance count). Assuming I have created an ASP.NET MVC 3 web role project in Visual Studio and built the package, I can click the UPLOAD button on this page to deploy it. In the pop-up window I just specify my deployment name, package file and configuration file. I can also check the box below so that it will NOT warn me if this deployment has only one instance. Once we click the OK button, our package is uploaded and provisioned by the platform. After a while the information bar shows that the service is ready. We can see the basic information about this service and deployment if we go to the dashboard page: for example, the usage overview diagram, status, URL, public IP address, etc. On the configure page we can view and change the CSCFG content, such as the monitoring setting, connection strings and OS family. On the scale page we can increase and decrease the instance count, and on the instances page we can view the status of all instances. If your service uses SQL databases and storage accounts, they are shown on the linked resources page, and you can manage the certificates of this service under the certificates page.

    How About My Storage Services

    Storage services can be managed by clicking the STORAGE link in the navigation bar, and we can create a new storage service from the NEW button. After specifying the storage name and region, it is provisioned by the platform. If you want to copy or manage the storage keys, you can just click the Manage Keys button at the bottom, which is very easy. What I want to highlight here is that you can monitor your storage service by enabling the monitoring configuration: click the storage item in the list and navigate to the configure page. As you can see on the page, you can enable monitoring for blob, table and queue, and you can also enable logging of any requests that come to the storage. But as the tooltip on the page says, enabling monitoring and logging increases the usage of the storage, which means a bigger bill, so make sure you enable them appropriately.

    And My SQL Databases (SQL Azure)

    The last thing I want to quickly introduce is SQL databases, formerly named SQL Azure. You can create a new SQL Database server and a new database by clicking the ADD button under the SQL Databases navigation item. In the pop-up window just specify the database name, the edition, size, collation and the server. You can select an existing SQL Database server if you have one, or create a new one. If you choose to create a new server, there is another step: specifying the server login, password and region. Once it is ready you can manage your databases as well as the servers in the portal; for a particular server you can update the firewall settings on its Configure page.

    So, What Else

    There are some other areas of the preview portal I didn't cover, such as virtual machines, virtual networks and web sites. I will talk about virtual machines and web sites in future separate posts; regarding the virtual network, it is the Windows Azure Connect we are familiar with. But as I mentioned at the beginning of this post, the preview portal is still under development and some features are not available here. For example, you cannot manage the co-admins of your subscriptions, you cannot open remote desktop on your hosted services, and you cannot navigate directly to Windows Azure Service Bus, Access Control and Caching, formerly named Windows Azure AppFabric. In these cases you need to navigate back to the old portal, so in the coming months we might need to use both sites.

    Summary

    In this post I quickly introduced the new Windows Azure developer portal. Since things have been rearranged and renamed, I demonstrated some features that existed in the old portal, such as how to create and deploy a hosted service and how to provision a storage service and a SQL database. All features of the old portal have been, are being, or will be migrated into this new portal, but some of them live in a different category or page that we need to figure out.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • What's The Difference Between Imperative, Procedural and Structured Programming?

    - by daniels
    By researching around (books, Wikipedia, similar questions on SE, etc) I came to understand that Imperative programming is one of the major programming paradigms, where you describe a series of commands (or statements) for the computer to execute (so you pretty much order it to take specific actions, hence the name "imperative"). So far so good. Procedural programming, on the other hand, is a specific type (or subset) of Imperative programming, where you use procedures (i.e., functions) to describe the commands the computer should perform. First question: Is there an Imperative programming language which is not procedural? In other words, can you have Imperative programming without procedures? Update: This first question seems to be answered. A language CAN be imperative without being procedural or structured. An example is pure Assembly language. Then you also have Structured programming, which seems to be another type (or subset) of Imperative programming, which emerged to remove the reliance on the GOTO statement. Second question: What is the difference between procedural and structured programming? Can you have one without the other, and vice-versa? Can we say procedural programming is a subset of structured programming, as in the image?

    Read the article

  • ubuntu one not syncing

    - by Martin
    I am really starting to despair, as I have been trying Ubuntu One for several months on several machines, and it has caused me loads of different issues, wasting a lot of my time. It is not straightforward to use: it should be a piece of software that runs in the background, and users should not have to check all the time whether it is really doing its job. Of course I have been searching around this website and other forums but couldn't find an answer for my situation. Yesterday I had several problems with the client not syncing and using a lot of the machine's RAM and CPU. I had to reboot on several occasions and leave the office's PC on overnight in order to sync a few files of not more than a few MB. Today I am experiencing another problem: I have decided to do a test, putting a small file in my Ubuntu One shared folder. Ubuntu One is not detecting it (now already more than an hour), therefore not uploading it to the server.

    martin@ubuntu-desktop:~$ u1sdtool --status
    State: QUEUE_MANAGER
    connection: With User With Network
    description: processing the commands pool
    is_connected: True
    is_error: False
    is_online: True
    queues: IDLE

    and

    martin@ubuntu-desktop:~$ u1sdtool --current-transfers
    Current uploads: 0
    Current downloads: 0

    I am running Ubuntu 11.04 64-bit with all recent updates. On my other machine the transfer of files seems to be completely frozen, with around 10 files in the queue but no transfer whatsoever. Another curious issue is on my Ubuntu 10.10 laptop, where Ubuntu One seems to have completely disappeared from the Nautilus context menu, with the folder/file sync status icons missing; I have therefore been forced to upgrade to 11.04 on this machine. Anyway, now I would like to solve the "processing the commands pool" issue and make sure the client...

    Read the article

  • GPLv2 - Multiple AI chess engines to bypass GPL

    - by Dogbert
    I have gone through a number of GPL-related questions, the most recent being this one: http://stackoverflow.com/questions/3248823/legal-question-about-the-gpl-license-net-dlls/3249001#3249001 I'm trying to see how this would work, so bear with me. I have a simple GUI interface for a game of Chess. It essentially can send/receive commands to/from an external chess engine (ie: Tong, Fruit, etc). The application/GUI is similar in nature to XBoard ( http://www.gnu.org/software/xboard/ ), but was independently designed. After going through a number of threads on this topic, it seems that the FSF considers dynamically linking against a GPLv2 library as creating a derivative work, and that by doing so, the GPLv2 extends to my proprietary code and I must release the source to my entire project. Other legal precedents indicate the opposite: that dynamic linking doesn't cause the "viral" effect of the GPL to propagate to my proprietary code. Since there is no official consensus that can give a "hard-and-fast" answer to the dynamic linking question, would this be an acceptable alternative: I build my chess GUI so that it sends/receives the chess engine AI logic as text commands through an external interface library that I write. The interface library I wrote is itself then released under the GPL. The interface library is only used to communicate via a generic text pipe to external command-line chess engines. The chess engine itself would be built as a command-line utility rather than as a library of any sort, and just sends strings in the Universal Chess Interface or Chess Engine Communication Protocol ( http://en.wikipedia.org/wiki/Chess_Engine_Communication_Protocol ) format. The one "gotcha" is that the interface library should not be specific to one single GPL'ed chess engine, otherwise the entire GUI would be "entirely dependent" on it. So, I just make my interface library able to connect to any command-line chess engine that uses a specific format, rather than just one unique engine. I could then include pre-built command-line-app versions of any of the chess engines I'm using. Would that sort of approach allow me to do the following: NOT release the source for my UI; release the source of the interface library I built (if necessary); use one or more chess engines and bundle them as external command-line utilities that ship with a binary version of my UI. Thank you.

    Read the article

  • A Simple Approach For Presenting With Code Samples

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/07/31/a-simple-approach-for-presenting-with-code-samples.aspx

    I’ve been getting ready for a presentation and have been struggling a bit with the best way to show and execute code samples. I don’t present often (hardly ever), but when I do I like the presentation to have a lot of succinct and executable code snippets to help illustrate the points that I’m making. Depending on what the presentation is about, I might just want to build an entire sample application that I would run during the presentation. In other cases, however, building a full-blown application might not really be the best way to present the code. The presentation I’m working on now is for an open source utility library for dealing with dates and times. I could have probably cooked up a sample app for accepting date and time input and then contrived ways in which it could put the library through its paces, but I had trouble coming up with one app that would illustrate all of the various features of the library that I wanted to highlight. I finally decided that what I really needed was an approach that met the following criteria:

    Simple: I didn’t want the user interface or overall architecture of a sample application to serve as a distraction from the demonstration of the syntax of the library that the presentation is about. I want to be able to present small bits of code that are focused on accomplishing a single task. Several of these examples will look similar, and that’s OK. I want each sample to “stand on its own” and not rely much on external classes or methods (other than the library that is being presented, of course).

    “Debuggable” (not really a word, I know): I want to be able to easily run the sample with the debugger attached in Visual Studio should I want to step through any bits of code and show what certain values might be at run time. As far as I know this rules out something like LinqPad, though using LinqPad to present code samples like this is actually a very interesting idea that I might explore another time.

    Flexible and Selectable: I’m going to have lots of code samples to show, and I want to be able to just package them all up into a single project or module and have an easy way to just run the sample that I want on-demand.

    Since I’m presenting on a .NET framework library, one of the simplest ways in which I could execute some code samples would be to just create a Console application and use Console.WriteLine to output the pertinent info at run time. This gives me a “no frills” harness from which to run my code samples, and I just hit ‘F5’ to run it with the debugger. This satisfies numbers 1 and 2 from my list of criteria above, but item 3 is a little harder. By default, just running a console application is going to execute the ‘main’ method, and then terminate the program after all code is executed. If I want to have several different code samples and run them one at a time, it would be cumbersome to keep swapping the code I want in and out of the ‘main’ method of the console application. What I really want is an easy way to keep the console app running throughout the whole presentation and just have it run the samples I want when I want. I could set up a simple Windows Forms or WPF desktop application with buttons for the different samples, but then I’m getting away from my first criterion of keeping things as simple as possible.
    Infinite Loops To The Rescue

    I found a way to have a simple console application satisfy all three of my requirements above, and it involves using an infinite loop and some Console.ReadLine calls that will give the user an opportunity to break out and exit the program. (All programs that need to run until they are closed explicitly (or crash!) likely use similar constructs behind the scenes. Create a new Windows Forms project, look in the ‘Program.cs’ that gets generated, and then check out the docs for the Application.Run method that it calls.) Here’s how the main method might look:

    static void Main(string[] args)
    {
        do
        {
            Console.Write("Enter command or 'exit' to quit: > ");
            var command = Console.ReadLine();
            if ((command ?? string.Empty).Equals("exit", StringComparison.OrdinalIgnoreCase))
            {
                Console.WriteLine("Quitting.");
                break;
            }

        } while (true);
    }

    The idea here is the app prompts me for the command I want to run, or I can type in ‘exit’ to break out of the loop and let the application close. The only trick now is to create a set of commands that map to each of the code samples that I’m going to want to run. Each sample is already encapsulated in a single public method in a separate class, so I could just write a big switch statement or create a hashtable/dictionary that maps command text to an Action that will invoke the proper method, but why re-invent the wheel?

    CLAP For Your Own Presentation

    I’ve blogged about the CLAP library before, and it turns out that it’s a great fit for satisfying criterion #3 from my list above. CLAP lets you decorate methods in a class with an attribute and then easily invoke those methods from within a console application. CLAP was designed to take the arguments passed into the console app from the command line and parse them to determine which method to run and what arguments to pass to that method, but there’s no reason you can’t re-purpose it to accept command input from within the infinite loop defined above and invoke the corresponding method. Here’s how you might define a couple of different methods to contain two different code samples that you want to run during your presentation:

    public static class CodeSamples
    {
        [Verb(Aliases="one")]
        public static void SampleOne()
        {
            Console.WriteLine("This is sample 1");
        }

        [Verb(Aliases="two")]
        public static void SampleTwo()
        {
            Console.WriteLine("This is sample 2");
        }
    }

    A couple of things to note about the sample above:

    I’m using static methods. You don’t actually need to use static methods with CLAP, but the syntax ends up being a bit simpler and static methods happen to lend themselves well to the “one self-contained method per code sample” approach that I want to use.

    The methods are decorated with a ‘Verb’ attribute. This tells CLAP that they are eligible targets for commands. The “Aliases” argument lets me give them short and easy-to-remember aliases that can be used to invoke them. By default, CLAP just uses the full method name as the command name, but with aliases you can simplify the usage a bit.

    I’m not using any parameters. CLAP’s main feature is its ability to parse out arguments from a command line invocation of a console application and automatically pass them in as parameters to the target methods. My code samples don’t need parameters, and honestly having them would complicate giving the presentation, so this is a good thing.
    You could use this same approach to invoke methods with parameters, but you’d have a couple of things to figure out. When you invoke a .NET application from the command line, Windows will parse the arguments and pass them in as a string array (called ‘args’ in the boilerplate console project Program.cs). The parsing that gets done here is smart enough to deal with things like treating strings in double quotes as one argument, and you’d have to re-create that within your infinite loop if you wanted to use parameters. I plan on either submitting a pull request to CLAP to add this capability or maybe just making a small utility class/extension method to do it and posting that here in the future. So I now have a simple class with static methods to contain my code samples, and an infinite loop in my ‘main’ method that can accept text commands. Wiring this all up together is pretty easy:

    static void Main(string[] args)
    {
        do
        {
            try
            {
                Console.Write("Enter command or 'exit' to quit: > ");
                var command = Console.ReadLine();
                if ((command ?? string.Empty).Equals("exit", StringComparison.OrdinalIgnoreCase))
                {
                    Console.WriteLine("Quitting.");
                    break;
                }

                Parser.Run<CodeSamples>(new[] { command });
                Console.WriteLine("---------------------------------------------------------");
            }
            catch (Exception ex)
            {
                Console.Error.WriteLine("Error: " + ex.Message);
            }

        } while (true);
    }

    Note that I’m now passing the ‘CodeSamples’ class into the CLAP ‘Parser.Run’ method as a type argument. This tells CLAP to inspect that class for methods that might be able to handle the commands passed in. I’m also throwing in a little “----“ style line separator and some basic error handling (because I happen to know that some of the samples are going to throw exceptions for demonstration purposes) and I’m good to go. Now during my presentation I can just have the console application running the whole time with the debugger attached and just type in the alias of the code sample method that I want to run when I want to run it.

    Read the article

  • xcopy file, suppress "Does xxx specify a file name..." message

    - by MarkPearl
    Today we had an interesting problem with file copying. We wanted to use xcopy to copy a file from one location to another and rename the copied file, but do this impersonating another user. Getting the impersonation to work was fairly simple; however, we then had the challenge of getting xcopy to work. The problem was that xcopy kept prompting us with a prompt similar to the following:

    Does file.xxx specify a file name or directory name on the target (F = file, D = directory)?

    At which point we needed to press 'F'. This seems to be a fairly common challenge with xcopy, as illustrated by the following stack overflow link... One of the solutions was to do the following:

    echo f | xcopy /f /y srcfile destfile

    This is fine if you are running from the command prompt, but if you are triggering this from C#, how could we daisy-chain a bunch of commands? The solution was fairly simple; we eventually ended up with the following method:

    public void Copy(string initialFile, string targetFile)
    {
        string xcopyExe = @"C:\windows\system32\xcopy.exe";
        string cmdExe = @"C:\windows\system32\cmd.exe";
        ProcessStartInfo p = new ProcessStartInfo();
        p.FileName = cmdExe;
        p.Arguments = string.Format(@"/c echo f | {2} {0} {1} /Y", initialFile, targetFile, xcopyExe);
        Process.Start(p);
    }

    Here we wrapped the commands we wanted to chain as arguments, and instead of calling xcopy directly, we called cmd.exe, passing xcopy as an argument.

    Read the article

  • How do I make my USB Bluetooth dongle work in Ubuntu 11.04 ? (Can't init device hci0: Connection timed out (110)) [closed]

    - by MaikoID
    I've a USB bluetooth dongle:

    root@maiko-cce-lin:~# lsusb | grep Bluetooth
    Bus 001 Device 007: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)

    that isn't working properly: it hardly ever works, and when it does, it stops working on the next reboot. What I've tried: It isn't software blocked:

    root@maiko-cce-lin:~# rfkill list
    0: phy0: Wireless LAN
        Soft blocked: no
        Hard blocked: no
    1: hci0: Bluetooth
        Soft blocked: no
        Hard blocked: no

    My device is recognized by hciconfig:

    root@maiko-cce-lin:~# hciconfig -a
    hci0: Type: BR/EDR Bus: USB
        BD Address: 00:1F:81:00:01:1C ACL MTU: 1021:4 SCO MTU: 180:1
        DOWN
        RX bytes:330 acl:0 sco:0 events:8 errors:0
        TX bytes:24 acl:0 sco:0 commands:30 errors:22
        Features: 0xff 0x3e 0x09 0x76 0x80 0x01 0x00 0x80
        Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3
        Link policy:
        Link mode: SLAVE ACCEPT

    but I can't bring the hci interface up:

    root@maiko-cce-lin:~# hciconfig hci up
    Can't init device hci0: Connection timed out (110)

    I don't understand why. The hcitool command doesn't show any device:

    root@maiko-cce-lin:~# hcitool dev
    Devices:

    I've tried restarting my bluetooth service too with this command, and ran all the previous commands again, but without success:

    root@maiko-cce-lin:~# service bluetooth restart
     * Stopping bluetooth   [ OK ]
     * Starting bluetooth   [ OK ]

    The dongle works if you disconnect it from USB, wait a few seconds and connect it again, so there must be a better solution (one not involving physically removing the dongle!)
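    Since a physical replug fixes it, reloading the USB Bluetooth driver from software may have the same effect. A sketch worth trying (this assumes the dongle is driven by the btusb kernel module, which is typical for CSR dongles):

    sudo rmmod btusb
    sleep 2
    sudo modprobe btusb
    sudo hciconfig hci0 up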

    Read the article

  • Rails/Node.js interaction

    - by lpvn
    My co-worker and I are developing a web application with rails and node.js, and we can't reach a consensus regarding a particular architectural decision. Our setup is basically a rails server working with node.js and redis: when a client makes an http request to our rails API, in some cases our rails application posts the response to a redis database and then node.js transmits the response via websocket. Our disagreement occurs on the following point: my co-worker thinks that using node.js to send data to clients is somewhat business logic and should be inside the model, so in the first code he wrote he used broadcast commands in callbacks and other places in the model; he's convinced that the models are the best place for the interaction between rails and node. I, on the other hand, think that using node.js belongs to the runtime realm; my take is that the broadcast commands and other node.js interactions should be in the controller, and should only be used in a model if passed through a well-defined interface, just like the situation when a model needs to access the current user of a session. At this point we're tired of arguing over this same thing, and our discussion consists of us repeating our same opinions to each other over and over. Could anyone, preferably with experience with the same setup, give us an unambiguous response saying which solution is more adequate and why?

    Read the article

  • How to list all my packages from command line which can show package name, license, source url, etc?

    - by YumYumYum
    How do I get the list of all installed packages with their license and source URL? The following shows only the name of each package:

    $ dpkg --get-selections
    acpi-support            install
    acpid                   install
    adduser                 install
    adium-theme-ubuntu      install
    aisleriot               install
    alacarte                install

    For example, in Fedora/CentOS (RED HAT LINUX BRANCH) you can see that:

    $ yum info busybox
    Loaded plugins: auto-update-debuginfo, langpacks, presto, refresh-packagekit
    Available Packages
    Name        : busybox
    Arch        : i686
    Epoch       : 1
    Version     : 1.18.2
    Release     : 5.fc15
    Size        : 615 k
    Repo        : updates
    Summary     : Statically linked binary providing simplified versions of system commands
    URL         : http://www.busybox.net
    License     : GPLv2
    Description : Busybox is a single binary which includes versions of a large number
                : of system commands, including a shell. This package can be very
                : useful for recovering from certain types of system failures,
                : particularly those involving broken shared libraries.

    Follow up: the contents of /var/lib/apt/lists$ ls: extras.ubuntu.com_ubuntu_dists_natty_main_binary-amd64_Packages extras.ubuntu.com_ubuntu_dists_natty_main_source_Sources extras.ubuntu.com_ubuntu_dists_natty_Release extras.ubuntu.com_ubuntu_dists_natty_Release.gpg lock partial security.ubuntu.com_ubuntu_dists_natty-security_main_binary-amd64_Packages security.ubuntu.com_ubuntu_dists_natty-security_main_source_Sources security.ubuntu.com_ubuntu_dists_natty-security_multiverse_binary-amd64_Packages security.ubuntu.com_ubuntu_dists_natty-security_multiverse_source_Sources security.ubuntu.com_ubuntu_dists_natty-security_Release security.ubuntu.com_ubuntu_dists_natty-security_Release.gpg security.ubuntu.com_ubuntu_dists_natty-security_restricted_binary-amd64_Packages security.ubuntu.com_ubuntu_dists_natty-security_restricted_source_Sources security.ubuntu.com_ubuntu_dists_natty-security_universe_binary-amd64_Packages security.ubuntu.com_ubuntu_dists_natty-security_universe_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty_main_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty_main_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty_multiverse_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty_multiverse_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty_Release us.archive.ubuntu.com_ubuntu_dists_natty_Release.gpg us.archive.ubuntu.com_ubuntu_dists_natty_restricted_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty_restricted_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty_universe_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty_universe_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty-updates_main_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty-updates_main_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty-updates_multiverse_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty-updates_multiverse_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty-updates_Release us.archive.ubuntu.com_ubuntu_dists_natty-updates_Release.gpg us.archive.ubuntu.com_ubuntu_dists_natty-updates_restricted_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty-updates_restricted_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty-updates_universe_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty-updates_universe_source_Sources
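    On Debian/Ubuntu the closest equivalent is apt-cache show, which carries the homepage but not the license; license texts ship in each package's copyright file. A sketch (busybox is just the example package):

    # homepage and description for one package
    apt-cache show busybox | grep -E '^(Package|Homepage|Description)'

    # the license text lives in the package's copyright file
    less /usr/share/doc/busybox/copyright

    # loop over every installed package, printing name and homepage
    for p in $(dpkg --get-selections | awk '{print $1}'); do
        echo "$p: $(apt-cache show "$p" 2>/dev/null | grep -m1 '^Homepage:')"
    done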

    Read the article

  • Closer Aero-snap in ubuntu

    - by Andrew Redd
    Before I get screamed at for duplicating a question: I've read windows 7-like snap window maximize and vertical feature and http://www.omgubuntu.co.uk/2009/11/get-aero-snap-in-ubuntu/ There are two problems with that solution that I am trying to get around:

    1. It's not sensitive to dragging a window.
    2. It's not intelligent about TwinView monitors.

    The first problem is the more pressing one. I have the compiz settings with wmctrl, but this is not sensitive to dragging windows: if I have a window with the focus and place my mouse in the panel, I get the window maximized even though I'm not dragging the window. A good solution would be sensitive to the state of the mouse: clicked, right-clicked, middle-clicked. An ideal solution would be sensitive to whether a window is being dragged or not. Second, a minor annoyance to me at least: with a TwinView setup, the commands as they are listed are equivalent to maximizing the window. Is there any way to add these sensitivities to the commands?
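    For the TwinView half of this, wmctrl can place the active window on one half of the combined desktop instead of maximizing it. A sketch, assuming a combined desktop of 3360x1050 across two 1680x1050 screens (substitute your own geometry from xrandr):

    # left half: un-maximize horizontally, then resize to one screen
    wmctrl -r :ACTIVE: -b remove,maximized_horz
    wmctrl -r :ACTIVE: -e 0,0,0,1680,1050

    # right half
    wmctrl -r :ACTIVE: -b remove,maximized_horz
    wmctrl -r :ACTIVE: -e 0,1680,0,1680,1050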

    Read the article

  • Boost your infrastructure with Coherence into the Cloud

    - by Nino Guarnacci
    Authors: Nino Guarnacci & Francesco Scarano. The original article can be found at this URL: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost.

    Thinking about the enterprise cloud, many possible configurations and new opportunities for enterprise environments come to mind. The various customer needs that guide this new trend are often very different, but almost always united by two main objectives:

    - Elasticity of infrastructure, both hardware and software
    - Investments related to the progressive needs of the current infrastructure

    both with characteristics of innovation and economy.

    A concrete use case I worked on recently demanded the fulfillment of two basic requirements of economy and innovation. The client needed to manage a variety of data caches able to process complex queries and parallel computational operations, while keeping the caches in a consistent state across the different server instances on which the application was installed. In addition, the customer was looking for a solution that would let him handle the load peaks likely during certain times of the year. For this reason the customer required a replication site to which part of the requests could be conveyed during peak periods; the desire, however, was to avoid immobilizing investments in owned hardware/software architectures, so we were asked to seek a solution based on Cloud technologies and architectures already offered by the market.

    Coherence can already address the requirement of large caches spread between different nodes in the cluster, additionally providing technology for search and parallel computing with simultaneous use of all hardware infrastructure resources. Moreover, thanks to its "Push Replication" functionality, which can replicate and update the information contained in the cache even to a site hosted in the cloud, it satisfies the need for a resilient infrastructure that can also rely on nodes temporarily hosted in Cloud architectures.

    There are different types of configurations that can be realized using the "Push Replication" functionality of Coherence:

    - Active - Passive (Hub and Spoke)
    - Active - Active
    - Multi Master
    - Centralized Replication

    Since the architecture of this particular project consists of two sites (Site 1 and Site Cloud), of which only Site 1 is enabled to write into the cache, it was decided to adopt an Active-Passive (Hub and Spoke) configuration. If the requirement should change over time, it will be particularly easy to switch to an Active-Active configuration.

    Although very simple, the small sample in this post, inspired by the actual project, is effective for better understanding the features and capabilities of Coherence and its configurations. Let's create two distinct Coherence clusters, located miles apart in two different domain contexts, one of them "hosted" at home (on-premise) and the other hosted by any cloud provider on the network (or just the same laptop, to test it :)). These two clusters, which we call Site 1 and Site Cloud, will contain the necessary information; a simple client can insert data only into Site 1. On both sites a listener is subscribed, listening for changes to specific objects within the various caches.

    To implement these features, you need 4 simple classes:

    CachedResponse.java - the POJO class that will be inserted into the cache, holding information about hypothetical link navigation.
    ResponseSimulatorHelper.java - a link simulator, whose task is to randomly create CachedResponse objects that will be added into the caches.
    CacheCommands.java - the model of our example: it receives instructions from the controller and performs basic operations against the cache, such as inserting, deleting, updating and listening to objects within the cache.
    Shell.java - our controller, which issues the commands to be executed against the caches of the two sites.

    So, in summary, we run the java class "Shell", asking it to put 100 objects of type "CachedResponse" into the cache through the java class "CacheCommands"; the simulator "ResponseSimulatorHelper" randomly creates new instances of "CachedResponse". Finally, the Shell class listens for events occurring within the cache on Site Cloud, while insertions and deletions are performed on Site 1.

    Now let's create the configurations for the two respective sites/clusters: Site 1 and Site Cloud. For Site 1 we define a cache of type "distributed" with read/write features, using the cache store class for "push replication", a functionality offered by the "Incubator" project of Oracle Coherence. For Site Cloud we also define a "distributed" cache type, with the TCP proxy feature enabled so it can receive updates from Site 1.

    Coherence cache config XML file for the "storage node" on Site 1: site1-prod-cache-config.xml
    Coherence cache config XML file for the "storage node" on Site Cloud: site2-prod-cache-config.xml

    For the two "Shell" clients, which will connect to the two clusters respectively, we have provided two simple access configurations.

    Coherence cache config XML file for the Shell on Site 1: site1-shell-prod-cache-config.xml
    Coherence cache config XML file for the Shell on Site Cloud: site2-shell-prod-cache-config.xml

    Now we just have to put everything together and run our tests.

    To start at least one "storage" node (which holds the data) for Site Cloud, we can run the standard class com.tangosol.net.DefaultCacheServer, provided OOTB by Oracle Coherence, with the following parameters and values:

    -Xmx128m -Xms64m
    -Dcom.sun.management.jmxremote
    -Dtangosol.coherence.management=all
    -Dtangosol.coherence.management.remote=true
    -Dtangosol.coherence.distributed.localstorage=true
    -Dtangosol.coherence.cacheconfig=config/site2-prod-cache-config.xml
    -Dtangosol.coherence.clusterport=9002
    -Dtangosol.coherence.site=SiteCloud

    To start at least one "storage" node (which holds the data) for Site 1, we run the same standard Coherence class com.tangosol.net.DefaultCacheServer with the following parameters and values:

    -Xmx128m -Xms64m
    -Dcom.sun.management.jmxremote
    -Dtangosol.coherence.management=all
    -Dtangosol.coherence.management.remote=true
    -Dtangosol.coherence.distributed.localstorage=true
    -Dtangosol.coherence.cacheconfig=config/site1-prod-cache-config.xml
    -Dtangosol.coherence.clusterport=9001
    -Dtangosol.coherence.site=Site1

    Then we start the first "Shell" client for Site Cloud, launching the java class it.javac.Shell with these parameters and values:

    -Xmx64m -Xms64m
    -Dcom.sun.management.jmxremote
    -Dtangosol.coherence.management=all
    -Dtangosol.coherence.management.remote=true
    -Dtangosol.coherence.distributed.localstorage=false
    -Dtangosol.coherence.cacheconfig=config/site2-shell-prod-cache-config.xml
    -Dtangosol.coherence.clusterport=9002
    -Dtangosol.coherence.site=SiteCloud

    Finally, we start the second "Shell" client for Site 1, launching a new instance of the class it.javac.Shell with the following parameters and values:

    -Xmx64m -Xms64m
    -Dcom.sun.management.jmxremote
    -Dtangosol.coherence.management=all
    -Dtangosol.coherence.management.remote=true
    -Dtangosol.coherence.distributed.localstorage=false
    -Dtangosol.coherence.cacheconfig=config/site1-shell-prod-cache-config.xml
    -Dtangosol.coherence.clusterport=9001
    -Dtangosol.coherence.site=Site1

    And now let's execute some tests to validate and better understand our configuration.

    TEST 1
    The purpose of this test is to load objects into the Site 1 cache and see how many objects are cached on Site Cloud. In the "Shell" launched with the parameters to access Site 1, write and run the command: load test/100. In the "Shell" launched with the parameters to access Site Cloud, write and run the command: size passive-cache.

    Expected result: if all is OK, the first "Shell" has loaded 100 objects into a cache named "test"; consequently the "push replication" functionality has updated Site Cloud by sending the 100 objects to the second cluster, where they have been stored in the corresponding cache, which we named "passive-cache".

    TEST 2
    The purpose of this test is to listen for delete and add events happening on Site 1 that are replicated into the cache on Site Cloud. In the "Shell" launched with the parameters to access Site Cloud, write and run the command: listen passive-cache/name like '%' (or a "cohql" query with your preferred parameters). In the "Shell" launched with the parameters to access Site 1, write and run the following commands: load test/10, load test2/20, delete test/50.

    Expected result: if all is OK, the "Shell" on Site Cloud lets us listen to all the add and delete events within the cache "passive-cache" whose objects satisfy the query condition "name like '%'" (i.e., every object in the cache; you could change the tests and create different queries). Through the Shell on Site 1 we launched the commands to add and delete objects on different caches (test and test2). With the "Shell" running on Site Cloud we got the evidence (displayed, printed, or in a log file) that its cache was filled with the events and related objects generated by the commands executed from the "Shell" on Site 1, thanks to the "push replication" feature.

    Other tests can be performed, such as subscribing to the events on Site 1 too, using different "cohql" queries, or changing the cache configuration, to effectively demonstrate both the power and the versatility produced by these different configurations, even in the cloud, as in our case.

    More information on how to configure Coherence "Push Replication" can be found in the Oracle Coherence Incubator project documentation at the following link: http://coherence.oracle.com/display/INC10/Home
    More information on the Oracle Coherence "In Memory Data Grid" can be found at the following link: http://www.oracle.com/technetwork/middleware/coherence/overview/index.html

    To download and run the full sources and configurations of the example explained in the above post, click here to download them; afterwards, download the latest available version of the Push Replication Pattern library implementation from the Oracle Coherence Incubator site, and also download the related and required version of Oracle Coherence. For simplicity, the .jars required to run the example (found in the Push Replication Pattern download and the Coherence distribution download) are: activemq-core-5.3.1.jar activemq-protobuf-1.0.jar aopalliance-1.0.jar coherence-commandpattern-2.8.4.32329.jar coherence-common-2.2.0.32329.jar coherence-eventdistributionpattern-1.2.0.32329.jar coherence-functorpattern-1.5.4.32329.jar coherence-messagingpattern-2.8.4.32329.jar coherence-processingpattern-1.4.4.32329.jar coherence-pushreplicationpattern-4.0.4.32329.jar coherence-rest.jar coherence.jar commons-logging-1.1.jar commons-logging-api-1.1.jar commons-net-2.0.jar geronimo-j2ee-management_1.0_spec-1.0.jar geronimo-jms_1.1_spec-1.1.1.jar http.jar jackson-all-1.8.1.jar je.jar jersey-core-1.8.jar jersey-json-1.8.jar jersey-server-1.8.jar jl1.0.jar kahadb-5.3.1.jar miglayout-3.6.3.jar org.osgi.core-4.1.0.jar spring-beans-2.5.6.jar spring-context-2.5.6.jar spring-core-2.5.6.jar spring-osgi-core-1.2.1.jar spring-osgi-io-1.2.1.jar

    The original article can be found at this URL: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost
    Authors: Nino Guarnacci & Francesco Scarano

    Read the article

  • How to find location of installed library

    - by Raven
    Background: I'm trying to build my program, but first I need to set up the libraries in NetBeans. My project uses GLU, and therefore I installed libglu-dev. I didn't note where the libraries were installed, and now I can't find them. I switched to Linux just a few days ago and so far I'm very content with it; however, I couldn't google this one out and am becoming frustrated. Is there a way to find out where the files of a package were installed without running the installation again? I mean, if I have a library xxx and installed it some time ago, is there some command that will print this info for xxx? I've already tried the locate, find and whereis commands, but either I'm missing something or I just can't do it correctly. For libglu, locate returns:

    /usr/share/bug/libglu1-mesa
    /usr/share/bug/libglu1-mesa/control
    /usr/share/bug/libglu1-mesa/script
    /usr/share/doc/libglu1-mesa
    /usr/share/doc/libglu1-mesa/changelog.Debian.gz
    /usr/share/doc/libglu1-mesa/copyright
    /usr/share/lintian/overrides/libglu1-mesa
    /var/lib/dpkg/info/libglu1-mesa:i386.list
    /var/lib/dpkg/info/libglu1-mesa:i386.md5sums
    /var/lib/dpkg/info/libglu1-mesa:i386.postinst
    /var/lib/dpkg/info/libglu1-mesa:i386.postrm
    /var/lib/dpkg/info/libglu1-mesa:i386.shlibs

    The other two commands fail to find anything. Now, locate did its job, but I'm sure none of those paths is where the library actually resides (at least, everything I was linking so far was in /usr/lib or /usr/local/lib). libglu was introduced just as an example; I'm looking for a general solution to this problem.
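    dpkg can answer this directly for any installed package. A sketch (libglu1-mesa-dev is the usual package name for the GLU development files on Ubuntu):

    # list every file a package installed
    dpkg -L libglu1-mesa-dev

    # or go the other way: find which package owns a given file
    dpkg -S libGLU.so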

    Read the article

  • How to join video files from terminal?

    - by Leon Vitanos
    I have tried avidemux2_cli, mencoder, ffmpeg, cat... but it doesn't always work (most of the time the error is that the audio codec is not the same). Maybe I put the wrong options in the commands. So, the commands:

    cat Sample.avi rrr.avi > complete.avi
    ffmpeg -i Sample.avi -i output.avi -vcodec copy -acodec copy complete.avi
    mencoder -ovc lavc -oac copy Sample.avi rrr.avi -o complete.avi
    avidemux2_cli --audio-codec copy --video-codec copy --output-format avi --load Sample.avi -append output.avi --save video.avi

    The cat problem is that it shows no error but doesn't always work: complete.avi ends up exactly the same as Sample.avi. ffmpeg does nothing; complete.avi is always the same as Sample.avi. The mencoder error is "All files must have identical audio codec and format for -oac copy", so complete.avi is the same as Sample.avi. With avidemux2_cli there is no error, but complete.avi is again the same as Sample.avi. So to sum up, every complete.avi is the same as Sample.avi, and the problem is that the inputs don't have the same audio codec (I guess). Any ideas?
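    Since the blocker is mismatched audio codecs, one approach (a sketch; the codec and bitrate choices here are arbitrary) is to re-encode both clips to identical codecs first, after which a stream-copy join works:

    ffmpeg -i Sample.avi -c:v mpeg4 -c:a libmp3lame -b:a 128k part1.avi
    ffmpeg -i rrr.avi -c:v mpeg4 -c:a libmp3lame -b:a 128k part2.avi
    mencoder -ovc copy -oac copy part1.avi part2.avi -o complete.avi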

    Read the article

  • Proper way to let user enter password for a bash script using only the GUI (with the terminal hidden)

    - by MountainX
    I have made a bash script that uses kdialog exclusively for interacting with the user. It is launched from a ".desktop" file so the user never sees the terminal. It looks 100% like a GUI app (even though it is just a bash script). It runs in KDE only (Kubuntu 12.04). My only problem is handling password input securely and conveniently. I can't find a satisfactory solution. The script was designed to be run as a normal user and to prompt for the password when a sudo command is first needed. In this way, most commands, those not requiring sudo rights, are run as the normal user. What happens (when the script is run from the terminal) is that the user is prompted for their password once and the default sudo timeout allows the script to finish, including any additional sudo commands, without prompting the user again. This is how I want it to work when run behind the GUI too. The main problem is that using kdesudo to launch my script, which is the standard GUI way, means that the entire script is executed by the root user. So file ownerships get assigned to the root user, I can't rely upon ~/ in paths, and many other things are less than ideal. Running the entire script as the root user is just a very unsatisfactory solution and I think it is a bad practice. I appreciate any ideas for letting a user enter the sudo password just once via GUI while not running the whole script as root. Thanks.
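    One common pattern — sketched below, not a full solution — keeps the script running as the normal user and only primes sudo's credential cache once, feeding it the password collected by kdialog; sudo's -S flag reads the password from stdin and -v merely validates and refreshes the timestamp:

        # Ask for the password graphically and prime sudo's timestamp cache
        kdialog --password "This script needs administrative rights:" | sudo -S -v

        # Subsequent sudo commands within the timeout now run without prompting,
        # while the rest of the script still runs as the normal user
        sudo cp myfile /etc/   # placeholder command for illustration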

    Read the article

  • How to implement lockstep model for RTS game?

    - by user11177
    In my effort to learn programming I'm trying to make a small RTS-style game. I've googled and read a lot of articles and gamedev Q&As on the topic of lockstep synchronization in multiplayer RTS games, but I'm still having trouble wrapping my head around how to implement it in my own game. I currently have a simple server/client system. For example, if player1 selects a unit and gives the command to move it, the client sends the command [move, unit, coordinates] to the server, the server runs the pathfinding function and sends [move, unit, path] to all clients, which then move the unit and run animations. So far so good, but not synchronized for clients with latency or lower/higher FPS. How can I turn this into a true lockstep system? Is the right methodology supposed to be something like the following, using the example from above?

    Turn 1 start:
    - gather command inputs from player1
    - send the turn number and commands to the server
    - end turn, increment turn number

    The server receives the commands, runs pathfinding and sends the paths to all clients.

    Next turn:
    - receive paths from the server, as well as confirmation that all clients completed the previous turn; otherwise pause and wait for that confirmation
    - move units
    - gather new inputs
    - end turn

    Is that the gist of it? Should pathfinding and other game logic perhaps be done client-side instead of on the server, and if so, why? Is there anything else I'm missing? I hope someone can break down the concept so I understand it better.

    Read the article

  • How to install Oracle Database 11g Express Edition on Ubuntu 12.10?

    - by Praneeth Pj
    I installed the Oracle database following the steps mentioned in this blog:

    1. Downloaded 11g Express Edition.
    2. Created a new user oracle under the group dba; the following steps were executed as this user.
    3. Unzipped oracle-xe-11.2.0-1.0.x86_64.rpm.zip and converted the rpm to an Ubuntu package by running: sudo alien --scripts -d oracle-xe-11.2.0-1.0.x86_64.rpm
    4. Created the /sbin/chkconfig file and added the entries as specified there.
    5. Created /etc/sysctl.d/60-oracle.conf and added the entries as specified in the same link as above.
    6. Ran the commands:
       ln -s /usr/bin/awk /bin/awk
       mkdir /var/lock/subsys
       touch /var/lock/subsys/listener
    7. Installed the .deb generated in step 3: sudo dpkg --install oracle-xe_11.2.0-2_amd64.deb
    8. Left the default values as they are: sudo /etc/init.d/oracle-xe configure
    9. Set the following environment variables in the ~/.bashrc file:
       export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
       export ORACLE_SID=XE
       export NLS_LANG=`$ORACLE_HOME/bin/nls_lang.sh`
       export ORACLE_BASE=/u01/app/oracle
       export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
       export PATH=$ORACLE_HOME/bin:$PATH
    10. Ran the commands:
        chown -R oracle:dba /var/tmp/.oracle
        chmod -R 755 /var/tmp/.oracle
        chown -R oracle:dba /tmp/.oracle
        chmod -R 755 /tmp/.oracle
    11. Started the Oracle Database 11g Express Edition instance: sudo service oracle-xe start
    12. Ran sqlplus / as sysdba and got the following:

    SQL*Plus: Release 11.2.0.2.0 Production on Thu Jan 3 09:41:58 2013
    Copyright (c) 1982, 2011, Oracle. All rights reserved.
    Connected to an idle instance.

    Now when executing any SQL statement in SQL*Plus, I end up with the following error:

    SQL> select * from dual;
    select * from dual
    *
    ERROR at line 1:
    ORA-01034: ORACLE not available
    Process ID: 0
    Session ID: 0 Serial number: 0

    I have increased the swap memory as specified here:

    $ free -m
                 total       used       free     shared    buffers     cached
    Mem:          3901       3428        473          0        182       1988
    -/+ buffers/cache:       1258       2643
    Swap:         5066          0       5066
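    "Connected to an idle instance" means the instance has not actually been started, which is exactly what ORA-01034 then reports. A first thing to try — a sketch, assuming the environment variables above are in effect:

        # Start the instance from an administrative SQL*Plus session
        $ sqlplus / as sysdba
        SQL> startup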

    Read the article

  • SKYPE 4.1 Ubuntu 12.04 - Webcam's Image Upside Down and with Zoom

    - by ErivaldoJunior
    I'm trying to make my notebook's webcam work on Skype 4.1, but I'm not having success. The problem is that the webcam's image appears upside down and zoomed in. This problem doesn't happen on Skype version 2.1 (Beta); there my webcam works perfectly. On Skype 4.1, however, the image is upside down and zoomed. I managed to solve only the upside-down problem, with the following commands:

    $ export LIBV4LCONTROL_FLAGS=3
    $ LD_PRELOAD=/usr/lib/i386-linux-gnu/libv4l/v4l1compat.so skype

    After these commands the webcam's image becomes normal, but it is still zoomed; I could not solve the zoom problem. To work around this I'm forced to use both versions (2.1 (Beta) and 4.1). Does anyone know a way to make the image from my webcam display perfectly normally? Here is the information about my webcam:

    Bus 002 Device 003: ID 0c45:62c0 Microdia Sonix USB 2.0 Camera

    It uses the "uvcvideo" driver, and I'm using Ubuntu 12.04 64-bit. Thank you very much!
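    Since the two commands above already fix the orientation, one convenience is to wrap them in a small launcher script so Skype 4.1 always starts with the workaround applied (the zoom problem likely needs a separate libv4l control; this sketch only packages what the question already found to work):

        #!/bin/bash
        # Launcher that applies the v4l flip workaround before starting Skype 4.1
        export LIBV4LCONTROL_FLAGS=3
        LD_PRELOAD=/usr/lib/i386-linux-gnu/libv4l/v4l1compat.so exec skype "$@"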

    Read the article

  • Is it possible to execute keyboard input programmatically in Linux?

    - by Taylor Hawkes
    For example, is there a Linux command or mechanism that would let a program (C++, Python, or other) inject a series of inputs that are interpreted as though they were typed at the keyboard? I have a bad case of Repetitive Stress Injury (RSI) from typing. To ease my pain I developed a voice-controlled interface using PocketSphinx and a custom grammar to run a number of very common commands, e.g. "open chrome", "open vim" — basically what is shown here, but with slightly different tools: http://bloc.eurion.net/archives/2008/writing-a-command-and-control-application-with-voice-recognition/ I have run into a limitation: I can only execute a command-line command for a given voice command. Rather than a "voice command" to "command-line command" mapping, I would like a "voice command" to "keyboard input" mapping. So when my active window is a browser and I send Ctrl+N, a new tab opens; if I'm in Vim, a new Vim tab opens. Any suggestions, ideas, tools or approaches to this problem would be much appreciated. I understand the answer may not be simple, but I would like to develop it nonetheless.
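    One tool worth knowing here is xdotool, which synthesizes X11 key events so that the active window receives them as real keystrokes. A sketch of mapping a recognized voice command to a keystroke:

        # Send Ctrl+N to whatever window currently has focus
        xdotool key ctrl+n

        # Or type a literal string into the focused window
        xdotool type "hello world"

    A voice handler could then map, say, "new tab" to xdotool key ctrl+t instead of launching a program.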

    Read the article

  • Are very short or abbreviated method/function names that don't use full words bad practice or a matter of style?

    - by Alb
    Is there any case nowadays for brevity over clarity in method names? Tonight I came across the Python built-in repr(), which seems like a bad name to me. It's not an English word. It is apparently an abbreviation of "representation", and even if you can deduce that, it still doesn't tell you what it does. A good method name is subjective to a certain degree, but I had assumed that modern best practice agreed that names should be at least full words and descriptive enough to reveal enough about the method that you could easily find it when looking for it. Method names made from words help your code read like English. repr() seems to have no advantage as a name other than being short, and IDE auto-complete makes that a non-issue. An additional reason given in an answer is that Python names are brief so that you can do many things on one line; surely the better way is to extract the many things into their own functions, and repeat until lines are not too long. Are these just a hangover from the Unix way of doing things? Commands with names like ls, rm, ps and du (if you can call those names) were hard to find and hard to remember. I know that the everyday usage of commands such as these is different from methods in code, so whether those are bad names is a separate matter.

    Read the article

< Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >