Search Results

Search found 8567 results on 343 pages for 'commands unix'.


  • gcc options for fastest code

    - by rwallace
    I'm distributing a C++ program with a makefile for the Unix version, and I'm wondering what compiler options I should use to get the fastest possible code (it falls into the category of programs that can use all the computing power they can get and still come back for more), given that I don't know in advance what hardware, operating system or gcc version the user will have, and I want above all else to make sure it at least works correctly on every major Unix-like operating system. Thus far I have g++ -O3 -Wno-write-strings; are there any other options I should add? On Windows, the Microsoft compiler has options for things like a fast calling convention and link-time code generation that are worth using; are there any equivalents on gcc? (I'm assuming it will default to 64-bit on a 64-bit platform; please correct me if that's not the case.)
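
    A hedged makefile sketch of the flags that usually come up in this discussion (none of this is from the original thread; both extra flags are version-dependent, so a portable makefile should probe for support rather than assume it):

        # illustrative fragment; flags beyond the first line are version-dependent
        CXXFLAGS = -O3 -Wno-write-strings
        # tune for the machine doing the build (gcc >= 4.2 only)
        CXXFLAGS += -march=native
        # link-time code generation, roughly MSVC's /LTCG (gcc >= 4.5 only)
        CXXFLAGS += -flto
        LDFLAGS  += -flto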

    Read the article

  • How to prepare a codebase for compiling on both Windows and Unix-based systems

    - by Max
    Hi! I am wondering about different solutions for easily compiling my cross-platform application on both Windows and Unix. Right now I am using a makefile on Ubuntu, but before my codebase grows larger I'd like to take the steps necessary to compile it on Windows, and then keep doing so regularly to check that it still works. I'd prefer not to contaminate my SVN codebase repository with multiple build solutions, such as VC++ solutions and so on; I'd like a more automatic way. I tried using MinGW with make for Windows, but it seems my .SECONDEXPANSION awesomeness doesn't work in the Windows version (or something like that). It wouldn't compile, and it also complained about _winNT or something like that not being defined. How should I prepare my codebase for easy cross-platform compiling? Things like build tools, perhaps auto-generating a VS file from the makefile, or something similar. Some preprocessor magic in a stdinc file perhaps? Thanks!
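
    One commonly suggested route (a hedged sketch, not the asker's setup) is a generator-based tool such as CMake: a single build description lives in SVN, and it emits makefiles on Unix and VC++ solutions on Windows. Project and file names below are hypothetical, and the _WIN32_WINNT value is only illustrative of the kind of missing-definition error mentioned:

        cmake_minimum_required(VERSION 2.8)
        project(MyApp CXX)
        if(WIN32)
          # value illustrative: MinGW builds sometimes need the target
          # Windows version defined explicitly
          add_definitions(-D_WIN32_WINNT=0x0501)
        endif()
        add_executable(myapp src/main.cpp)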

    Read the article

  • After tar extract, Changing Permissions

    - by Moe
    Just a question regarding Unix and PHP today. My PHP code uses the Unix system to untar a tarred file: exec("tar -xzf foo.tar.gz"); Generally everything works fine until I run into this particular foo.tar.gz, which has a directory structure as follows: Applications/ Library/ Systems/ After running the tar command, it seems that the file permissions get changed to 644 (instead of 755). This causes "Permission denied" (errno 13) and therefore breaks most of my code (I'm guessing from lack of privileges). Is there any way to stop this tar command from ruining my permissions? Thanks. Oh, and this seems to happen only with a foo.tar.gz that has this particular directory structure. Anything else and I'm good.
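
    A hedged sketch of two things worth trying (not from the original question): tar's -p flag asks it to apply the modes recorded in the archive instead of filtering them through the umask, and a find/chmod pass can repair modes after the fact. The paths are the ones from the archive above:

        <?php
        // -p: preserve the permissions recorded in the archive
        exec("tar -xzpf foo.tar.gz");

        // or repair directory modes afterwards
        exec("find Applications Library Systems -type d -exec chmod 755 {} +");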

    Read the article

  • Perl: Fastest way to get directory (and subdirs) size on unix - using stat() at the moment

    - by ivicas
    I am using the Perl stat() function to get the size of a directory and its subdirectories. I have a list of about 20 parent directories which have a few thousand recursive subdirs, and every subdir has a few hundred records. The main computing part of the script looks like this:

        sub getDirSize {
            my $dirSize = 0;
            my @dirContent = <*>;
            my $sizeOfFilesInDir = 0;
            foreach my $dirContent (@dirContent) {
                if (-f $dirContent) {
                    my $size = (stat($dirContent))[7];
                    $dirSize += $size;
                } elsif (-d $dirContent) {
                    $dirSize += getDirSize($dirContent);
                }
            }
            return $dirSize;
        }

    The script has been executing for more than one hour and I want to make it faster. I tried the shell du command, but the output of du (converted to bytes) is not accurate. It is also quite time-consuming. I am working on HP-UX 11i v1.
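
    A hedged alternative sketch (not from the original question) using the core File::Find module, which makes a single pass over the tree and avoids spawning a glob per directory:

        use File::Find;

        sub get_dir_size {
            my ($dir) = @_;
            my $total = 0;
            # visit every entry under $dir once; -s _ reuses the stat from -f
            find(sub { $total += -s _ if -f $_; }, $dir);
            return $total;
        }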

    Read the article

  • GLOB_BRACE portability?

    - by Pekka
    In this question, I was made aware of glob()'s GLOB_BRACE option, which allows for a limited set of regular expressions when searching for files. This looks just like what I need, but according to the manual, GLOB_BRACE is "not available on some non-GNU operating systems." Among those seems to be Solaris. I am building an application that is supposed to be as portable as possible, so I need to check out possible problems as early as possible. Does somebody know of other platforms apart from Solaris where GLOB_BRACE is not supported? How about Mac OS X, for example? It's built on top of a Unix. Is every Unix automatically a "GNU" platform as defined in the manual?
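
    A hedged fallback sketch in PHP (helper name hypothetical): expanding the brace alternatives yourself and merging per-pattern glob() results removes the GLOB_BRACE dependency entirely:

        <?php
        // hypothetical helper: emulates glob("$dir/*.{php,html}", GLOB_BRACE)
        function glob_brace_portable($dir, array $alternatives) {
            $matches = array();
            foreach ($alternatives as $alt) {
                $found = glob("$dir/*.$alt");
                if ($found) {
                    $matches = array_merge($matches, $found);
                }
            }
            return $matches;
        }

        $files = glob_brace_portable('templates', array('php', 'html'));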

    Read the article

  • Using AND/OR mysql commands with FROM_UNIXTIME

    - by scatteredbomb
    Trying to write a query in PHP/MySQL to get "upcoming items" in a calendar. We store the dates in the DB as Unix times. Here's what my query looks like right now:

        SELECT * FROM `calendar`
        WHERE (`eventDate` > '$yesterday')
           OR (FROM_UNIXTIME(eventDate, '%m') > '$current_month' AND `$yearly` = '1')
        ORDER BY `eventDate`
        LIMIT 4

    This is giving me the error "Unknown column '' in 'where clause'". I'm sure it has to do with my use of parentheses (which I've never used before in a query) and the FROM_UNIXTIME command. Can someone help me out and let me know how I've screwed this up? Thanks!
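
    A hedged observation, not from the original thread: `$yearly` sits inside backticks, so PHP interpolates it as a column name, and if that variable is unset the SQL ends up comparing a column with an empty name, which matches the "Unknown column ''" error exactly. A sketch with the column named literally (assuming it is actually called yearly):

        SELECT * FROM `calendar`
        WHERE (`eventDate` > '$yesterday')
           OR (FROM_UNIXTIME(`eventDate`, '%m') > '$current_month' AND `yearly` = '1')
        ORDER BY `eventDate`
        LIMIT 4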

    Read the article

  • How can I close a port that appears to be orphaned by Xvfb?

    - by Jim Fiorato
    I'm running Xvfb on an FC8 Amazon EC2 image. On occasion Xvfb will crash (I'm unable at the moment to find out the reason for the crash), and after crashing the TCP port appears to be orphaned: I'm unable to get a PID to kill any process that may be using it. I'm starting Xvfb with:

        Xvfb :7 -screen 0 1024x768x24 &

    Examples of what I'm working with are below; the Xvfb port is (was) 6007:

        # netstat -ap
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
        tcp        0      0 *:ssh                       *:*                         LISTEN      1894/sshd
        tcp        0      0 *:6007                      *:*                         LISTEN      -
        tcp        0    352 ip-10-84-69-165.ec2.int:ssh c-71-194-253-238.hsd1:51689 ESTABLISHED 2981/0
        udp        0      0 *:bootpc                    *:*                                     1817/dhclient
        udp        0      0 *:bootpc                    *:*                                     1463/dhclient
        Active UNIX domain sockets (servers and established)
        Proto RefCnt Flags   Type    State      I-Node PID/Program name    Path
        unix  2      [ ]     DGRAM              871    668/udevd           @/org/kernel/udev/udevd
        unix  2      [ ACC ] STREAM  LISTENING  5385   1880/dbus-daemon    /var/run/dbus/system_bus_socket
        unix  6      [ ]     DGRAM              5353   1867/rsyslogd       /dev/log
        unix  2      [ ]     DGRAM              11861  2981/0
        unix  2      [ ]     DGRAM              5461   1974/crond
        unix  2      [ ]     DGRAM              5451   1904/console-kit-da
        unix  3      [ ]     STREAM  CONNECTED  5438   1880/dbus-daemon    /var/run/dbus/system_bus_socket
        unix  3      [ ]     STREAM  CONNECTED  5437   1904/console-kit-da
        unix  3      [ ]     STREAM  CONNECTED  5396   1880/dbus-daemon
        unix  3      [ ]     STREAM  CONNECTED  5395   1880/dbus-daemon
        unix  2      [ ]     DGRAM              5361   1871/rklogd

        # lsof -i
        COMMAND   PID USER  FD  TYPE DEVICE SIZE NODE NAME
        dhclient 1463 root  3u  IPv4   4704      UDP  *:bootpc
        dhclient 1817 root  4u  IPv4   5173      UDP  *:bootpc
        sshd     1894 root  3u  IPv4   5414      TCP  *:ssh (LISTEN)
        sshd     2981 root  3u  IPv4  11825      TCP  ip-10-84-69-165.ec2.internal:ssh->c-71-194-253-238.hsd1.il.comcast.net:51689 (ESTABLISHED)

    Attempting to force the port closed with iptables doesn't seem to work either:

        iptables -A INPUT -p tcp --dport 6007 -j DROP

    I'm at a loss as to how to reclaim/free the port. From what I can tell, this port will remain in this state until the EC2 instance is shut down. So, how can I close this port so I can restart Xvfb?
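
    A few hedged diagnostics worth trying (standard tools, nothing specific to this thread): fuser can sometimes name an owner that netstat cannot, an iptables DROP rule only filters packets and never releases a LISTEN socket, and a stale port can be sidestepped by moving to another display number:

        # report any PID holding TCP port 6007, if one exists
        fuser -v -n tcp 6007
        # sidestep the stale port entirely: display :8 listens on 6008 instead
        Xvfb :8 -screen 0 1024x768x24 &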

    Read the article

  • HP opens its mission-critical servers to the x86 architecture and announces Project Odyssey for convergence with UNIX systems

    HP opens its mission-critical servers to the x86 architecture, and announces Project Odyssey for convergence with UNIX systems. HP is embarking on an ambitious project that aims to bring the UNIX and x86 server architectures together on a single platform for mission-critical systems: its Project Odyssey. The high-end Integrity servers will be able to take Intel Xeon x86 processors, compatible with Windows and Linux, which come to challenge the reign of the aging Itanium that HP and Intel…

    Read the article

  • Log transport and aggregation at scale

    - by markdrayton
    How are you analysing log files from UNIX/Linux machines? We run several hundred servers which all generate their own log files, either directly or through syslog. I'm looking for a decent solution to aggregate these and pick out important events. This problem breaks down into 3 components:

    1) Message transport

    The classic way is to use syslog to log messages to a remote host. This works fine for applications that log via syslog, but it is less useful for apps that write to a local file. Solutions for this might include having the application log into a FIFO connected to a program that sends the message on using syslog, or writing something that will grep the local files and send the output to the central syslog host. However, if we go to the trouble of writing tools to get messages into syslog, would we be better off replacing the whole lot with something like Facebook's Scribe, which offers more flexibility and reliability than syslog? (A transport sketch follows this question.)

    2) Message aggregation

    Log entries seem to fall into one of two types: per-host and per-service. Per-host messages are those which occur on one machine; think disk failures or suspicious logins. Per-service messages occur on most or all of the hosts running a service. For instance, we want to know when Apache finds an SSI error, but we don't want the same error from 100 machines. In all cases we only want to see one of each type of message: we don't want 10 messages saying the same disk has failed, and we don't want a message each time a broken SSI is hit. One approach to solving this is to aggregate multiple messages of the same type into one on each host, send the messages to a central server, and then aggregate messages of the same kind into one overall event. SEC can do this, but it's awkward to use. Even after a couple of days of fiddling I had only rudimentary aggregations working, and had to constantly look up the logic SEC uses to correlate events. It's powerful but tricky stuff: I need something which my colleagues can pick up and use in the shortest possible time. SEC rules don't meet that requirement.

    3) Generating alerts

    How do we tell our admins when something interesting happens? Mail the group inbox? Inject into Nagios?

    So, how are you solving this problem? I don't expect an answer on a plate; I can work out the details myself, but some high-level discussion on what is surely a common problem would be great. At the moment we're using a mishmash of cron jobs, syslog and who knows what else to find events. This isn't extensible, maintainable or flexible, and as such we miss a lot of stuff we shouldn't.

    Updated: we're already using Nagios for monitoring, which is great for detecting down hosts/testing services/etc., but less useful for scraping log files. I know there are log plugins for Nagios, but I'm interested in something more scalable and hierarchical than per-host alerts.
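
    To make the transport option above concrete, a hedged rsyslog sketch (legacy directive syntax; the host and file names are hypothetical): imfile turns a local app log into syslog messages, and the last line forwards everything to the central host.

        # client /etc/rsyslog.conf: watch a local app log, forward centrally
        $ModLoad imfile
        $InputFileName /var/log/myapp.log    # hypothetical app log
        $InputFileTag myapp:
        $InputFileStateFile state-myapp
        $InputRunFileMonitor
        *.* @@loghost.example.com:514        # @@ = TCP; a single @ would be UDP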

    Read the article

  • What should be owner type of a routed command?

    - by viky
    I am using WPF custom commands. While writing a custom command, you need to define the owner type. The description says it is the type that is registering the command. I was looking at some samples; in some the owner type was UIElement, and in others it was the declaring class itself. What's the difference? What should the owner type be?
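
    A hedged sketch of the convention usually shown in samples (class and command names hypothetical): the owner type is mainly an identification and scoping handle for the command, so a declaring class typically registers commands against itself:

        using System.Windows.Input;

        public static class MyAppCommands   // hypothetical command container
        {
            // owner type = the class declaring the command (the common convention)
            public static readonly RoutedUICommand Publish =
                new RoutedUICommand("Publish", "Publish", typeof(MyAppCommands));
        }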

    Read the article

  • How should I design my database API commands? [closed]

    - by WebDev
    I am developing a database API for a project, with commands for getting data from the database. For example, I have one gib table, so the command for that is:

        getgib name alias limit fields

    If the user passes a name: getgib rahul then it will return all gib data whose name is like rahul. If an alias is given, it will return all the gibs owned by the user whose alias (user id) was given. I want to design the commands so that limit restricts the number of records in the query, and fields adds extra fields to the select query. So now the commands are set, but: I also want to fetch gibs by gibid, so how do I add that? Any suggestion to improve my commands is welcome (one sketch follows). And if the user doesn't want to specify a name, and wants only the gibs for a given alias, what separator should I use instead of the name?
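
    One hedged design sketch (syntax illustrative, not from the original question): named parameters remove the need for positional separators entirely, and make a new lookup like gibid cheap to add:

        getgib --name rahul --limit 10
        getgib --alias jsmith --fields id,title,owner
        getgib --id 12345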

    Read the article

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives.

    As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform.

    Archive Format Details

    Archives are hardly cutting-edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris:

    Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n". The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in /usr/include/ar.h. Immediately following the header comes the data for the member. Members must be padded at the end with newline characters so that they have even length.

    The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format.

    The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course; nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld).
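
    To make the layout concrete, here is a hedged C sketch (mine, not from the post) that walks the member headers exactly as described above, using the public definitions from /usr/include/ar.h; error handling is minimal:

        #include <ar.h>      /* ARMAG, SARMAG, struct ar_hdr */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* minimal SVR4 archive walk: print each member name and size */
        int main(int argc, char **argv)
        {
            FILE *fp;
            char magic[SARMAG];
            struct ar_hdr hdr;

            if (argc != 2 || (fp = fopen(argv[1], "rb")) == NULL)
                return 1;
            if (fread(magic, 1, SARMAG, fp) != SARMAG ||
                memcmp(magic, ARMAG, SARMAG) != 0) {
                fprintf(stderr, "%s: not an archive\n", argv[1]);
                return 1;
            }
            while (fread(&hdr, sizeof (hdr), 1, fp) == 1) {
                /* the size field is ASCII decimal, not binary */
                long size = strtol(hdr.ar_size, NULL, 10);
                printf("%.16s %ld\n", hdr.ar_name, size);
                /* members are padded to an even length */
                fseek(fp, size + (size & 1), SEEK_CUR);
            }
            fclose(fp);
            return 0;
        }
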
    Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: a symbol table that maps ELF symbols to the object archive member that provides them, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB.

    When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the objects to the archive member that provides them. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little-endian host such as x86.

    The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise.

    The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build a new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow.

    Factors That Limit Solaris Archive Size

    As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow.

    The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs.
    As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs, which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB.

    Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers.

    The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB limit on the size of any individual member in an archive.

    In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB.

    Archive Format Differences Among Unix Variants

    While considering how to extend Solaris archives to scale to 64 bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable de facto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new.

    The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various, often incompatible, ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another.

    In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross-platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem.

    I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes: SVR4 Unix, BSD Unix, and IBM AIX. Solaris is an SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX).

    AIX

    AIX is an exception to the rule that Unix archive formats are all based on the original Bell Labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems: these formats use a different magic number than the standard one used by Solaris and other Unix variants.
    They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted. They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems, though. Their member headers are doubly linked, containing offsets to both the previous and next members. Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and is going to be slow to use with large archives.

    BSD

    BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table.

    GNU/Linux

    The GNU toolchain uses the SVR4 format, and is compatible with Solaris.

    HP-UX

    HP-UX seems to follow the SVR4 model, and is compatible with Solaris.

    IRIX

    IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else.

    Tru64 Unix (Digital/Compaq/HP)

    Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them.

    Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done.

    The Implementation of 64-bit Solaris Archives

    As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64 bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default.
    This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that.

    Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf.

    As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon, however, to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.

    Read the article

  • processing gamestate with a window of commands across time?

    - by rook2pawn
    I have clients sending client updates at 100ms intervals. I pool the command inputs and create a client command frame. The commands come into the server in these windows and I tag them across time as they come in. When I do a server tick, I intend to process this list of commands, i.e.:

        [ {command:'duck',  timestamp:350, player:'a'},
          {command:'shoot', timestamp:395, player:'b'},
          {command:'move',  timestamp:410, player:'c'},
          {command:'cover', timestamp:420, player:'a'} ]

    How would I efficiently update the gamestate based on this list? The two solutions I see are: 1) simulate time via a direct equation to figure out how far everyone would move or change, as if the real game update were ticking on the world tick; but then unforeseen events that would normally trigger during a real update, such as powerups or collisions, would not get triggered. 2) Prepare to run the world update multiple times and figure out which commands get sent to which world update. This seems better, but a little more costly. Is there a canonical way to do this?
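
    A hedged TypeScript sketch of option 2 (the engine hooks are hypothetical): the world is sub-stepped up to each command's timestamp in order, so collision and powerup checks inside the normal update still fire at the right moments.

        interface Command { command: string; timestamp: number; player: string; }

        // hypothetical engine hooks, not part of the question's code
        declare function updateWorld(dtMs: number): void;
        declare function applyCommand(cmd: Command): void;

        function processTick(cmds: Command[], tickStart: number, tickEnd: number): void {
            cmds.sort((a, b) => a.timestamp - b.timestamp);
            let now = tickStart;
            for (const cmd of cmds) {
                updateWorld(cmd.timestamp - now); // collisions/powerups fire here
                applyCommand(cmd);
                now = cmd.timestamp;
            }
            updateWorld(tickEnd - now);           // remainder of the tick
        }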

    Read the article

  • No rails commands will run

    - by Jeremy
    I am trying to learn Rails and haven't used it in the last few weeks, but today when I try to run any Rails commands such as 'rails -v' or 'script/server', I get the error below. I have reinstalled Ruby but that didn't help, and I don't have a clue what could be wrong. I am on a brand new MacBook Pro.

        Jeremy-Geross-MacBook-Pro:~ Jeremy$ rails -v
        /Library/Ruby/Site/1.8/rubygems/config_file.rb:172:in `merge': can't convert String into Hash (TypeError)
            from /Library/Ruby/Site/1.8/rubygems/config_file.rb:172:in `initialize'
            from /Library/Ruby/Site/1.8/rubygems.rb:384:in `new'
            from /Library/Ruby/Site/1.8/rubygems.rb:384:in `configuration'
            from /Library/Ruby/Site/1.8/rubygems.rb:634:in `path'
            from /Library/Ruby/Site/1.8/rubygems/source_index.rb:68:in `installed_spec_directories'
            from /Library/Ruby/Site/1.8/rubygems/source_index.rb:58:in `from_installed_gems'
            from /Library/Ruby/Site/1.8/rubygems.rb:881:in `source_index'
            from /Library/Ruby/Site/1.8/rubygems/gem_path_searcher.rb:81:in `init_gemspecs'
            from /Library/Ruby/Site/1.8/rubygems/gem_path_searcher.rb:13:in `initialize'
            from /Library/Ruby/Site/1.8/rubygems.rb:839:in `new'
            from /Library/Ruby/Site/1.8/rubygems.rb:839:in `searcher'
            from /Library/Ruby/Site/1.8/rubygems.rb:838:in `synchronize'
            from /Library/Ruby/Site/1.8/rubygems.rb:838:in `searcher'
            from /Library/Ruby/Site/1.8/rubygems.rb:478:in `find_files'
            from /Library/Ruby/Site/1.8/rubygems.rb:1103
            from /usr/bin/rails:9:in `require'
            from /usr/bin/rails:9
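
    A hedged reading of the trace (not from the original thread): config_file.rb failing to merge a String into a Hash is the usual signature of a malformed gem configuration file, so checking that ~/.gemrc parses as YAML key/value pairs is a cheap first step:

        cat ~/.gemrc
        # should print a Hash, not a String:
        ruby -ryaml -e 'p YAML.load_file(File.expand_path("~/.gemrc"))'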

    Read the article

  • Run powershell commands in C#

    - by Ramnik
    RunspaceConfiguration psConfig = RunspaceConfiguration.Create();
        Runspace psRunspace = RunspaceFactory.CreateRunspace(psConfig);
        psRunspace.Open();

        using (Pipeline psPipeline = psRunspace.CreatePipeline())
        {
            // Define the command to be executed in this pipeline
            Command command = new Command("Add-spsolution");

            // Add a parameter to this command
            command.Parameters.Add("literalpath", @"c:\project3.wsp");

            // Add this command to the pipeline
            psPipeline.Commands.Add(command);

            // Invoke the cmdlet
            try
            {
                Collection<PSObject> results = psPipeline.Invoke();
                Label1.Text = "hi" + results.ToString();
                // Process the results
            }
            catch (Exception exception)
            {
                Label1.Text = exception.ToString(); // Process the exception here
            }
        }

    It is throwing the exception: System.Management.Automation.CommandNotFoundException: The term 'add-spsolution' is not recognized as the name of a cmdlet, function, script file, or operable program. Any suggestions why?
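
    A hedged guess at the cause (not from the thread): Add-SPSolution comes from the SharePoint PowerShell snap-in, which a bare runspace never loads, so the cmdlet cannot be resolved. A sketch of registering the snap-in before creating the runspace (snap-in name as documented for SharePoint 2010):

        RunspaceConfiguration psConfig = RunspaceConfiguration.Create();
        PSSnapInException warning;
        // load the SharePoint snap-in so Add-SPSolution can be resolved
        psConfig.AddPSSnapIn("Microsoft.SharePoint.PowerShell", out warning);
        Runspace psRunspace = RunspaceFactory.CreateRunspace(psConfig);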

    Read the article

  • Python (pdb) - Queueing up commands to execute

    - by kpatelPro
    I am implementing a "breakpoint" system for use in my Python development that will allow me to call a function that, in essence, calls pdb.set_trace(). Some of the functionality that I would like to implement requires me to control pdb from code while I am within a set_trace context. Example:

        disableList = []

        def breakpoint(name=None):
            def d():
                disableList.append(name)
                # ****
                # issue 'run' command to pdb so user
                # does not have to type 'c'
                # ****
            if name in disableList:
                return
            print "Use d() to disable breakpoint, 'c' to continue"
            pdb.set_trace()

    In the above example, how do I implement the comments marked by #****? In other parts of this system, I would like to issue an 'up' command, or two sequential 'up' commands, without leaving the pdb session (so the user ends up at a pdb prompt, but up two levels on the call stack). Thanks!
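
    A hedged sketch of one approach (not from the original question): pdb.Pdb subclasses cmd.Cmd, whose documented cmdqueue list is consumed before the user is prompted, so commands can be queued on an instance before entering set_trace():

        import pdb

        debugger = pdb.Pdb()
        # queued lines run before the user is prompted
        debugger.cmdqueue.extend(['up', 'up'])
        debugger.set_trace()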

    Read the article

  • Send ESC commands to a printer in C#

    - by Ewerton
    My application needs to print invoices. I get the invoice from the database and insert the information from it into a big string (telling the line, column, etc.). After this I have the string ready to be sent to a printer. My problem is: I need to put some ESC/P commands/characters in my big string. I tried to do something like this:

        char formFeed = (char)12;
        Convert.ToChar(12);
        MyBigString.Insert(10, formFeed);

    With this, line 10 should do a form feed, but it does not work. NOTE: I send MyBigString all at once to the printer. To make my code work, do I need to send the data line by line to the printer? Thanks for the help. PS: Sorry for my English; I am a Brazilian developer who doesn't speak English (yet).
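
    Two hedged notes on the snippet (mine, not from the thread): String.Insert returns a new string rather than modifying the original, so its result must be assigned, and its index counts characters, not lines. A sketch that targets a specific line:

        const char FF = '\x0C';                    // ESC/P form feed
        string[] lines = myBigString.Split('\n');
        lines[9] = FF + lines[9];                  // line 10, 0-indexed
        myBigString = string.Join("\n", lines);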

    Read the article

  • How to execute several batch commands in sequence

    - by ptikobj
    I want to create a Windows XP batch script that sequentially performs something like the following:

        @echo off
        :: build everything
        cd \workspace\project1
        mvn clean install
        cd ..\project2
        mvn clean install
        :: run some java file
        cd \workspace\project3
        java -jar somefile.jar

    When I create a batch script like this (following these instructions), I still have the problem that the script stops doing anything after the first mvn clean install and then displays the command line. How can I execute all of these commands in sequence in one batch file? I don't want to refer to other files; I want to do it in one file.
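
    A hedged version of the script with the usual fix (not from the thread): mvn on Windows is itself a .bat, and a batch file invoked without call never returns control to the caller, which matches the script dying after the first build:

        @echo off
        :: build everything ('call' returns control after each .bat)
        cd \workspace\project1
        call mvn clean install
        cd ..\project2
        call mvn clean install
        :: run some java file
        cd \workspace\project3
        java -jar somefile.jar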

    Read the article

  • Eclipse RCP: Actions vs Commands - would like an update

    - by nEm
    I know this question has been asked before, but that was in 2009 and I haven't found anything more recent on the web either. I was wondering if the answer there still holds, or whether it can be updated. I am just starting work on an RCP application and I haven't been able to decide between actions and commands for my menu items. I will be using a lot of the ones provided by Eclipse, such as Edit, File and some of their sub-menu items as well. Since it has been nearly two years since the answer provided in the '09 question, I just wanted to make sure there is nothing else that could sway my decision in either direction, or maybe there have been some new developments that I am not aware of.

    Read the article

  • Interacting with system commands using a web dev language

    - by Jamie
    Hi all, first of all, sorry for the vague title. Let me explain. At work we're currently using SunGrid. I've been assigned a project to create a web interface wrapper for interacting with the engine, i.e. displaying users' jobs, submitting jobs via a nice GUI, etc. (Most of the SunGrid commands output XML, which is nice.) My question for you chaps is the following: what web dev language would you use to interact with the system? That is, use the language to do a system call and evaluate the response. I'm not after an argument on which language is best; I just would like to know which language is specifically good for interacting with the system and is also good for web dev.
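
    For a feel of the shape of the task, a hedged Python sketch (the command and XML element names are illustrative of SGE-style qstat output, not verified against the asker's setup):

        import subprocess
        import xml.etree.ElementTree as ET

        # run a grid command that emits XML and pick fields out of the result
        output = subprocess.check_output(["qstat", "-xml"])  # command illustrative
        root = ET.fromstring(output)
        for job in root.iter("job_list"):
            print(job.findtext("JB_name"), job.findtext("state"))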

    Read the article

  • Executing multiple commands from a Windows cmd script

    - by Darren Greaves
    I'm trying to write a Windows cmd script to perform several tasks in series. However, it always stops after the first command in the script. The command it stops after is a Maven build (not sure if that's relevant). How do I make it carry on and run each task in turn, please? Installing any software or configuring the registry etc. is completely out of the question; it has to work on a vanilla Windows XP installation, I'm afraid. Ideally I'd like the script to abort if any of the commands failed, but that's a "nice to have", not essential. Thanks.
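
    A hedged one-file sketch (same caveat as the similar batch question above: Maven's launcher is itself a .bat, so it needs call to return control; goal names are illustrative). Chaining with && also covers the nice-to-have, since it stops at the first failing command:

        :: 'call' returns control after each .bat; '&&' stops on first failure
        call mvn clean install && call mvn site && java -jar somefile.jar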

    Read the article
