Search Results

Search found 3548 results on 142 pages for 'unix'.


  • Script to install and compile Python, Django, Virtualenv, Mercurial, Git, LessCSS, etc... on Dreamhost

    - by tmslnz
    The Story: After cleaning up my Dreamhost shared server's home folder of all the cruft accumulated over time, I decided to start afresh and compile/reinstall Python. All the tutorials and snippets I found seemed overly simplistic, assuming (or ignoring) a bunch of dependencies needed by Python to compile all modules correctly. So, starting from http://andrew.io/weblog/2010/02/installing-python-2-6-virtualenv-and-virtualenvwrapper-on-dreamhost/ (so far the best guide I found), I decided to write a set-and-forget Bash script to automate this painful process, including, along the way, a bunch of other things I am planning to use.

    The Script: I am hosting the script on http://bitbucket.org/tmslnz/python-dreamhost-batch/src/

    The TODOs: So far it runs fine, and does all it needs to do in about 900 seconds, giving me at the end of the process a fully functional Python / Mercurial / etc. setup without even needing to log out and back in. I thought this might be of use to others too, but there are a few things I think it is missing, and I am not quite sure how to go about them, what the best way to do them is, or whether they make any sense at all:
    - Check for errors and break
    - Check for minor version bumps of the packages and give warnings
    - Check for known dependencies
    - Use arguments to install only some of the packages instead of commenting out lines
    - Organise the code in a manner that is easy to update
    - Optionally make the installers and compiling silent, with error logging to file
    - Failproof .bashrc modification, to prevent breaking SSH logins and having to log back in via FTP to fix it

    EDIT: The implied question is: can anyone, more bashful than me, offer general advice on the worthiness of the above points or highlight any problems they see with this approach? (See my answer to Ry4an's comment below.)

    The Gist: I am no UNIX, Bash or compiler expert, and this has been built iteratively, by trial and error. It is somehow going towards apt-get (well, 1% of it...), but since Dreamhost and others obviously cannot give root access on shared servers, this looks to me like a potentially very useful workaround; particularly so with some community work involved.


  • Unity in C# for Platform Specific Implementations

    - by DxCK
    My program has heavy interaction with the operating system through Win32 API functions. Now I want to migrate my program to run under Mono on Linux (no Wine), and this requires different implementations of the interaction with the operating system. I started designing code that can have a different implementation per platform and is extensible for new future platforms:

        public interface ISomeInterface
        {
            void SomePlatformSpecificOperation();
        }

        [PlatformSpecific(PlatformID.Unix)]
        public class SomeImplementation : ISomeInterface
        {
            #region ISomeInterface Members
            public void SomePlatformSpecificOperation()
            {
                Console.WriteLine("From SomeImplementation");
            }
            #endregion
        }

        public class PlatformSpecificAttribute : Attribute
        {
            private PlatformID _platform;

            public PlatformSpecificAttribute(PlatformID platform)
            {
                _platform = platform;
            }

            public PlatformID Platform
            {
                get { return _platform; }
            }
        }

        public static class PlatformSpecificUtils
        {
            public static IEnumerable<Type> GetImplementationTypes<T>()
            {
                foreach (Assembly assembly in AppDomain.CurrentDomain.GetAssemblies())
                {
                    foreach (Type type in assembly.GetTypes())
                    {
                        if (typeof(T).IsAssignableFrom(type) && type != typeof(T) && IsPlatformMatch(type))
                        {
                            yield return type;
                        }
                    }
                }
            }

            private static bool IsPlatformMatch(Type type)
            {
                return GetPlatforms(type).Any(platform => platform == Environment.OSVersion.Platform);
            }

            private static IEnumerable<PlatformID> GetPlatforms(Type type)
            {
                return type.GetCustomAttributes(typeof(PlatformSpecificAttribute), false)
                    .Select(obj => ((PlatformSpecificAttribute)obj).Platform);
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                Type first = PlatformSpecificUtils.GetImplementationTypes<ISomeInterface>().FirstOrDefault();
            }
        }

    I see two problems with this design:
    1. I can't force implementations of ISomeInterface to have a PlatformSpecificAttribute.
    2. Multiple implementations can be marked with the same PlatformID, and I don't know which to use in Main. Using the first one is, um, ugly.
    How can I solve those problems? Can you suggest another design?


  • cURL gets Internal Server Error when posting to aspx page

    - by Mihai
    I have a big problem. I have some applications built on a Unix-based system, and I use PHP with cURL to POST an XML question to an IIS server running ASP.NET. Every time I ask the server something I get an error:

        HTTP/1.1 500 Internal Server Error
        Date: Tue, 04 May 2010 07:36:08 GMT
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        X-AspNet-Version: 2.0.50727
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Content-Length: 3032

    But if I ask the same question on another server, almost identical to this one (BOTH configured by me), I get the results I should, with these headers:

        HTTP/1.1 200 OK
        Date: Tue, 04 May 2010 07:39:37 GMT
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        X-AspNet-Version: 2.0.50727
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Content-Length: 9169

    I have tried everything and searched hundreds of forums, but I can't find anything. In the IIS logs I only get:

        2010-05-04 07:36:08 W3SVC1657587027 80.xx.xx.xx POST /XML_SERV/XmlAPI.aspx - 80 - 80.xx.xx.xx Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1 500 0 0

    Any ideas where to look or what is going on? I forgot to mention: if I use an XML request tool and ask the same question, it works.


  • JMeter CSV Data Set is corrupting Japanese strings stored as proper UTF-8, I get Question Marks instead

    - by Mark Bennett
    I read in search terms from a simple text file to send to a search engine. It works fine in English, but gives me ???? for any Japanese text. Text with mixed English and Japanese does show the English text, so I know it's reading it.

    What I'm seeing: the input text "Snow Leopard ???????????????" turns into "Snow Leopard ???????????????" in the POST field of an HTTP request. If I set JMeter to encode the data, it just puts in the percent sequences for question marks. Interesting note: in the example above there are 15 Japanese characters and then 15 question marks, so at some point it is being seen as full characters and not just bytes.

    About the data: the CSV file is very simple in structure. There's only one field / one column, which I name TERM and later use as ${TERM}. I don't really need full CSV because it's only one string per line; there are no commas or quotes. When I run the Unix "file" command on the file, it says UTF-8 text. I've also verified it in command-line and graphical mode on two machines.

    JMeter CSV Data Set Config:
    - Filename: japanese-searches.csv
    - File encoding: UTF-8 (also tried without)
    - Variable names: TERM
    - Delimiter: ,
    - Allow quoted data: False (I also tried True; different, but still wrong)
    - Recycle at EOF: True
    - Stop at EOF: False
    - Sharing mode: All threads

    A few things I've tried:
    - Allow quoted data: it changed to other strange characters.
    - -Dfile.encoding=UTF-8
    - Encoding the POST, but it just turned into a bunch of %nn for question marks.

    And I'm not sure how to "debug" just after each line of the CSV is read in. I think it's corrupted right away, but I'm not sure. If it's only mangled when I reference it, then instead of ${TERM} perhaps there's some other "to bytes" function call. I'll start checking into that; I haven't done anything with the JMeter functions yet.


  • Working with datetime type in Quickbooks My Time files

    - by jakemcgraw
    I'm attempting to process QuickBooks My Time imt files using PHP. The imt file is a plain-text XML file. I've been able to use the PHP SimpleXML library with no issues but one: the numeric representation of datetimes in the My Time XML files is something I've never seen before:

        <object type="TIMEPERIOD" id="z128">
            <attribute name="notes" type="string"></attribute>
            <attribute name="start" type="date">308073428.00000000000000000000</attribute>
            <attribute name="running" type="bool">0</attribute>
            <attribute name="duration" type="double">3600</attribute>
            <attribute name="datesubmitted" type="date">310526237.59616601467132568359</attribute>
            <relationship name="activity" type="1/1" destination="ACTIVITY" idrefs="z130"></relationship>
        </object>

    You can see that attribute[@name='start'] has a value of 308073428.00000000000000000000. This is not Excel-style storage (308,073,428 is far too many days since 1900-01-00), and it isn't a Unix epoch timestamp either. So, my question is: has anyone ever seen this type of datetime representation?
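    A guess, not confirmed by anything in the post: My Time is a Mac application, and these values look like seconds since the Cocoa/Core Data reference date (2001-01-01 00:00:00 UTC) rather than the Unix epoch. Under that assumption the conversion is just an offset of 978,307,200 seconds; a minimal C++ sketch of that assumption (constant and variable names hypothetical):

        // Hypothetical conversion: assumes My Time "date" values are seconds since
        // the Cocoa/Core Data reference date (2001-01-01 00:00:00 UTC). Not confirmed.
        #include <cstdio>
        #include <ctime>

        int main() {
            const long long kCocoaEpochOffset = 978307200LL; // seconds between 1970-01-01 and 2001-01-01 UTC
            double myTimeStart = 308073428.0;                // value from the sample XML above
            std::time_t unixTime = static_cast<std::time_t>(myTimeStart + kCocoaEpochOffset);
            char buf[64];
            std::strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S UTC", std::gmtime(&unixTime));
            std::printf("%s\n", buf); // prints the decoded UTC date if the assumption holds
            return 0;
        }

    In PHP the same assumption would amount to adding 978307200 to the value before handing it to date().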


  • MySQL get point in time totals from related tables

    - by batfastad
    Hi everyone. We have an order book and invoicing system, and I've been tasked with trying to output monthly rolling totals from these tables, but I don't really know where to start. I think there's some SQL syntax that I don't even know about yet. I'm familiar with INNER/LEFT JOINs and GROUP BY etc., but grouping by date is confusing, since I don't know how to limit the data to only the current date being grouped at that point. I think this will involve joining the tables to themselves, or possibly a sub-select; I always thought it best to avoid sub-selects except when absolutely necessary.

    Basically the system has 3 tables:
    - orders: order_id, currency, order_stamp
    - orders_lines: order_line_id, invoice_id, order_id, price
    - invoices: invoice_id, invoice_stamp

    order_stamp and invoice_stamp are UTC Unix timestamps stored as integers, not MySQL timestamps. I'm trying to get a listing by year/month showing the total of current unbilled orders (sum of price) at that point in time. Current orders are ones where order_stamp is less than or equal to 00:00 on the 1st of the month. Unbilled orders are ones where invoice_stamp is null or invoice_stamp is greater than 00:00 on the 1st of the month. At that point in time there may not be a related invoice yet, and invoice_id might be null. Anyone got any suggestions on what I should join to what and what I need to group by? Cheers, B


  • ExecutorService memory leak on exception

    - by TofuBeer
    I am having a hard time tracking this down, since the profiler keeps crashing (HotSpot error). Before I go too deep into figuring it out, I'd like to know whether I really have a problem or not. :-) I have a few thread pools created via Executors.newFixedThreadPool(10). The threads connect to different web sites and, on occasion, I get "connection refused" and wind up throwing an exception. When I later call Future.get() to get the result, it catches the ExecutionException that wraps the exception thrown when the connection could not be made.

    The program uses a fairly constant amount of memory up until the point that the exceptions get thrown (they tend to happen in batches when a particular site is overloaded). After that point the memory again remains constant, but at a higher level. So my question is: is the memory behaviour (reported by "top" on Unix) expected because the exceptions just triggered something, or do I probably have an actual leak that I'll need to track down? Additionally, when Future.get() throws an exception, is there anything else I need to do besides catch the exception (such as call Future.cancel() on it)?


  • GUI agent accepts statuses from a daemon and shows them using a progress indicator

    - by Pavel
    Hi to all! My application is a GUI agent which communicates with a daemon through a Unix domain socket wrapped in CFSocket... so there is a main run loop with an added CFRunLoop source. The daemon sends statuses and the agent shows them with a progress indicator. When there is any data on the socket, the callback function starts working, and at that point I have to immediately show a new window with a progress indicator and increase a counter.

        // this function initiates the run loop for the listening socket
        - (int) AcceptDaemonConnection:(ConnectionRef)conn
        {
            int err = 0;
            conn->fSockCF = CFSocketCreateWithNative(NULL, (CFSocketNativeHandle) conn->fSockFD,
                                                     kCFSocketAcceptCallBack, ConnectionGotData, NULL);
            if (conn->fSockCF == NULL)
                err = EINVAL;
            if (err == 0) {
                conn->fRunLoopSource = CFSocketCreateRunLoopSource(NULL, conn->fSockCF, 0);
                if (conn->fRunLoopSource == NULL)
                    err = EINVAL;
                else
                    CFRunLoopAddSource(CFRunLoopGetCurrent(), conn->fRunLoopSource, kCFRunLoopDefaultMode);
                CFRelease(conn->fRunLoopSource);
            }
            return err;
        }

        // callback function
        void ConnectionGotData(CFSocketRef s, CFSocketCallBackType type, CFDataRef address,
                               const void *data, void *info)
        {
        #pragma unused(s)
        #pragma unused(address)
        #pragma unused(info)
            assert(type == kCFSocketAcceptCallBack);
            assert( (int *) data != NULL );
            assert( (*(int *) data) != -1 );

            TStatusUpdate status;
            int nativeSocket = *(int *) data;
            status = [agg AcceptPacket:nativeSocket];   // [stWindow InitNewWindow] inside
            [agg SendUpdateStatus:status.percent];
        }

    The AcceptPacket function receives a packet from the socket and tries to show a new window with a progress indicator. The corresponding function is called, but nothing happens... I think I have to make the main application loop do this work, interrupting the CFSocket loop... or send a notification? No idea...


  • C++ Program Flow: Sockets in an Object and the Main Function

    - by jfm429
    I have a rather tricky problem regarding C++ program flow using sockets. Basically what I have is this: a simple command-line socket server program that listens on a socket and accepts one connection at a time. When that connection is lost it opens up for further connections. The socket communication system is contained in a class. The class is fully capable of receiving the connections and mirroring the received data back to the client. However, the class uses UNIX sockets, which are not object-oriented. My problem is that in my main() function I have one line - the one that creates an instance of that object. The object then initializes and waits. But as soon as a connection is gained, the object's initialization function returns, and when that happens, the program quits. How do I somehow wait until this object is deleted before the program quits?

    Summary:
    - main() creates an instance of the object
    - the object listens
    - a connection is received
    - the object's initialization function returns
    - main() exits (!)

    What I want is for main() to somehow delay until that object is finished with what it's doing (i.e. it will delete itself) before it quits. Any thoughts?
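    One common way out of this (a sketch only; the Server class and the run()/waitUntilDone() names are hypothetical, not the poster's code) is to keep the constructor trivial and put the blocking accept loop in a method that main() calls, or, if the object does its work on another thread, have main() block on a condition variable until the object reports that it is finished:

        // Minimal sketch: main() blocks until the server object says it is done.
        #include <condition_variable>
        #include <mutex>

        class Server {
        public:
            void run() {
                // ... the accept()/read()/write() loop would go here ...
                finish();                      // called when the server decides it is done
            }
            void waitUntilDone() {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_; });
            }
        private:
            void finish() {
                { std::lock_guard<std::mutex> lock(m_); done_ = true; }
                cv_.notify_all();
            }
            std::mutex m_;
            std::condition_variable cv_;
            bool done_ = false;
        };

        int main() {
            Server server;            // construction only sets things up
            server.run();             // blocks for the whole lifetime of the server
            // or, if run() happens on another thread: server.waitUntilDone();
            return 0;
        }

    Either way, main() owns the object's lifetime explicitly instead of relying on the object to delete itself.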


  • ps: Clean way to only get parent processes?

    - by shkschneider
    I use ps ef and ps rf a lot. Here is a sample output for ps rf:

        PID   TTY    STAT  TIME  COMMAND
        3476  pts/0  S     0:00  su ...
        3477  pts/0  S     0:02  \_ bash
        8062  pts/0  T     1:16  \_ emacs -nw ...
        15733 pts/0  R+    0:00  \_ ps xf
        15237 ?      S     0:00  uwsgi ...
        15293 ?      S     0:00  \_ uwsgi ...
        15294 ?      S     0:00  \_ uwsgi ...

    And today I needed to retrieve only the master process of uwsgi in a script (so I want only 15237, but not 15293 nor 15294). As of today, I have tried something like ps rf | grep -v ' \\_ '..., but I would like a cleaner way. I also came across another solution from unix.com's forums:

        ps xf | sed '1d' | while read pid tty stat time command ; do
            [ -n "$(echo $command | egrep '^uwsgi')" ] && echo $pid
        done

    But that is still a lot of pipes and ugly tricks. Is there really no ps option or cleaner trick (maybe using awk) to accomplish that?


  • How is this Perl code selecting two different elements from an array?

    - by Mike
    I have inherited some code from a guy whose favorite pastime was to shorten every line to its absolute minimum (and sometimes only to make it look cool). His code is hard to understand, but I managed to understand (and rewrite) most of it. Now I have stumbled on a piece of code which, no matter how hard I try, I cannot understand.

        my @heads = grep {s/\.txt$//} OSA::Fast::IO::Ls->ls($SysKey,'fo','osr/tiparlo',qr{^\d+\.txt$}) || ();
        my @selected_heads = ();
        for my $i (0..1) {
            $selected_heads[$i] = int rand scalar @heads;
            for my $j (0..@heads-1) {
                last if (!grep $j eq $_, @selected_heads[0..$i-1]);
                $selected_heads[$i] = ($selected_heads[$i] + 1) % @heads; #WTF?
            }
            my $head_nr = sprintf "%04d", $i;
            OSA::Fast::IO::Cp->cp($SysKey,'',"osr/tiparlo/$heads[$selected_heads[$i]].txt","$recdir/heads/$head_nr.txt");
            OSA::Fast::IO::Cp->cp($SysKey,'',"osr/tiparlo/$heads[$selected_heads[$i]].cache","$recdir/heads/$head_nr.cache");
        }

    From what I can understand, this is supposed to be some kind of randomizer, but I have never seen a more complex way to achieve randomness. Or are my assumptions wrong? At least, that's what this code is supposed to do: select 2 random files and copy them.

    === NOTES === The OSA Framework is a framework of our own. Its modules are named after their UNIX counterparts and do some basic testing so that the application does not need to bother with that.


  • parsing the output of the 'w' command?

    - by Blackbinary
    I'm writing a program which requires knowledge of the current load on the system and the activity of any users (it's a load balancer). This is a university assignment, and I am required to use the w command. I'm having a hard time parsing this command's output because it is very verbose. Any suggestions on what I can do would be appreciated. This is a small part of the program, and I am free to use whatever method I like. The most condensed version of w which still has the information I require is w -u -s -f, which produces this:

        10:13:43 up 9:57, 2 users, load average: 0.00, 0.00, 0.00
        USER     TTY     IDLE    WHAT
        fsm      tty7    22:44m  x-session-manager
        fsm      pts/0   0.00s   w -u -s -f

    So out of that, I am interested in the first number after "load average" and the smallest idle time (so I will need to parse them all). My background process will call w, so the fact that w itself has the lowest idle time will not matter (all I will see is the tty time). Do you have any ideas? Thanks. (I am allowed to use alternative Unix commands, like grep, if that helps.)


  • use Ghostscript to convert pcl to postscript

    - by Bryon
    So I want to use Ghostscript to convert files that are created in PCL format to PostScript. That's the gist of my problem. I am simply trying to run it on the command line, but in the final stage it will have to be run from an lp command, like lp -d < gs something something (GPL Ghostscript 9.00, 2010-09-14). I will be running this on a Solaris 10 server, but I believe any Unix system should work similarly.

        bash-3.00# /usr/local/bin/gs -sDEVICE=pswrite -dLanguageLevel=1 -dNOPAUSE -dBATCH -dSAFER -sOutputFile=output.ps cms-form.pcl
        GPL Ghostscript 9.00 (2010-09-14)
        Copyright (C) 2010 Artifex Software, Inc.  All rights reserved.
        This software comes with NO WARRANTY: see the file PUBLIC for details.
        Error: /undefined in &k2G-210z100u0l6d0e63fa0V
        Operand stack:
        Execution stack:
           %interp_exit  .runexec2  --nostringval--  --nostringval--  --nostringval--  2  %stopped_push
           --nostringval--  --nostringval--  --nostringval--  false  1  %stopped_push  1910  1  3
           %oparray_pop  1909  1  3  %oparray_pop  1893  1  3  %oparray_pop  1787  1  3  %oparray_pop
           --nostringval--  %errorexec_pop  .runexec2  --nostringval--  --nostringval--  --nostringval--
           2  %stopped_push  --nostringval--
        Dictionary stack:
           --dict:1154/1684(ro)(G)--  --dict:0/20(G)--  --dict:77/200(L)--
        Current allocation mode is local
        Current file position is 30
        GPL Ghostscript 9.00: Unrecoverable error, exit code 1


  • SQL Server error handling: exceptions and the database-client contract

    - by gbn
    We're a team of SQL Server database developers. Our clients are a mixed bag of C#/ASP.NET, C# and Java web services, Java/Unix services and some Excel. Our client developers only use stored procedures that we provide, and we expect that (where sensible, of course) they treat them like web service methods. Some of our client developers don't like SQL exceptions. They understand exceptions in their own languages, but they don't appreciate that SQL is limited in how we can communicate issues. I don't just mean SQL errors, such as trying to insert "bob" into an int column. I also mean exceptions such as telling them that a reference value is wrong, or that data has already changed, or that they can't do this because their aggregate is not zero. They don't really have any concrete alternatives: they've mentioned that we should use output parameters, but we assume an exception means "processing stopped/rolled back". How do folks here handle the database-client contract? Either generally, or where there is separation between the DB and client code monkeys.

    Edits:
    - We use SQL Server 2005 TRY/CATCH exclusively.
    - We already log all errors to an exception table after the rollback.
    - We're concerned that some of our clients won't check output parameters and will assume everything is OK. We need errors flagged up for support to look at.
    - Everything is an exception... the clients are expected to do some message parsing to separate information vs errors. To separate our exceptions from DB engine and calling errors, they should use the error number (ours are all 50,000, of course).


  • Are there any downsides in using C++ for network daemons?

    - by badcat
    Hey guys! I've been writing a number of network daemons in different languages over the past years, and now I'm about to start a new project which requires a new custom implementation of a proprietary network protocol. The said protocol is pretty simple - some basic JSON-formatted messages which are transmitted in some basic frame wrapping so that clients know a message has arrived completely and is ready to be parsed. The daemon will need to handle a number of connections (about 200 at the same time), do some management of them, and pass messages along, like in a chat room. In the past I've mostly used C++ to write my daemons, often with the Qt4 framework (the network parts, not the GUI parts!), because that's what I also used for the rest of the projects, and it was simple to do and very portable. This usually worked just fine, and I didn't have much trouble.

    Being a Linux administrator for a good while now, I've noticed that most of the network daemons in the wild are written in plain C (of course some are written in other languages too, but I get the feeling that 80% of the daemons are written in plain C). Now I wonder why that is. Is this due to a pure historic UNIX background (like KISS), or for plain portability, or reduction of bloat? What are the reasons not to use C++ or any "higher level" languages for things like daemons? Thanks in advance!

    Update 1: For me, using C++ usually is more convenient because I have objects with getter and setter methods and such. Plain C's "context" objects can be a real pain at some point - especially when you are used to object-oriented programming. Yes, I'm aware that C++ is a superset of C, and that C code is basically C++. But that's not the point. ;)


  • GWT: creating a text widget for highly customized data entry

    - by Caffeine Coma
    I'm trying to implement a kind of "guided typing" widget for data entry, in which the user's text entry is highly controlled and filtered. When the user types a particular character I need to intercept and filter it before displaying it in the widget. Imagine, if you will, a Unix shell embedded as a webapp; that's the kind of thing I'm trying to implement.

    I've tried two approaches. In the first, I extend a TextArea and add a KeyPressHandler to filter the characters. This works, but the browser-provided spelling correction is totally inappropriate, and I don't see how to turn it off. I've tried:

        DOM.setElementProperty(textArea.getElement(), "spellcheck", "false");

    But that seems to have no effect - I still get the red underlines over "typos". In the second approach I use a FocusWidget to get KeyPress events, and a separate Label or HTML widget to present the filtered characters back to the user. This avoids the spelling-correction issue, but since the FocusWidget is not a TextArea, the browser tends to intercept certain typed characters for internal use; e.g. Firefox will use the "/" character to begin a "Quick Find" on the page, and hitting Backspace will load the previous web page. Is there a better way to accomplish what I'm trying to do?


  • MySQL Can Connect Remotely but not Locally

    - by A Wizard Did It
    This is a weird problem and I'm not sure what's going on. I installed MySQL on a Linux box running Ubuntu 10.04 LTS. I can access mysql via SSH (mysql -p) and perform all my commands that way. I added a user, and I can use AddedUser to connect remotely from my machine, but not from the local machine. It makes no sense to me... SELECT host, user FROM mysql.user yields:

        +-----------+------------------+
        | host      | user             |
        +-----------+------------------+
        | %         | AddedUser        |
        | 127.0.0.1 | root             |
        | li241-255 | root             |
        | localhost | debian-sys-maint |
        | localhost | root             |
        +-----------+------------------+

    The problem is I'm developing on this machine using Node.js, and I can't connect locally from the server using the same username. I've tried FLUSH PRIVILEGES, but that seems to have no effect. I know it's not Node.js, because I'm using the same code on another database and it's working in that environment.

    Edit: This is the error Node is giving me:

        node.js:50
            throw e; // process.nextTick error, or 'error' event on first tick
            ^
        Error: ECONNREFUSED, Connection refused
            at Stream._onConnect (net.js:687:18)
            at IOWatcher.onWritable [as callback] (net.js:284:12)

    Edit 2: I have the right port & server as best I can tell. My /etc/mysql/my.cnf contains this:

        port   = 3306
        socket = /var/run/mysqld/mysqld.sock

    My MySQL connection object contains:

        { host: 'localhost',
          port: 3306,
          user: 'removed',
          password: 'removed',
          database: '',
          typeCast: true,
          flags: 260047,
          maxPacketSize: 16777216,
          charsetNumber: 192,
          debug: false,
          ending: false,
          connected: false,
          _greeting: null,
          _queue: [],
          _connection: null,
          _parser: null,
          server: 'ExternalIpAddress' }

    Possibly useful?

        netstat -ln | grep mysql
        unix  2      [ ACC ]     STREAM     LISTENING     1016418    /var/run/mysqld/mysqld.sock


  • How do I prevent qFatal() from aborting the application?

    - by Dave
    My Qt application uses Q_ASSERT_X, which calls qFatal(), which (by default) aborts the application. That's great for the application, but I'd like to suppress that behavior when unit testing the application. (I'm using the Google Test Framework.) I have my unit tests in a separate project, statically linking to the class I'm testing. The documentation for qFatal() reads:

        Calls the message handler with the fatal message msg. If no message handler has been
        installed, the message is printed to stderr. Under Windows, the message is sent to the
        debugger.

        If you are using the default message handler this function will abort on Unix systems
        to create a core dump. On Windows, for debug builds, this function will report a
        _CRT_ERROR enabling you to connect a debugger to the application.
        ...
        To suppress the output at runtime, install your own message handler with
        qInstallMsgHandler().

    So here's my main.cpp file:

        #include <gtest/gtest.h>
        #include <QApplication>

        void testMessageOutput(QtMsgType type, const char *msg)
        {
            switch (type) {
            case QtDebugMsg:
                fprintf(stderr, "Debug: %s\n", msg);
                break;
            case QtWarningMsg:
                fprintf(stderr, "Warning: %s\n", msg);
                break;
            case QtCriticalMsg:
                fprintf(stderr, "Critical: %s\n", msg);
                break;
            case QtFatalMsg:
                fprintf(stderr, "My Fatal: %s\n", msg);
                break;
            }
        }

        int main(int argc, char **argv)
        {
            qInstallMsgHandler(testMessageOutput);
            testing::InitGoogleTest(&argc, argv);
            return RUN_ALL_TESTS();
        }

    But my application is still stopping at the assert. I can tell that my custom handler is being called, because the output when running my tests is:

        My Fatal: ASSERT failure in MyClass::doSomething: "doSomething()", file myclass.cpp, line 21
        The program has unexpectedly finished.

    What can I do so that my tests keep running even when an assert fails?
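    A sketch of a workaround that is often used here (the handler name is made up; worth verifying against the Qt version in use): in Qt 4, qFatal() still aborts after the installed handler returns, so installing a handler alone is not enough. The handler has to not return for QtFatalMsg, for example by throwing, which the test can then catch or assert on:

        // Sketch only: assumes Qt 4's qInstallMsgHandler API, as used in the main.cpp above.
        // Idea: qFatal() calls abort() after the handler returns, so the handler must not
        // return for QtFatalMsg -- here it throws, and the test wraps the call in a try-catch.
        #include <stdexcept>
        #include <string>
        #include <cstdio>
        #include <QtGlobal>

        void throwingMessageHandler(QtMsgType type, const char *msg)
        {
            if (type == QtFatalMsg) {
                // Never return control to qFatal(): surface the failure as a C++ exception.
                throw std::runtime_error(std::string("Qt fatal: ") + msg);
            }
            std::fprintf(stderr, "%s\n", msg);
        }

        // In main(), before RUN_ALL_TESTS():
        //     qInstallMsgHandler(throwingMessageHandler);
        // and in a test body:
        //     EXPECT_THROW(obj.doSomething(), std::runtime_error);

    This relies on Qt being built with exception support, so it is a test-only trick rather than something to ship.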


  • What problem did MS solve by creating PowerShell? [closed]

    - by Fred
    I'm asking because PowerShell confuses me. I've been trying to write some deployment scripts using PowerShell and I've been less than enthused by the result. I have a co-worker who loves PowerShell and defends it at every turn. Said co-worker claims PowerShell was never written to be a strong shell, but instead was written to:
    a) allow you to peek and poke at .NET assemblies on the command line (why is this a reason for PowerShell to exist?);
    b) be hosted in .NET applications for automation, similar to DCOP in KDE and how GNOME is using CORBA;
    c) be treated as ".NET script" rather than as an actual shell (related to b).

    I've always felt like Windows was missing a decent way to bang out automation scripts. cmd is too simplistic in many cases, and WSH is too obtuse (although the combination can be used successfully, I'm not a fan). When I first heard about PowerShell I felt like Windows was finally getting a decent shell that would be able to help with automation of many tasks, but recent experiences, and my co-worker, tell me otherwise. To be clear, I don't take issue with the fact that it's built on .NET, or that it passes objects around rather than text (despite my Unix background :]), and I'm not arguing that PowerShell is useless. But from what I can see, it doesn't solve the problem I was hoping it would solve very well. As soon as you step outside of the .NET/PowerShell world, things quit being nice and cozy for you. So, with all that out of the way: what problem did MS solve by creating PowerShell, or is it some political bastard child as I suspect? I've googled and haven't hit upon anything that sufficiently answers that for me, but the more citations the better.


  • How to properly handle signals when using the worker thread pattern?

    - by ipartola
    I have a simple server that looks something like this:

        void *run_thread(void *arg)
        {
            // Communicate via a blocking socket
        }

        int main()
        {
            // Initialization happens here...

            // Main event loop
            while (1) {
                new_client = accept(socket, ...);
                pthread_create(&thread, NULL, &run_thread, *thread_data*);
                pthread_detach(thread);
            }

            // Do cleanup stuff:
            close(socket);
            // Wait for existing threads to finish
            exit(0);
        }

    Thus, when a SIGINT or SIGTERM is received, I need to break out of the main event loop to get to the cleanup code. Moreover, the master thread is most likely waiting on the accept() call, so it's not able to check some other variable to see if it should break. Most of the advice I found was along the lines of this: http://devcry.blogspot.com/2009/05/pthreads-and-unix-signals.html (creating a special signal-handling thread to catch all the signals and do processing on them). However, it's the processing portion that I can't really wrap my head around: how can I possibly tell the main thread to return from the accept() call and check an external variable to see if it should break?
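    A minimal sketch of one standard approach (not the poster's code; names are made up): install the SIGINT/SIGTERM handlers without SA_RESTART so that accept() in the main thread returns -1 with errno == EINTR, set a sig_atomic_t flag in the handler, and re-check the flag in the loop. Worker threads should block these signals (pthread_sigmask) so the interruption lands on the main thread:

        // Sketch: interrupt accept() with a signal and check a flag.
        // Assumes a POSIX system; error handling trimmed for brevity.
        #include <csignal>
        #include <cerrno>
        #include <cstdio>
        #include <unistd.h>
        #include <sys/socket.h>

        static volatile sig_atomic_t g_stop = 0;

        static void on_signal(int) { g_stop = 1; }

        int main()
        {
            struct sigaction sa = {};
            sa.sa_handler = on_signal;          // no SA_RESTART: accept() will fail with EINTR
            sigemptyset(&sa.sa_mask);
            sigaction(SIGINT, &sa, nullptr);
            sigaction(SIGTERM, &sa, nullptr);

            int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
            // ... bind(), listen(), and worker threads created with these signals blocked ...

            while (!g_stop) {
                int client = accept(listen_fd, nullptr, nullptr);
                if (client < 0) {
                    if (errno == EINTR) continue;   // interrupted: loop re-checks g_stop
                    break;                          // real error
                }
                // hand 'client' off to a worker thread here
            }

            close(listen_fd);
            std::puts("shutting down");
            return 0;
        }

    The dedicated signal-handling thread from the linked article works too; it just needs some way to wake the accept() call (for example, closing the listening socket or signalling the main thread), which is what the EINTR trick gives you directly.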


  • Books for Computer Networking

    - by Altimet Gaandu
    Hi, I am a student of computer engineering from Vasula University, Somalia. We have a subject called Advanced Computer Networks, and the following is the list of recommended books.

    Text Books:
    1. B. A. Forouzan, "TCP/IP Protocol Suite", Tata McGraw Hill edition, Third Edition.
    2. N. Olifer, V. Olifer, "Computer Networks: Principles, Technologies and Protocols for Network Design", Wiley India Edition, First Edition.

    References:
    1. W. Richard Stevens, "TCP/IP", Volumes 1, 2, 3, Addison Wesley.
    2. D. E. Comer, "TCP/IP", Volumes I and II, Pearson Education.
    3. W. R. Stevens, "Unix Network Programming", Vol. 1, Pearson Education.
    4. J. Walrand, P. Varaiya, "High Performance Communication Networks", Morgan Kaufmann.
    5. A. S. Tanenbaum, "Computer Networks", Pearson Education, Fourth Edition.

    But we have been unable to find these either in the market or on the internet (read: torrents). Please provide download links to any of these books and oblige. Thanks.


  • boost::filesystem - how to create a boost path from a windows path string on posix plattforms?

    - by VolkA
    I'm reading path names from a database which are stored as relative paths in Windows format, and I am trying to create a boost::filesystem::path from them on a Unix system. What happens is that the constructor call interprets the whole string as the filename. I need the path to be converted to a correct POSIX path, as it will be used locally. I didn't find any conversion functions in the boost::filesystem reference, nor through Google. Am I just blind, or is there an obvious solution? If not, how would you do this? Example:

        std::string win_path("foo\\bar\\asdf.xml");
        std::string posix_path("foo/bar/asdf.xml");

        // loops just once, as part is the whole win_path interpreted as a filename
        boost::filesystem::path boost_path(win_path);
        BOOST_FOREACH(boost::filesystem::path part, boost_path) {
            std::cout << part << std::endl;
        }

        // prints each path component separately
        boost::filesystem::path boost_path_posix(posix_path);
        BOOST_FOREACH(boost::filesystem::path part, boost_path_posix) {
            std::cout << part << std::endl;
        }
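    Since the stored paths are relative, one workaround is simply to normalize the separators before constructing the path. A sketch (assumes plain relative paths with no drive letters or UNC prefixes; the helper name is made up):

        // Sketch: normalize Windows separators before handing the string to boost::filesystem.
        #include <algorithm>
        #include <iostream>
        #include <string>
        #include <boost/filesystem.hpp>

        boost::filesystem::path from_windows_relative(std::string p)
        {
            std::replace(p.begin(), p.end(), '\\', '/');   // "foo\bar\asdf.xml" -> "foo/bar/asdf.xml"
            return boost::filesystem::path(p);
        }

        int main()
        {
            boost::filesystem::path p = from_windows_relative("foo\\bar\\asdf.xml");
            for (boost::filesystem::path::iterator it = p.begin(); it != p.end(); ++it)
                std::cout << *it << std::endl;             // prints foo, bar, asdf.xml
            return 0;
        }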


  • What is the right path for PHP includes on a Mac?

    - by skorned
    Running Mac OS X 10.5.8, with PHP 5.2.11 pre-installed, using Coda 1.6.10. I'm writing PHP files and then previewing them running from file, not from a server. This was working fine until I tried PHP includes. These don't work as a relative path, only as an absolute path from the root of the drive. Is there any way I can use statements like

        include_once "common/header.php";

    without specifying my entire file path, like so:

        include_once "/Volumes/Macintosh HD/Users/neil/Desktop/Website/ColoredLists_v1.0/common/base.php";

    where ColoredLists_v1.0 is the directory with all the website files in it? I tried solutions like prepending $_SERVER['DOCUMENT_ROOT'] or dirname(__FILE__) to the file paths, but that didn't work, as the variables were not set. Is there any easy way to do this, or a configuration I can change so that it looks in a specific directory by default instead of looking at the drive root? Currently, echoing the include path shows ".:". When I include this line at the start of the script, it works:

        set_include_path('/Volumes/Macintosh HD/Users/neil/Desktop/Website/ColoredLists_v1.0');

    However, if I want to do this for all my scripts, I can't seem to make the change permanent. Even after I edited the Unix include_path in my php.ini, it doesn't seem to work.


  • Recommendations for IPC between parent and child processes in .NET?

    - by Jeremy
    My .NET program needs to run an algorithm that makes heavy use of 3rd-party libraries (32-bit), most of which are unmanaged code. I want to drive the CPU as hard as I can, so the code runs several threads in parallel to divide up the work. I find that running all these threads simultaneously results in temporary memory spikes, causing the process's virtual memory size to approach the 2 GB limit. This memory is released back pretty quickly, but occasionally, if enough threads enter the wrong sections of code at once, the process crosses the "red line" and either the unmanaged code or the .NET code encounters an out-of-memory error. I can throttle back the number of threads, but then my CPU usage is not as high as I would like.

    I am thinking of creating worker processes rather than worker threads to help avoid the out-of-memory errors, since doing so would give each thread of execution its own 2 GB of virtual address space (my box has lots of RAM). I am wondering what the best/easiest methods are to communicate the input and output between the processes in .NET. The file system is an obvious choice. I am used to shared memory, named pipes, and such from my UNIX background. Is there a Windows- or .NET-specific mechanism I should use?


  • Static libraries in version-cross-compiled program

    - by Brian Postow
    I have a Unix command-line app (with a big, nasty makefile) that I'm trying to run on a Mac. I am compiling it on a 10.6 system, with all of the appropriate libraries, of course. The deployment environment is a 10.5 system with no extra libraries. I compiled without -dynamic, and it appears to have linked the libraries statically, correctly. When I run it on the 10.6 system, it works. However, when I run it on the 10.5 system, I get:

        dyld: unknown required load command 0x80000022

    I got this same error when I compiled things for the 10.6 system using the 10.5 Xcode, so it looks like a version-mismatch type of problem. However, I used gcc-4.0, and

        $CFLAGS = -isysroot /Developer/SDKs/MacOSX10.5.sdk -mmacosx-version-min=10.5

    so it SHOULD be set up for 10.5... any ideas? Thanks.

    Editing an ancient question: I have the exact same problem on a different computer. This time I am at 10.5.8, fully updated, and the same executable still works on 10.6. Has anyone had any luck with this in the months since I asked?

