Search Results

Search found 12428 results on 498 pages for 'wait types'.


  • MSM Merge Modules in Visual Studio 2013 [on hold]

    - by theGreenCabbage
    Could someone please let me know where I might find resources for creating MSM files? While I am able to create MSI files using InstallShield, it seems that Visual Studio no longer supports Merge Module Projects, judging by the link below and the screenshot of my version of Visual Studio 2013 - http://msdn.microsoft.com/en-us/library/z6z02ts5(v=vs.80).aspx To create a new merge module project: On the File menu, point to Add, then click New Project. In the resulting Add New Project dialog box, in the Project types pane, open the Other Project Types node and select Setup and Deployment Projects. In the Templates pane, choose Merge Module Project.

    Read the article

  • Data architecture for event log metrics?

    - by elliot42
    My service has a large ongoing number of user events, and we would like to do things like "count occurrence of event type T since date D." We are trying to make two basic decisions: What to store? Storing every event vs. only storing aggregates (Event log style) log every event and count them later, vs. (Time-series style) store a single aggregated "count of event E for date D" for every day Where to store the data In a relational database (particularly MySQL) In a non-relational (NoSQL) database In flat log files (collected centrally over the network via syslog-ng) What is standard practice / where can I read more about comparing the different types of systems? Additional details: The total event stream is large, potentially hundreds of thousands of entries per day But our current need is only to count certain types of events within it We don't necessarily need real-time access to the raw data or aggregation results IMHO, "log all events to files, crawl them at a later time to filter and aggregate the stream" is a pretty standard UNIX Way, but my Rails-y compatriots seem to think that nothing is real unless it's in MySQL.
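    For the time-series option, the usual shape is a small rollup table keyed by (event_type, day) that is incremented as events arrive (or back-filled by a nightly crawl of the raw logs) and queried with a ranged SUM. A minimal sketch of that idea, using Python and SQLite purely as stand-ins; the table and column names are invented for illustration, not taken from the asker's system:

```python
import sqlite3
from datetime import date

conn = sqlite3.connect("metrics.db")
conn.execute("""CREATE TABLE IF NOT EXISTS event_counts (
                    event_type TEXT NOT NULL,
                    day        TEXT NOT NULL,   -- ISO date, e.g. '2012-06-14'
                    n          INTEGER NOT NULL DEFAULT 0,
                    PRIMARY KEY (event_type, day))""")

def record(event_type, when=None):
    """Bump the daily aggregate; the raw event can still go to a flat log file."""
    day = (when or date.today()).isoformat()
    conn.execute("INSERT OR IGNORE INTO event_counts (event_type, day, n) VALUES (?, ?, 0)",
                 (event_type, day))
    conn.execute("UPDATE event_counts SET n = n + 1 WHERE event_type = ? AND day = ?",
                 (event_type, day))
    conn.commit()

def count_since(event_type, since_day):
    """Answer 'count occurrences of event type T since date D' from the rollup."""
    row = conn.execute("SELECT COALESCE(SUM(n), 0) FROM event_counts "
                       "WHERE event_type = ? AND day >= ?",
                       (event_type, since_day)).fetchone()
    return row[0]
```

    Raw events can still be appended to flat log files for later replay, so the aggregates stay cheap to query without losing the detail.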

    Read the article

  • How can I tweak this A* search pathfinding algorithm to handle different terrain movement values?

    - by user422318
    I'm creating a 2D map-based action game with an interaction design similar to Diablo II. In other words, the player clicks around a map to move their character. I just finished player movement and am moving on to pathfinding. In the game, enemies should charge the player's character. There are also five different terrain types that give different movement bonuses. I want the AI to take advantage of these terrain bonuses as they try to reach the player. I was told to check out the A* search algorithm (http://en.wikipedia.org/wiki/A*_search_algorithm). I'm doing this game in HTML5 and JavaScript, and found a version in JavaScript: http://www.briangrinstead.com/blog/astar-search-algorithm-in-javascript I'm trying to figure out how to tweak it though. Below are my ideas about what I need to change. What else do I need to worry about? When I create a graph, I will need to initialize the 2D array I pass in based on a traversal of a map that corresponds to the different terrain types. In graph.js: the "GraphNodeType" definition needs to be modified to handle the 5 terrain types. There will be no walls. In astar.js: the g and h scoring will need to be modified. How should I do this? In astar.js: isWall() should probably be removed. My game doesn't have walls. In astar.js: I'm not sure what this is. I think it indicates a node that isn't valid to be processed. When would this happen, though? At a high level, how do I change this algorithm from "oh, is there a wall there?" to "will this terrain get me to the player faster than the terrain around me?" Because of time, I'm also debating reusing my Bresenham algorithm for the enemies. Unfortunately, the different terrain movement bonuses won't be used by the AI, which will make the game suck. :/ I'd really like to have this in for the prototype, but I'm not a developer by trade nor am I a computer scientist. :D If you know of any code that does what I'm looking for, please share! Sanity check tips are also appreciated.
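    A minimal sketch of the core change, written in Python rather than the JavaScript port linked above and with invented terrain names and weights: instead of asking isWall(), the neighbour expansion charges each step the cost of the destination tile's terrain, so g accumulates weighted movement cost while h remains a plain distance estimate scaled so that it never overestimates.

```python
import heapq

# Hypothetical terrain weights: lower cost = faster movement (assumed values).
TERRAIN_COST = {"road": 0.5, "grass": 1.0, "sand": 1.5, "swamp": 2.5, "water": 4.0}
CHEAPEST = min(TERRAIN_COST.values())

def astar(grid, start, goal):
    """grid[y][x] is a terrain-type string; start and goal are (x, y) tuples."""
    open_heap = [(0, start)]
    g = {start: 0}
    came_from = {}
    while open_heap:
        _, current = heapq.heappop(open_heap)
        if current == goal:                      # rebuild the path and return it
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        x, y = current
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]):
                step = TERRAIN_COST[grid[ny][nx]]        # terrain cost replaces isWall()
                new_g = g[current] + step
                if new_g < g.get((nx, ny), float("inf")):
                    g[(nx, ny)] = new_g
                    came_from[(nx, ny)] = current
                    # Manhattan distance scaled by the cheapest terrain keeps h admissible.
                    h = CHEAPEST * (abs(goal[0] - nx) + abs(goal[1] - ny))
                    heapq.heappush(open_heap, (new_g + h, (nx, ny)))
    return None
```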

    Read the article

  • Convincing Upper Management of the need for larger monitors for Developers

    - by The Rubber Duck
    The company I work for has recently hired on several developers, and there are a limited number of monitors to go around. There are two types in the office - a standard 15" (thankfully flatscreen) and a widescreen 23". No developer has a machine capable of a dual monitor setup, and the largest monitors went to the people who got here first. Three or four new senior level developers only have a 15" monitor to work on. To make matters worse, there are perhaps a total of 25-30 DBAs/Testers/Admin types in the company who all have dual screen 23" setups. We have brought the issue to management, and they refuse to take away large monitors from people who have been here for years for the sake of new employees, even if they are senior level. We have pitched the idea of testers sacrificing a large monitor for one of our small ones, but they won't go for that either. What can I say to management to illustrate the need of monitors for developers?

    Read the article

  • How to restore/change Alt+Tab behaviour/ram usage and a few other things after Ubuntu upgrade from 11.04 to 11.10?

    - by fiktor
    I use Ubuntu for programming. I recently updated it from 11.04 to 11.10, and there are some things I don't like in the new version of the Unity desktop interface. I don't actually know whether it is hard to restore the previous behaviour, and if it is not, where I should look to do that. I know a bit of programming, but I really don't know much about Linux settings. I used to have 3-6 terminal windows and switch between them with Alt+Tab and Shift+Alt+Tab. I liked half-transparent terminal windows, since with them I could open a web page with some instructions in Firefox, press Alt+Tab and type commands in a console window while still being able to recognize the text on the web page under it. Now I have problems with my usual work style because of the following "negative" changes:
    1. Alt+Tab shows just one icon for all console windows. If I wait some time it does show all windows, but I don't like to wait; I prefer to remember the order of windows and press Alt+Tab as many times as needed to switch to the right one.
    2. Alt+Shift+Tab to switch in reverse order doesn't work now.
    3. Console windows are not transparent any more. When I don't wait and switch to the single icon, it shows all console windows together, so even if they were transparent I wouldn't be able to see anything below them (I can read something only from the window directly under the current one, not a few levels under).
    4. Running a few console windows in Unity used about 740Mb on Ubuntu 11.04, but uses 1050Mb now. The question is how to get back to 750 or less. I really need my memory, since I use my computer to work with 1512Mb of data and I try to save every 10Mb possible (if it doesn't take too much of the machine's, and more importantly my, time).
    5. When I press the Super key I get a field to type the name of the program I want to run. Now it sometimes shows this field, but when I try to type nothing happens; probably focus is not on the right field.
    I don't really mean to restore exactly the same behaviour, but I want to make my work in Ubuntu 11.10 efficient (at least as efficient as in Ubuntu 11.04), and I would be happy if there are ways to accomplish that. What have I tried: I installed CompizConfig Settings Manager and read this question. However, enabling "Static Application Switcher" makes Alt+Tab crazy: after enabling it, it reports key-binding conflicts with the "Ubuntu Unity Plugin"; Alt+Tab switching doesn't change, but Shift+Alt+Tab now works and shows all windows; and memory usage increases. I have tried turning off the Ubuntu Unity Plugin, but that doesn't seem the right thing to do, since it appears to turn off all menus, many keystrokes and the app launcher, which is usually activated with the Super key. I have found that window transparency can be enabled by the "Opacity, Brightness and Saturation" plugin under Accessibility, but I don't know if enabling it is the right thing to do (at least it increases memory usage). Update: everything is solved except #3; see my own answer below. I have made a separate question about issue #3 (transparency).

    Read the article

  • How do I count Internal Logical Files (ILF) and External Inputs (EI) for a dynamic form entry page?

    - by DmytroL
    Assuming I have an applicant information entry screen, the number and types of fields on which can be defined by the system administrator, how do I go about counting the number of Internal Logical Files (ILFs) and Data Element Types (DETs) for the related data functions? So far I have come up with something like this: ILF #1 (control information): Field Metadata, 1 RET, ~3 DET (name, type, mandatory) ILF #2 (business data): Applicant Data, most likely 1 RET, but how many DET? Of course I could count it as 2 DET (Field ref, Value), but I am not sure that would be correct And when it comes to an External Input (EI), say, "Add New Applicant", things become even more complicated, because the number of DET corresponding to the user-editable fields is totally dependent on the control information in ILF #1, and I am out of ideas here... Anyone fancy to help with that? Thanks in advance!

    Read the article

  • What Design Pattern is separating transform converters

    - by RevMoon
    For converting a Java object model into XML I am using the following design: for the different types of objects (e.g. primitive types, collections, null, etc.) I define for each its own converter, which acts appropriately for the given type. This way the design can easily be extended without adding code to a huge if-else-then construct. The converters are chosen by a method which tests whether the object is convertible at all, using a priority ordering. The priority ordering is important so that, say, a List is not converted by the POJO converter: even though it is convertible as such, it would be more appropriate to use the collection converter. What design pattern is that? I can only think of a similarity to the command pattern.
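    What is described reads like the Strategy pattern combined with a Chain of Responsibility style lookup: each converter knows whether it can handle an object, and the highest-priority match wins. A small sketch of just that selection logic, in Python for brevity and with invented converter names:

```python
class Converter:
    priority = 0
    def can_convert(self, obj):
        raise NotImplementedError
    def to_xml(self, obj):
        raise NotImplementedError

class CollectionConverter(Converter):
    priority = 10                      # checked before the generic POJO converter
    def can_convert(self, obj):
        return isinstance(obj, (list, tuple, set))
    def to_xml(self, obj):
        return "<list>%s</list>" % "".join(dispatch(item) for item in obj)

class PojoConverter(Converter):
    priority = 1                       # catch-all, lowest priority
    def can_convert(self, obj):
        return True
    def to_xml(self, obj):
        return "<value>%s</value>" % obj

CONVERTERS = sorted([CollectionConverter(), PojoConverter()],
                    key=lambda c: c.priority, reverse=True)

def dispatch(obj):
    """Pick the highest-priority converter that claims the object."""
    for converter in CONVERTERS:
        if converter.can_convert(obj):
            return converter.to_xml(obj)
    raise TypeError("no converter for %r" % (obj,))
```

    With this in place, dispatch([1, 2]) goes through the collection converter even though the POJO converter would also accept the list, which is exactly the priority behaviour described above.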

    Read the article

  • Problem serialising TextBuffer

    - by njallam
    The following is supposed to serialise a TextArea's content to a string. The first two line complete fine, however I have problems from then onwards. page_content = subject_content.get_nth_page(pn) page_name = subject_content.get_tab_label(page_content).get_text() c_buffer = page_content.get_buffer() c_format = c_buffer.register_serialize_tagset() serial = c_buffer.serialize(c_format, c_buffer.get_start_iter(), c_buffer.get_end_iter()) The first error I get is: Traceback (most recent call last): File "/home/nja/notetaker/notetaker/NotetakerWindow.py", line 251, in on_btn_save_clicked self.save() File "/home/nja/notetaker/notetaker/NotetakerWindow.py", line 160, in save c_format = c_buffer.register_serialize_tagset() File "/usr/lib/python2.7/dist-packages/gi/types.py", line 43, in function return info.invoke(*args, **kwargs) TypeError: register_serialize_tagset() takes exactly 2 arguments (1 given) When inserting None as a parameter to c_format = c_buffer.register_serialize_tagset() I get the following as well: Traceback (most recent call last): File "/home/nja/notetaker/notetaker/NotetakerWindow.py", line 251, in on_btn_save_clicked self.save() File "/home/nja/notetaker/notetaker/NotetakerWindow.py", line 161, in save serial = c_buffer.serialize(c_format, c_buffer.get_start_iter(), c_buffer.get_end_iter()) File "/usr/lib/python2.7/dist-packages/gi/types.py", line 43, in function return info.invoke(*args, **kwargs) TypeError: serialize() takes exactly 5 arguments (4 given) I have no idea of a workaround for that, however I shouldn't have to fill None in that other function in the first place. What is happening here?

    Read the article

  • C++ strongly typed typedef

    - by Kian
    I've been trying to think of a way of declaring strongly typed typedefs, to catch a certain class of bugs in the compilation stage. It's often the case that I'll typedef an int into several types of ids, or a vector to position or velocity: typedef int EntityID; typedef int ModelID; typedef Vector3 Position; typedef Vector3 Velocity; This can make the intent of code more clear, but after a long night of coding one might make silly mistakes like comparing different kinds of ids, or adding a position to a velocity perhaps. EntityID eID; ModelID mID; if ( eID == mID ) // <- Compiler sees nothing wrong { /*bug*/ } Position p; Velocity v; Position newP = p + v; // bug, meant p + v*s but compiler sees nothing wrong Unfortunately, suggestions I've found for strongly typed typedefs include using boost, which at least for me isn't a possibility (I do have c++11 at least). So after a bit of thinking, I came upon this idea, and wanted to run it by someone. First, you declare the base type as a template. The template parameter isn't used for anything in the definition, however: template < typename T > class IDType { unsigned int m_id; public: IDType( unsigned int const& i_id ): m_id {i_id} {}; friend bool operator==<T>( IDType<T> const& i_lhs, IDType<T> const& i_rhs ); }; Friend functions actually need to be forward declared before the class definition, which requires a forward declaration of the template class. We then define all the members for the base type, just remembering that it's a template class. Finally, when we want to use it, we typedef it as: class EntityT; typedef IDType<EntityT> EntityID; class ModelT; typedef IDType<ModelT> ModelID; The types are now entirely separate. Functions that take an EntityID will throw a compiler error if you try to feed them a ModelID instead, for example. Aside from having to declare the base types as templates, with the issues that entails, it's also fairly compact. I was hoping anyone had comments or critiques about this idea? One issue that came to mind while writing this, in the case of positions and velocities for example, would be that I can't convert between types as freely as before. Where before multiplying a vector by a scalar would give another vector, so I could do: typedef float Time; typedef Vector3 Position; typedef Vector3 Velocity; Time t = 1.0f; Position p = { 0.0f }; Velocity v = { 1.0f, 0.0f, 0.0f }; Position newP = p + v*t; With my strongly typed typedef I'd have to tell the compiler that multypling a Velocity by a Time results in a Position. class TimeT; typedef Float<TimeT> Time; class PositionT; typedef Vector3<PositionT> Position; class VelocityT; typedef Vector3<VelocityT> Velocity; Time t = 1.0f; Position p = { 0.0f }; Velocity v = { 1.0f, 0.0f, 0.0f }; Position newP = p + v*t; // Compiler error To solve this, I think I'd have to specialize every conversion explicitly, which can be kind of a bother. On the other hand, this limitation can help prevent other kinds of errors (say, multiplying a Velocity by a Distance, perhaps, which wouldn't make sense in this domain). So I'm torn, and wondering if people have any opinions on my original issue, or my approach to solving it.

    Read the article

  • Add Service Reference is generating Message Contracts

    - by JohnIdol
    OK, this has been haunting me for a while, can't find much on Google and I am starting to lose hope so I am reverting to the SO community. When I import a given service using "Add service Reference" on Visual Studio 2008 (SP1) all the Request/Response messages are being unnecessarily wrapped into Message Contracts (named as -- "operationName" + "Request"/"Response" + "1" at the end). The code generator says: // CODEGEN: Generating message contract since the operation XXX is neither RPC nor document wrapped. The guys who are generating the wsdl from a Java service say they are specifying DOCUMENT-LITERAL/WRAPPED. Any help/pointer/clue would be highly appreciated. Update: this is a sample of my wsdl for one of the operations that look suspicious. Note the mismatch on the message element attribute for the request, compared to the response. <!- imports namespaces and defines elements --> <wsdl:types> <xsd:schema targetNamespace="http://WHATEVER/" xmlns:xsd_1="http://WHATEVER_1/" xmlns:xsd_2="http://WHATEVER_2/"> <xsd:import namespace="http://WHATEVER_1/" schemaLocation="WHATEVER_1.xsd"/> <xsd:import namespace="http://WHATEVER_2/" schemaLocation="WHATEVER_2.xsd"/> <xsd:element name="myOperationResponse" type="xsd_1:MyOperationResponse"/> <xsd:element name="myOperation" type="xsd_1:MyOperationRequest"/> </xsd:schema> </wsdl:types> <!- declares messages - NOTE the mismatch on the request element attribute compared to response --> <wsdl:message name="myOperationRequest"> <wsdl:part element="tns:myOperation" name="request"/> </wsdl:message> <wsdl:message name="myOperationResponse"> <wsdl:part element="tns:myOperationResponse" name="response"/> </wsdl:message> <!- operations --> <wsdl:portType name="MyService"> <wsdl:operation name="myOperation"> <wsdl:input message="tns:myOperationRequest"/> <wsdl:output message="tns:myOperationResponse"/> <wsdl:fault message="tns:myOperationFault" name="myOperationFault"/> <wsdl:fault message="tns:myOperationFault1" name="myOperationFault1"/> </wsdl:operation> </wsdl:portType> Update 2: I pulled all the types that I had in my imported namespace (they were in a separate xsd) into the wsdl, as I suspected the import could be triggering the message contract generation. To my surprise it was not the case and having all the types defined in the wsdl did not change anything. I then (out of desperation) started constructing wsdls from scratch and playing with the maxOccurs attributes of element attributes contained in a sequence attribute I was able to reproduce the undesired message contract generation behavior. Here's a sample of an element: <xsd:element name="myElement"> <xsd:complexType> <xsd:sequence> <xsd:element minOccurs="0" maxOccurs="1" name="arg1" type="xsd:string"/> </xsd:sequence> </xsd:complexType> </xsd:element> Playing with maxOccurs on elements that are used as messages (all requests and responses basically) the following happens: maxOccurs = "1" does not trigger the wrapping macOcccurs 1 triggers the wrapping maxOccurs = "unbounded" triggers the wrapping I was not able to reproduce this on my production wsdl yet because the nesting of the types goes very deep, and it's gonna take me time to inspect it thoroughly. In the meanwhile I am hoping it might ring a bell - any help highly appreciated.

    Read the article

  • New cast exception with VS2010/.Net 4

    - by Trevor
    [ Updated 25 May 2010 ] I've recently upgraded from VS2008 to VS2010, and at the same time upgraded to .Net 4. I've recompiled an existing solution of mine and I'm encountering a Cast exception I did not have before. The structure of the code is simple (although the actual implementation somewhat more complicated). Basically I have: public class SomeClass : ISomeClass { // Stuff } public static class ClassFactory { public static IInterface GetClassInstance<IInterface>(Type classType) { return (IInterface)Activator.CreateInstance(classType); // This throws a cast exception } } // Call the factory with: ISomeClass anInstance = ClassFactory.GetClassInstance<ISomeClass>(typeof(SomeClass)); Ignore the 'sensibleness' of the above - its provides just a representation of the issue rather than the specifics of what I'm doing (e.g. constructor parameters have been removed). The marked line throws the exception: Unable to cast object of type 'Namespace.SomeClass' to type 'Namespace.ISomeClass'. I suspect it may have something to do with the additional DotNet security (and in particular, explicit loading of assemblies, as this is something my app does). The reason I suspect this is that I have had to add to the config file the setting: <runtime> <loadFromRemoteSources enabled="true" /> </runtime> .. but I'm unsure if this is related. Update I see (from comments) that my basic code does not reproduce the issue by itself. Not surprising I suppose. It's going to be tricky to identify which part of a largish 3-tier CQS system is relevant to this problem. One issue might be that there are multiple assemblies involved. My static class is actually a factory provider, and the 'SomeClass' is a class factory (relevant in that the factories are 'registered' within the app via explicit assembly/type loading - see below) . Upfront I use reflection to 'register' all factories (i.e. classes that implement a particular interface) and that I do this when the app starts by identifying the relevant assemblies, loading them and adding them to a cache using (in essence): Loop over (file in files) { Assembly assembly = Assembly.LoadFile(file); baseAssemblyList.Add(assembly); } Then I cache the available types in these assemblies with: foreach (Assembly assembly in _loadedAssemblyList) { Type[] assemblyTypes = assembly.GetTypes(); _loadedTypesCache.AddRange(assemblyTypes); } And then I use this cache to do a variety of reflection operations, including 'registering' of factories, which involves looping through all loaded (cached) types and finding those that implement the (base) Factory interface. I've experienced what may be a similar problem in the past (.Net 3.5, so not exactly the same) with an architecture that involved dynamically creating classes on the server and streaming the compiled binary of those classes to the client app. The problem came when trying to deserialize an instance of the dynamic class on the client from a remote call: the exception said the class type was not know, even though the source and destination types were exactly the same name (including namespace). Basically the cross boundry versions of the class were not recognised as being the same. I solved that by intercepting the deserialization process and explicitly defining the deseriazation class type in the context of the local assemblies. 
    This experience is what makes me think the types are considered mismatched because (somehow) the interface of the actual SomeClass object and the interface type passed into the generic method are not considered the same type. So (possibly) my question for those more knowledgeable about C#/DotNet is: how does the class loading work such that my app thinks there are two versions/types of the interface type, and how can I fix that? [ whew ... anyone who got here is quite patient .. thanks ]

    Read the article

  • displaying search results with two or more queries

    - by fusion
    in my search form, if the user types 'good', it displays all the results which contain the keyword 'good'. however if the user types in 'good sweetest', it displays no results because there is no record with the two words appearing together; BUT appearing in an entry at different places. for example, the record says: A good action is an ever-remaining store and a pure yield the user types in 'good', it will show up this record, but if the user types in 'good' + 'pure', it will not show anything. what i would like is that if the user types in 'good' + 'pure' it should records containing these keywords highlighting them. search.php code: $search_result = ""; $search_result = $_POST["q"]; $search_result = trim($search_result); //Check if the string is empty if ($search_result == "") { echo "<p class='error'>Search Error. Please Enter Your Search Query.</p>" ; exit(); } if ($search_result == "%" || $search_result == "_" || $search_result == "+" ) { echo "<p class='error1'>Search Error. Please Enter a Valid Search Query.</p>" ; exit(); } $result = mysql_query('SELECT cQuotes, vAuthor, cArabic, vReference FROM thquotes WHERE cQuotes LIKE "%' . mysql_real_escape_string($search_result) .'%" ORDER BY idQuotes DESC', $conn) or die ('Error: '.mysql_error()); function h($s) { echo htmlspecialchars($s, ENT_QUOTES); } function highlightWords($string, $word) { $string = preg_replace("/".preg_quote($word, "/")."/i", "<span class='highlight'>$0</span>", $string); /*** return the highlighted string ***/ return $string; } ?> <div class="caption">Search Results</div> <div class="center_div"> <table> <?php while ($row= mysql_fetch_array($result, MYSQL_ASSOC)) { $cQuote = highlightWords(htmlspecialchars($row['cQuotes']), $search_result); ?> <tr> <td style="text-align:right; font-size:18px;"><?php h($row['cArabic']); ?></td> <td style="font-size:16px;"><?php echo $cQuote; ?></td> <td style="font-size:12px;"><?php h($row['vAuthor']); ?></td> <td style="font-size:12px; font-style:italic; text-align:right;"><?php h($row['vReference']); ?></td> </tr> <?php } ?> </table> </div> search.html: <form name="myform" class="wrapper"> <input type="text" name="q" onkeyup="showUser()" class="txt_search"/> <input type="button" name="button" onclick="showUser()" class="button"/> <p> <div id="txtHint"></div> </form>
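    The usual approach is to split the search string into individual terms, require each term with its own LIKE (ANDed together) so the words may appear anywhere in the record, and then highlight each term separately on output. A small sketch of just that query-building and highlighting logic, written in Python rather than the asker's PHP and reusing the column names above only for illustration:

```python
import re

def build_query(raw):
    """Turn 'good pure' into a WHERE clause requiring both words anywhere in cQuotes."""
    terms = [t for t in raw.split() if t]
    if not terms:
        raise ValueError("empty search query")
    where = " AND ".join(["cQuotes LIKE %s"] * len(terms))
    params = ["%" + t + "%" for t in terms]
    sql = ("SELECT cQuotes, vAuthor, cArabic, vReference FROM thquotes "
           "WHERE " + where + " ORDER BY idQuotes DESC")
    return sql, params

def highlight(text, terms):
    """Wrap every matched term in a highlight span, one term at a time."""
    for t in terms:
        text = re.sub(re.escape(t), r"<span class='highlight'>\g<0></span>",
                      text, flags=re.IGNORECASE)
    return text
```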

    Read the article

  • Development process for an embedded project with significant hardware changes

    - by pierr
    I have a good idea of the Agile development process, but it does not seem to fit well with an embedded project that has significant hardware changes. I will describe below what we are currently doing (ad hoc, with no defined process yet). The changes are divided into three categories, and a different process is used for each of them:
    1. Complete hardware change (example: use a different video codec IP): a) study the new IP; b) RTL/FPGA simulation; c) implement the legacy interface, then go back to b); d) wait until hardware (tape-out) is ready; e) test on the real hardware.
    2. Hardware improvement (example: enhance the image display quality by improving the underlying algorithm): a) RTL/FPGA simulation; b) wait until hardware is ready and test on the hardware.
    3. Minor change (example: only change the hardware register mapping): a) wait until hardware is ready and test on the hardware.
    The worry is that we don't seem to have much control over, or confidence in, software maturity across these hardware changes, since the bring-up schedule is always very tight and the customer expects a seamless transition when updating to a new version of the hardware. How did you manage this kind of hardware change? Did you solve it with a Hardware Abstraction Layer (HAL)? Did you have automated tests for the HAL layer? How did you test when the hardware platform was not even ready? Do you have well-documented processes for this kind of change?
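    One common answer to "how do we test before the hardware is ready?" is a thin hardware abstraction layer whose register-level backend can be swapped between the real device, the FPGA/RTL environment and a pure-software fake, so the same driver code and the same automated tests run in all three. A minimal sketch of that shape, in Python purely for brevity; the class names, register offsets and codec are all invented:

```python
class RegisterBackend:
    """Interface the driver sees; backends differ only in where reads and writes go."""
    def read(self, addr):
        raise NotImplementedError
    def write(self, addr, value):
        raise NotImplementedError

class FakeBackend(RegisterBackend):
    """Pure-software stand-in used while the silicon or FPGA is unavailable."""
    def __init__(self):
        self.regs = {}
    def read(self, addr):
        return self.regs.get(addr, 0)
    def write(self, addr, value):
        self.regs[addr] = value

class VideoCodec:
    """Driver code written against the HAL, not against a specific chip revision."""
    CTRL = 0x00          # invented register offsets
    STATUS = 0x04
    def __init__(self, backend):
        self.hw = backend
    def start(self):
        self.hw.write(self.CTRL, 1)
    def is_running(self):
        return bool(self.hw.read(self.CTRL) & 1)

# The same test runs against FakeBackend today and a real MMIO backend after tape-out.
def test_start_sets_control_bit():
    codec = VideoCodec(FakeBackend())
    codec.start()
    assert codec.is_running()
```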

    Read the article

  • Perl Curses::UI

    - by user353211
    I am trying to use the library Curses::UI from http://search.cpan.org/dist/Curses-UI/ to build a UI on Linux karmic. I can create a simple user interface, e.g.: #!/usr/bin/perl use strict; use Curses; use Curses::UI; $ui = new Curses::UI(-color_support => 1, -clear_on_exit => 1, -intellidraw => 1); my $window = $ui->add('window', 'Window', -intellidraw => 1); my $message = $window->add(-text => "Hello!", -intellidraw => 1); $window->focus(); $ui->mainloop(); Question: I need some way to communicate information to the UI, i.e. I have a loop which will wait for a message to come and change the text in the window. Once this message comes, a popup will be displayed. Attempt: my $ui = new Curses::UI(-color_support => 1, -clear_on_exit => 1, -intellidraw => 1); my $window = $ui->add('window', 'Window', -intellidraw => 1); my $message = $window->add(-text => "Hello!", -intellidraw => 1); pseudocode while(true) #implemented a function to wait { popup($window->text("Hello how are you?")); } $window->focus(); $ui->mainloop(); Problem: the above does not work. I am given a dark screen where my message is displayed. I have read the documentation, and when I relocate $ui->mainloop() above the while loop I am given the user interface, but now nothing communicates to the window. Concise question: I need some way of displaying the user interface, waiting for inputs and displaying messages. Could anyone please help me with this? Thank you!

    Read the article

  • How to change TestNG dataProvider order

    - by momad
    Hi, I am running hundreds of tests against a large publishing system and would like to paralellize the tests using TestNG. However, I cannot find any easy way of doing this. Each test case instanciates an instance of this publisher, send some messages, wait for those messages to be published, then dump out the contents of the publish queues and compare against expected outcome. Doing this with so many tests (even if I paralellize using threads, still takes a very long time to complete (1 day or more)). We've found that in testing this sort of system, it's best to start up system once, run all tests to send their messages, wait for publish to do its thing, dump all outputs, and match outputs with tests and verify. For example, instead of the following: @Test public void testRule1() { Publisher pub = new Publisher(); pub.sendRule(new Rule("test1-a")); sleep(10); // wait 10 seconds pub.dumpRules(); verifyRule("test1-a"); } We wanted to do something like the following: @Test public void testRule1(bool sendMode) { if(sendMode) { this.pub.sendRule(new Rule("test1-a")); } else { verifyRule("test1-a"); } } Where you have a dataProvider run through all the tests with sendMode = true and then perform dumpAllRules() followed by running through all of the tests again with sendMode = false. The problem is, TestNG calls the same method twice, once with sendMode = true followed by sendMode = false. Is there anyway to accomplish this in TestNG? Thanks!

    Read the article

  • NAudio Mp3 Playback in Console

    - by Kurru
    Hi I'm trying to make a helper dll that will simplify the NAudio framework into a subset of functions I'm likely to need but I've hit a stumbling block right off the bat. I'm trying to use the following code to play an mp3 but I'm not hearing anything at all. Any help would be appreciated! static WaveOut waveout; static WaveStream playback; static System.Threading.ManualResetEvent wait = new System.Threading.ManualResetEvent(false); static void Main(string[] args) { System.Threading.Thread t = new System.Threading.Thread(new System.Threading.ThreadStart(PlaySong)); t.Start(); wait.WaitOne(); System.Threading.Thread.Sleep(2 * 1000); waveout.Stop(); waveout.Dispose(); playback.Dispose(); } static void PlaySong() { waveout = new WaveOut(); playback = OpenMp3Stream(@"songname.mp3"); waveout.Init(playback); waveout.Play(); Console.WriteLine("Started"); wait.Set(); } private static WaveChannel32 OpenMp3Stream(string fileName) { WaveChannel32 inputStream; WaveStream mp3Reader = new Mp3FileReader(fileName); WaveStream pcmStream = WaveFormatConversionStream.CreatePcmStream(mp3Reader); WaveStream blockAlignedStream = new BlockAlignReductionStream(pcmStream); inputStream = new WaveChannel32(blockAlignedStream); return inputStream; }

    Read the article

  • How to write a C program using the fork() system call that generates the Fibonacci sequence in the child process

    - by Ellen
    The problem I am having is that when say for instance the user enters 7, then the display shows: 0 11 2 3 5 8 13 21 child ends. I cannot seem to figure out how to fix the 11 and why is it displaying that many numbers in the sequence! Can anyone help? The number of the sequence will be provided in the command line. For example, if 5 is provided, the first five numbers in the Fibonacci sequence will be output by the child process. Because the parent and child processes have their own copies of the data, it will be necessary for the child to output the sequence. Have the parent invoke the wait() call to wait for the child process to complete before exiting the program. Perform necessary error checking to ensure that a non-negative number is passed on the command line. #include <stdio.h> #include <sys/types.h> #include <unistd.h> int main() { int a=0, b=1, n=a+b,i,ii; pid_t pid; printf("Enter the number of a Fibonacci Sequence:\n"); scanf("%d", &ii); if (ii < 0) printf("Please enter a non-negative integer!\n"); else { pid = fork(); if (pid == 0) { printf("Child is producing the Fibonacci Sequence...\n"); printf("%d %d",a,b); for (i=0;i<ii;i++) { n=a+b; printf("%d ", n); a=b; b=n; } printf("Child ends\n"); } else { printf("Parent is waiting for child to complete...\n"); wait(NULL); printf("Parent ends\n"); } } return 0; }

    Read the article

  • Classes, methods, and polymorphism in Python

    - by Morlock
    I made a module prototype for building complex timer schedules in python. The classe prototypes permit to have Timer objects, each with their waiting times, Repeat objects that group Timer and other Repeat objects, and a Schedule class, just for holding a whole construction or Timers and Repeat instances. The construction can be as complex as needed and needs to be flexible. Each of these three classes has a .run() method, permitting to go through the whole schedule. Whatever the Class, the .run() method either runs a timer, a repeat group for a certain number of iterations, or a schedule. Is this polymorphism-oriented approach sound or silly? What are other appropriate approaches I should consider to build such a versatile utility that permits to put all building blocks together in as complex a way as desired with simplicity? Thanks! Here is the module code: ##################### ## Importing modules from time import time, sleep ##################### ## Class definitions class Timer: """ Timer object with duration. """ def __init__(self, duration): self.duration = duration def run(self): print "Waiting for %i seconds" % self.duration wait(self.duration) chime() class Repeat: """ Repeat grouped objects for a certain number of repetitions. """ def __init__(self, objects=[], rep=1): self.rep = rep self.objects = objects def run(self): print "Repeating group for %i times" % self.rep for i in xrange(self.rep): for group in self.objects: group.run() class Schedule: """ Groups of timers and repetitions. Maybe redundant with class Repeat. """ def __init__(self, schedule=[]): self.schedule = schedule def run(self): for group in self.schedule: group.run() ######################## ## Function definitions def wait(duration): """ Wait a certain number of seconds. """ time_end = time() + float(duration) #uncoment for minutes# * 60 time_diff = time_end - time() while time_diff > 0: sleep(1) time_diff = time_end - time() def chime(): print "Ding!"
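    For what it's worth, the structure described above is essentially the Composite pattern: Timer is the leaf, Repeat and Schedule are the containers, and all three expose the same run() method. A short usage sketch against the classes exactly as posted (the durations are arbitrary):

```python
# Compose a nested schedule from the classes defined above and run it.
warmup  = Repeat(objects=[Timer(2), Timer(3)], rep=2)   # waits 2s, 3s, 2s, 3s
session = Repeat(objects=[Timer(5), warmup], rep=1)     # groups nest freely
Schedule(schedule=[warmup, session]).run()
```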

    Read the article

  • Java synchronized seems ignored

    - by viraptor
    Hi, I've got the following code, which I expected to deadlock after printing out "Main: pre-sync". But it looks like synchronized doesn't do what I expect it to. What happens here? import java.util.*; public class deadtest { public static class waiter implements Runnable { Object obj; public waiter(Object obj) { this.obj = obj; } public void run() { System.err.println("Thead: pre-sync"); synchronized(obj) { System.err.println("Thead: pre-wait"); try { obj.wait(); } catch (Exception e) { } System.err.println("Thead: post-wait"); } System.err.println("Thead: post-sync"); } } public static void main(String args[]) { Object obj = new Object(); System.err.println("Main: pre-spawn"); Thread waiterThread = new Thread(new waiter(obj)); waiterThread.start(); try { Thread.sleep(1000); } catch (Exception e) { } System.err.println("Main: pre-sync"); synchronized(obj) { System.err.println("Main: pre-notify"); obj.notify(); System.err.println("Main: post-notify"); } System.err.println("Main: post-sync"); try { waiterThread.join(); } catch (Exception e) { } } } Since both threads synchronize on the created object, I expected the threads to actually block each other. Currently, the code happily notifies the other thread, joins and exits.

    Read the article

  • LWUIT Deadlock (lwuit dialog VS System dialog)

    - by Ramps
    Hi, I have a deadlock problem when I try to make some I/O operations that needs user permission. When user click the button I start a new thread which is responsible for performing IO operations, and I display lwuit "please wait" dialog. Dialog is dismissed by IO thread from callback method. Problem is that, when system dialog appears (asking for user permission ) on top of lwuit dialog - deadlock occurs. I assume that this is because dialog.show() method blocks main thread (EDT), so it's impossible to dismiss system dialog, when lwuit dialog is behind it. Anyone managed to solve this problem? Here is the simplified code, hope it is clear enough: protected void actionPerformed(ActionEvent evt, int id) { switch (id) { case ID_FRIEND: MyRunnableWithIoOperation r = new MyRunnableWithIoOperation(this); new Thread(r).start(); //run the thread performing IO operations Command cmd = mWaitDialog.showDialog(); // show the "please wait" dialog ...//handle cancel }//end switch } /* method called from MyRunnableWithIoOperation, when operation finished*/ public void myCallbackMethod(){ mWaitDialog.dispose(); // } I tried to start my IO thread by calling Display.getInstance().invokeAndBlock( r ), but with no luck. In such case, my "wait dialog" doesn't show up.

    Read the article
