Search Results

Search found 25534 results on 1022 pages for 'write powershell'.

Page 115/1022 | < Previous Page | 111 112 113 114 115 116 117 118 119 120 121 122  | Next Page >

  • Is this the right way to write a ProtocolDecoder in MINA?

    - by phpscriptcoder
    public class CustomProtocolDecoder extends CumulativeProtocolDecoder{ byte currentCmd = -1; int currentSize = -1; boolean isFirst = false; @Override protected boolean doDecode(IoSession is, ByteBuffer bb, ProtocolDecoderOutput pdo) throws Exception { if(currentCmd == -1) { currentCmd = bb.get(); currentSize = Packet.getSize(currentCmd); isFirst = true; } while(bb.remaining() > 0) { if(!isFirst) { currentCmd = bb.get(); currentSize = Packet.getSize(currentCmd); } else isFirst = false; //System.err.println(currentCmd + " " + bb.remaining() + " " + currentSize); if(bb.remaining() >= currentSize - 1) { Packet p = PacketDecoder.decodePacket(bb, currentCmd); pdo.write(p); } else { bb.flip(); return false; } } if(bb.remaining() == 0) return true; else return false; } } Anyone see anything wrong with this code? When a lot of packets are received at once, even when only one client is connected, one of them might get cut off at the end (12 bytes instead of 15 bytes, for example) which is obviously bad.
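    A minimal sketch of how doDecode is often structured with CumulativeProtocolDecoder, reusing the asker's Packet and PacketDecoder helpers (untested against this protocol; import paths assume MINA 1.x): mark the buffer before each message and reset it — rather than flip it — when the message is incomplete, so the cumulative decoder can keep the partial data for the next pass.

    ```java
    import org.apache.mina.common.ByteBuffer;
    import org.apache.mina.common.IoSession;
    import org.apache.mina.filter.codec.CumulativeProtocolDecoder;
    import org.apache.mina.filter.codec.ProtocolDecoderOutput;

    // Sketch only -- Packet and PacketDecoder are the asker's own classes.
    public class CustomProtocolDecoder extends CumulativeProtocolDecoder {
        @Override
        protected boolean doDecode(IoSession session, ByteBuffer in, ProtocolDecoderOutput out)
                throws Exception {
            while (in.remaining() > 0) {
                in.mark();                        // remember where this message starts
                byte cmd = in.get();              // command byte
                int size = Packet.getSize(cmd);   // total message size, command byte included
                if (in.remaining() < size - 1) {  // rest of the message has not arrived yet
                    in.reset();                   // rewind; MINA keeps the partial data
                    return false;                 // decoded nothing this pass
                }
                out.write(PacketDecoder.decodePacket(in, cmd));
            }
            return true;                          // buffer fully consumed
        }
    }
    ```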

    Read the article

  • How do I write this SQL statement to get the ad and posting? (PHP/MySQL)

    - by ggfan
    I am a little confused on the logic of how to write this SQL statement. When a user clicks on a tag, say HTML, it would display all the posts with HTML as its tag. (a post can have multiple tags) I have three tables: Posting--posting_id, title, detail, etc tags--tagID, tagname postingtag--posting_id, tagID I want to display all the title of the post and the date added. global $dbc; $tagID=$_GET['tagID']; //the GET is set by URL //part I need help with. I need another WHERE statment to get to the posting table $query = "SELECT p.title,p.date_added, t.tagname FROM posting as p, postingtag as pt, tags as t WHERE t.tagID=$tagID"; $data = mysqli_query($dbc, $query); echo '<table>'; echo '<tr><td><b>Title</b></td><td><b>Date Posted</b></td></tr>'; while ($row = mysqli_fetch_array($data)) { echo '<tr><td>'.$row['title'].'</td>'; echo '<td>'.$row['date_added'].'</td></tr>'; } echo '</table>'; }
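    One way to write the join, using only the table and column names given in the question (the tag id is hard-coded here for readability; $tagID should still be bound or escaped rather than interpolated):

    ```sql
    SELECT p.title, p.date_added, t.tagname
    FROM posting AS p
    JOIN postingtag AS pt ON pt.posting_id = p.posting_id
    JOIN tags AS t        ON t.tagID       = pt.tagID
    WHERE t.tagID = 123;   -- substitute the requested tag id
    ```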

    Read the article

  • How to write to a text file in pipe delimited format from SQL Server / ASP.Net?

    - by NJTechGuy
    I have a text file which needs to be constantly updated (regular intervals). All I want is the syntax and possibly some code that outputs data from a SQL Server database using ASP.Net. The code I have so far is : <%@ Import Namespace="System.IO" %> <script language="vb" runat="server"> sub Page_Load(sender as Object, e as EventArgs) Dim FILENAME as String = Server.MapPath("Output.txt") Dim objStreamWriter as StreamWriter ' If Len(Dir$(FILENAME)) > 0 Then Kill(FILENAME) objStreamWriter = File.AppendText(FILENAME) objStreamWriter.WriteLine("A user viewed this demo at: " & DateTime.Now.ToString()) objStreamWriter.Close() Dim objStreamReader as StreamReader objStreamReader = File.OpenText(FILENAME) Dim contents as String = objStreamReader.ReadToEnd() lblNicerOutput.Text = contents.Replace(vbCrLf, "<br>") objStreamReader.Close() end sub </script> <asp:label runat="server" id="lblNicerOutput" Font-Name="Verdana" /> With PHP, it is a breeze, but with .Net I have no clue. If you could help me with the database connectivity and how to write the data in pipe delimited format to an Output.txt file, that had be awesome. Thanks guys!
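    A rough, untested sketch of the database half in VB.NET — the connection string, table and column names below are placeholders, not anything taken from the question — writing each row as field1|field2|field3:

    ```vbnet
    <%@ Import Namespace="System.Data.SqlClient" %>
    <%@ Import Namespace="System.IO" %>
    <script language="vb" runat="server">
    Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs)
        ' Placeholder connection string and query -- replace with your own.
        Dim connStr As String = "Server=.;Database=MyDb;Integrated Security=True"
        Dim outFile As String = Server.MapPath("Output.txt")
        Using conn As New SqlConnection(connStr)
            Using cmd As New SqlCommand("SELECT Col1, Col2, Col3 FROM MyTable", conn)
                conn.Open()
                Using reader As SqlDataReader = cmd.ExecuteReader()
                    Using writer As StreamWriter = File.CreateText(outFile)
                        While reader.Read()
                            ' Pipe-delimit the columns of each row.
                            writer.WriteLine(reader(0).ToString() & "|" & _
                                             reader(1).ToString() & "|" & _
                                             reader(2).ToString())
                        End While
                    End Using
                End Using
            End Using
        End Using
    End Sub
    </script>
    ```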

    Read the article

  • Is it possible to write a program which prints its own source code utilizing a "sequence-generating-function"?

    - by guest
    Is it possible to write a program which prints its own source code utilizing a "sequence-generating function"? What I call a sequence-generating function is simply a function which returns a value out of a specific interval (e.g. the printable ASCII characters, 32-126). The point is that this generated sequence should be the program's own source code. As you can see, implementing a function which returns an arbitrary sequence is trivial, but since the returned sequence must contain the implementation of the function itself, it is a highly non-trivial task. This is how such a program (and its corresponding output) could look: #include <stdio.h> int fun(int x) { ins1; ins2; ins3; . . . return y; } int main(void) { int i; for ( i=0; i<size of the program; i++ ) { printf("%c", fun(i)); } return 0; } I personally think it is not possible, but since I don't know very much about the underlying matter I posted my thoughts here. I'm really looking forward to hearing some opinions!

    Read the article

  • Do I need to write a trigger for such a simple constraint?

    - by Paul Hanbury
    I really had a hard time knowing what words to put into the title of my question, as I am not especially sure if there is a database pattern related to my problem. I will try to simplify matters as much as possible to get directly to the heart of the issue. Suppose I have some tables. The first one is a list of widget types: create table widget_types ( widget_type_id number(7,0) primary key, description varchar2(50) ); The next one contains icons: create table icons ( icon_id number(7,0) primary key, picture blob ); Even though the users get to select their preferred widget, there is a predefined subset of widgets that they can choose from for each widget type. create table icon_associations ( widget_type_id number(7,0) references widget_types, icon_id number(7,0) references icons, primary key (widget_type_id, icon_id) ); create table icon_prefs ( user_id number(7,0) references users, widget_type_id number(7,0), icon_id number(7,0), primary key (user_id, widget_type_id), foreign key (widget_type_id, icon_id) references icon_associations ); Pretty simple so far. Let us now assume that if we are displaying an icon to a user who has not set up his preferences, we choose one of the appropriate images associated with the current widget. I'd like to specify the preferred icon to display in such a case, and here's where I run into my problem: alter table icon_associations add ( is_preferred char(1) check( is_preferred in ('y','n') ) ) ; I do not see how I can enforce that for each widget_type there is one, and only one, row having is_preferred set to 'y'. I know that in MySQL, I am able to write a subquery in my check constraint to quickly resolve this issue. This is not possible with Oracle. Is my mistake that this column has no business being in the icon_associations table? If not where should it go? Is this a case where, in Oracle, the constraint can only be handled with a trigger? I ask only because I'd like to go the constraint route if at all possible. Thanks so much for your help, Paul
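    If "at most one preferred icon per widget type" is enough (the "at least one" half is hard to state declaratively), Oracle can enforce it without a trigger via a unique function-based index: rows where is_preferred is 'n' map to NULL and are simply not indexed, so only the 'y' rows are constrained. The index name below is just an example.

    ```sql
    CREATE UNIQUE INDEX uq_one_preferred_icon
      ON icon_associations (CASE WHEN is_preferred = 'y' THEN widget_type_id END);
    ```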

    Read the article

  • How to write a simple Lexer/Parser with ANTLR 2.7?

    - by Burkhard
    Hello, I have a complex grammar (in antlr 2.7) which I need to extend. Having never used antlr before, I wanted to write a very simple Lexer and Parser first. I found a very good explanation for antlr3 and tried to adapt it: header{ #include <iostream> using namespace std; } options { language="Cpp"; } class P2 extends Parser; /* This will be the entry point of our parser. */ eval : additionExp ; /* Addition and subtraction have the lowest precedence. */ additionExp : multiplyExp ( "+" multiplyExp | "-" multiplyExp )* ; /* Multiplication and addition have a higher precedence. */ multiplyExp : atomExp ( "*" atomExp | "/" atomExp )* ; /* An expression atom is the smallest part of an expression: a number. Or when we encounter parenthesis, we're making a recursive call back to the rule 'additionExp'. As you can see, an 'atomExp' has the highest precedence. */ atomExp : Number | "(" additionExp ")" ; /* A number: can be an integer value, or a decimal value */ number : ("0".."9")+ ("." ("0".."9")+)? ; /* We're going to ignore all white space characters */ protected ws : (" " | "\t" | "\r" | "\n") { newline(); } ; It does generate four files without errors: P2.cpp, P2.hpp, P2TokenTypes.hpp and P2TokenTypes.txt. But now what? How do I create a working programm with that? I tried to add these files to a VS2005-WinConsole-Project but it does not compile: p2.cpp(277) : fatal error C1010: unexpected end of file while looking for precompiled header. Did you forget to add '#include "stdafx.h"' to your source?

    Read the article

  • Can I write a .NETCF Partial Class to extend System.Windows.Forms.UserControl?

    - by eidylon
    Okay... I'm writing a .NET CF (VBNET 2008 3.5 SP1) application, which has one master form, and it dynamically loads specific UserControls based on menu click, in a sort of framework idea. There are certain methods and properties these controls all need to work within the app. Right now I am doing this as an Interface, but this is aggravating as all get up, because some of the methods are optional, and yet I MUST implement them by the nature of interfaces. I would prefer to use inheritance, so that I can have certain code be inherited with overridability, but if I write a class which inherits System.Windows.Forms.UserControl and then inherit my control from that, it squiggles, and tells me that UserControls MUST inherit directly from System.Windows.Forms.UserControl. (Talk about a design flaw!) So next I thought, well, let me use a partial class to extend System.Windows.Forms.UserControl, but when I do that, even though it all seems to compile fine, none of my new properties/methods show up on my controls. Is there any way I can use partial classes to 'extend' System.Windows.Forms.UserControl? For example, can anyone give me a code sample of a partial class which simply adds a MyCount As Integer readonly property to the System.Windows.Forms.UserControl class? If I can just see how to get this going, I can take it from there and add the rest of my functionality. Thanks in advance! I've been searching google, but can't find anything that seems to work for UserControl extension on .NET CF. And the Interface method is driving me crazy as even a small change means updating ALL the controls whether they need to 'override' the method or not.
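    For what it's worth, partial classes can only add members to a type compiled in the same assembly, so System.Windows.Forms.UserControl is out of reach that way. One possible workaround on VB 2008 / .NET 3.5 is an extension method — it gives a method rather than the read-only property asked about, and this sketch is untested on the Compact Framework:

    ```vbnet
    Imports System.Runtime.CompilerServices
    Imports System.Windows.Forms

    ' Sketch: attaches a GetMyCount() helper to every UserControl.
    ' The body is a placeholder -- substitute whatever the count should really be.
    Module UserControlExtensions
        <Extension()> _
        Public Function GetMyCount(ByVal ctl As UserControl) As Integer
            Return ctl.Controls.Count
        End Function
    End Module
    ```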

    Read the article

  • How to write to a Text File in Pipe delimited format from MS Sql Server / ASP.Net?

    - by NJTechGuy
    I have a text file which needs to be constantly updated (regular intervals). All I want is the syntax and possibly some code that outputs data from a MS Sql Database using ASP.Net. The code I have so far is : <%@ Import Namespace="System.IO" %> <script language="vb" runat="server"> sub Page_Load(sender as Object, e as EventArgs) Dim FILENAME as String = Server.MapPath("Output.txt") Dim objStreamWriter as StreamWriter ' If Len(Dir$(FILENAME)) > 0 Then Kill(FILENAME) objStreamWriter = File.AppendText(FILENAME) objStreamWriter.WriteLine("A user viewed this demo at: " & DateTime.Now.ToString()) objStreamWriter.Close() Dim objStreamReader as StreamReader objStreamReader = File.OpenText(FILENAME) Dim contents as String = objStreamReader.ReadToEnd() lblNicerOutput.Text = contents.Replace(vbCrLf, "<br>") objStreamReader.Close() end sub </script> <asp:label runat="server" id="lblNicerOutput" Font-Name="Verdana" /> With PHP, it is a breeze, but with .Net I have no clue. If you could help me with the database connectivity and how to write the data in pipe delimited format to an Output.txt file, that had be awesome. Thanks guys!

    Read the article

  • The best way to write a jQuery plugin - If there is such a way?

    - by Nick Lowman
    There are quite a few ways to write plugins i.e. here's a nice example and what I've seen quite a lot of lately is the following code pattern and it's used by Doug Neiner here; (function($){ $.formatLink = function(el, options){ var base = this; base.$el = $(el); base.el = el; base.$el.data("formatLink", base); base.init = function(){ base.options = $.extend({}, $.formatLink.defaultOptions, options); //code here } base.init(); }; $.formatLink.defaultOptions = { }; $.fn.formatLink = function(options){ return this.each(function(){ (new $.formatLink(this, options)); }); }; })(jQuery); So, can anyone tell me the benefits of using the pattern above rather than the one below. I can't see the point in calling the $.extend function for every element in the jQuery stack (above), where the example below only does this once and then works on the stack. To test it I created two plugins, using both patterns, which applied styles to about 5000 li elements and the code below took about 1 second whereas the pattern above took about 1.3 seconds. (function($){ $.fn.formatLink = function(options){ var options = $.extend({}, $.fn.formatLink.defaultOptions, options || {}); return this.each(function(){ //code here }); }); $.fn.formatLink.defaultOptions ={} })(jQuery);

    Read the article

  • Is there an efficient way to write this PHP if / else statement?

    - by nvoyageur
    I've written a simple issue tracker for my web app. I have some comments that I want to keep private (only a role of 'root' can see them). Is there a better way to write the following so I do not need the empty else section? $role will be 'root' or some other values $is_private will be true if the comment is private <?php // Don't show private comments to non-root users if ($is_private && 'root' != $role): // NON Root cannot see private else: ?> <div class="comment <?= $is_private ? 'private' : '' ; ?>"> <div class="comment-meta toolbar"> <?= $is_private ? 'PRIVATE - ': ''; ?> <span class="datestamp"><?= $created_at; ?></span> - <span class="fullname"><?= $fname . ' ' . $lname; ?></span></div> <p class="content"><?= nl2br($body); ?></p> </div> <?php endif; ?>
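    One way to lose the empty branch is to invert the test (same logic by De Morgan: show the comment unless it is private and the viewer is not root):

    ```php
    <?php if (!$is_private || 'root' == $role): ?>
        <!-- the comment markup from above goes here, unchanged -->
    <?php endif; ?>
    ```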

    Read the article

  • Why can’t a PHP script write a file on Server 2008 via the command line or Task Scheduler?

    - by rg89
    I created a question on serverfault.com, and it was recommended that I ask here. http://serverfault.com/questions/140669/why-cant-php-script-write-a-file-on-server-2008-via-command-line-or-task-schedul I have a PHP script. It runs well when I use a browser. It writes an XML file in the same directory. The script takes ~60 seconds to run, and the resulting XML file is ~16 MB. I am running PHP 5.2.13 via FastCGI on Windows Server Web edition SP1 64 bit. The code pulls inventory from SQL server, runs a loop to build an XML file for a third party. I created a task in task scheduler to run c:\php5\php.exe "D:\inetpub\tools\build.php" The task scheduler shows a time lapse of about a minute, which is how long the script takes to run in a browser. No error returned, but no file created. Each time I make a change to the scheduled task properties, a user password box comes up and I enter the administrator account password. If I run this same path and argument at a command line it does not error and does not create the file. When I right click run command prompt as an administrator, the file is still not created. I get my echo statement "file published" that is after the file creation and no error is returned. I am doing a simple fopen fwrite fclose to save the contents of a php variable to a .xml file, and the file only gets created when the script is run through the browser. Here's what happens after the xml-building loop: $feedContent .= "</feed"; sqlsrv_close( $conn ); echo "<p>feed built</p>"; $feedFile = "feed.xml"; $handler = fopen($feedFile, 'w'); fwrite( $handler, $feedContent ); fclose( $handler ); echo "<p>file published</p>"; Thanks
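    One guess worth ruling out, since it fits the "works in the browser, silent on the command line" symptom: fopen('feed.xml', 'w') is relative to whatever directory php.exe was started from, not to the script. Anchoring the path to the script and checking fopen's return value makes any failure visible:

    ```php
    // Assumption: the XML-building loop above has already filled $feedContent.
    $feedFile = dirname(__FILE__) . DIRECTORY_SEPARATOR . 'feed.xml';
    $handler  = fopen($feedFile, 'w');
    if ($handler === false) {
        // Surfaces permission or path problems instead of failing silently.
        die('Could not open ' . $feedFile . ' for writing');
    }
    fwrite($handler, $feedContent);
    fclose($handler);
    echo '<p>file published to ' . $feedFile . '</p>';
    ```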

    Read the article

  • How can I write a function template for all types with a particular type trait?

    - by TC
    Consider the following example: struct Scanner { template <typename T> T get(); }; template <> string Scanner::get() { return string("string"); } template <> int Scanner::get() { return 10; } int main() { Scanner scanner; string s = scanner.get<string>(); int i = scanner.get<int>(); } The Scanner class is used to extract tokens from some source. The above code works fine, but fails when I try to get other integral types like a char or an unsigned int. The code to read these types is exactly the same as the code to read an int. I could just duplicate the code for all other integral types I'd like to read, but I'd rather define one function template for all integral types. I've tried the following: struct Scanner { template <typename T> typename enable_if<boost::is_integral<T>, T>::type get(); }; Which works like a charm, but I am unsure how to get Scanner::get<string>() to function again. So, how can I write code so that I can do scanner.get<string>() and scanner.get<any integral type>() and have a single definition to read all integral types? Update: bonus question: What if I want to accept more than one range of classes based on some traits? For example: how should I approach this problem if I want to have three get functions that accept (i) integral types (ii) floating point types (iii) strings, respectively.
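    One possible shape for this with Boost: keep the enable_if-constrained overload for the integral types and add a second overload enabled only for std::string. Further ranges (for the bonus question, e.g. a boost::is_floating_point overload) can be added the same way. A compilable sketch:

    ```cpp
    #include <string>
    #include <boost/utility/enable_if.hpp>
    #include <boost/type_traits/is_integral.hpp>
    #include <boost/type_traits/is_same.hpp>

    // Sketch: one overload is enabled for every integral T, the other only for
    // std::string, so both kinds of call resolve again.
    struct Scanner {
        template <typename T>
        typename boost::enable_if<boost::is_integral<T>, T>::type get() {
            return 10;                        // shared code for all integral types
        }

        template <typename T>
        typename boost::enable_if<boost::is_same<T, std::string>, T>::type get() {
            return std::string("string");     // string-specific code
        }
    };

    int main() {
        Scanner scanner;
        std::string s = scanner.get<std::string>();
        int i  = scanner.get<int>();
        char c = scanner.get<char>();         // now also uses the integral overload
        (void)s; (void)i; (void)c;
        return 0;
    }
    ```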

    Read the article

  • How to read/write from Erlang to a named pipe?

    - by cstar
    I need my erlang application to read and write through a named pipe. Opening the named pipe as a file will fail with eisdir. I wrote the following module, but it is fragile and feels wrong in many ways. Also it fails on reading after a while. Is there a way to make it more ... elegant ? -module(port_forwarder). -export([start/2, forwarder/2]). -include("logger.hrl"). start(From, To)-> spawn(fun() -> forwarder(From, To) end). forwarder(FromFile, ToFile) -> To = open_port({spawn,"/bin/cat > " ++ ToFifo}, [binary, out, eof,{packet, 4}]), From = open_port({spawn,"/bin/cat " ++ FromFifo}, [binary, in, eof, {packet, 4}]), forwarder(From, To, nil). forwarder(From, To, Pid) -> receive {Manager, {command, Bin}} -> ?ERROR("Sending : ~p", [Bin]), To ! {self(), {command, Bin}}, forwarder(From, To, Manager); {From ,{data,Data}} -> Pid ! {self(), {data, Data}}, forwarder(From, To, Pid); E -> ?ERROR("Quitting, first message not understood : ~p", [E]) end. As you may have noticed, it's mimicking the port format in what it accepts or returns. I want it to replace a C code that will be reading the other ends of the pipes and being launched from the debugger.

    Read the article

  • How to write a generic USB Host Driver for Printers from various vendors?

    - by Ullas
    I want to develop a USB host on an embedded device that will talk to printers from various vendors. Drivers for the vendor-specific printers would be available on the PC which is ultimately communicating with the printer, but my device is facilitating this communication and needs to perform the basic handshaking/setup of the printer (i.e., it needs to know when the printer is connected, which socket IDs need to be opened for CTRL and DATA transmissions, etc.). All of these printers are supposed to comply with the IEEE 1284.4 standard, but I see that many of them vary quite a bit. One approach I have is to take the USB traces of the handshaking from each of these printers and write various sections of code respectively (I know, that sounds ridiculous!). Is there a generic way to do this? Is there any available forum where this standard information is documented? For example: EPSON uses 'EPSON-CTRL' and 'EPSON-DATA' for its control and data services, which need to be provided to get the socket IDs for these services. I am pretty sure HP, Canon, etc. have their own service names as well. As per the standard, this was supposed to be captured at IANA but I don't see anything there. Any help on this would be greatly appreciated. Thanks and regards, Ullas

    Read the article

  • When to use basic types (Integer, String), and when to write a new class?

    - by belgarat
    Stack Overflow users: A lot of things can be represented in programs using the basic types, or we can create a new class for them. Example: a social security number can be a number, a string, or its own object. (Other common examples: phone numbers, names, zip codes, user IDs, order IDs and other IDs.) My question is: when should the basic types be used, and when should we write ourselves a new class? I see that when you need to add behavior, you'll want to create a class (for example, social security number parsing, validation, formatting, etc.). But is this the only criterion? I have come across cases where many of these things are represented as Java Integers and/or Strings. We lose the benefit of type checking, and I have often seen bugs caused by parameters being mixed up in calls to function(Integer, Integer, Integer, Integer). On the other hand, some programmers are opposed to over-designing by creating classes for "everything". Obviously, the answer is "it depends". But what do you think, and what do you normally do?
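    For illustration, here is a minimal sketch of the "write a small class" option for the social-security-number example — the wrapper buys compile-time type checking and a single home for validation, at the cost of one more class (the format check below is only a stand-in for real rules):

    ```java
    public final class SocialSecurityNumber {
        private final String value;

        public SocialSecurityNumber(String value) {
            // Validation lives with the type instead of being scattered over callers.
            if (value == null || !value.matches("\\d{3}-\\d{2}-\\d{4}")) {
                throw new IllegalArgumentException("Not a valid SSN: " + value);
            }
            this.value = value;
        }

        @Override
        public String toString() {
            return value;
        }
    }
    ```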

    Read the article

  • How (and if) to write a single-consumer queue using the task parallel library?

    - by Eric
    I've heard a bunch of podcasts recently about the TPL in .NET 4.0. Most of them describe background activities like downloading images or doing a computation, using tasks so that the work doesn't interfere with a GUI thread. Most of the code I work on has more of a multiple-producer / single-consumer flavor, where work items from multiple sources must be queued and then processed in order. One example would be logging, where log lines from multiple threads are sequentialized into a single queue for eventual writing to a file or database. All the records from any single source must remain in order, and records from the same moment in time should be "close" to each other in the eventual output. So multiple threads or tasks or whatever are all invoking a queuer: lock( _queue ) // or use a lock-free queue! { _queue.enqueue( some_work ); _queueSemaphore.Release(); } And a dedicated worker thread processes the queue: while( _queueSemaphore.WaitOne() ) { lock( _queue ) { some_work = _queue.dequeue(); } deal_with( some_work ); } It's always seemed reasonable to dedicate a worker thread for the consumer side of these tasks. Should I write future programs using some construct from the TPL instead? Which one? Why?
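    One possible shape for this on the .NET 4.0 TPL, sketched with plain strings as the work items: BlockingCollection replaces the hand-rolled lock plus semaphore, and a single long-running task plays the dedicated worker, preserving enqueue order.

    ```csharp
    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    class LogPump
    {
        static void Main()
        {
            var queue = new BlockingCollection<string>();

            // The single consumer: GetConsumingEnumerable blocks until items arrive
            // and completes once CompleteAdding has been called and the queue drains.
            var consumer = Task.Factory.StartNew(() =>
            {
                foreach (var record in queue.GetConsumingEnumerable())
                    Console.WriteLine(record);             // stand-in for deal_with()
            }, TaskCreationOptions.LongRunning);

            // Producers on any thread simply Add; no explicit locking needed.
            queue.Add("first record");
            queue.Add("second record");

            queue.CompleteAdding();   // signal "no more work"
            consumer.Wait();
        }
    }
    ```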

    Read the article

  • How can I write output to a new external file with Perl?

    - by Structure
    Maybe I am searching with the wrong keywords or this is a very basic question but I cannot find the answer to my question. I am having trouble writing the result of my whois command to a new external file. My code is below. It takes $readfilename, which is a file name which has a list of IPs, and $writefilename, which is the destination file for output. Both are user-specified. For my tests, $readfilename contains three IP addresses on three separate lines so there should be three separate whois results in the user specified output file. if ($readfilename) { open (my $inputfile, "<", $readfilename) || die "\n Cannot open the specified file. Please double check your file name and path.\n\n"; open (my $outputfile, ">", $writefilename) || die "\n Could not create write file.\n\n"; while (<$inputfile>) { my $iplookupresult = `whois $_`; print $outputfile $iplookupresult; } close $outputfile; close $inputfile; } I can execute this script and end up with a new external file, but over half of the file has binary garbage data (running on CentOS) and only one (or a portion of one) of the whois lookups is readable. I have no idea how half of my file is ending up binary... but my approach must be incorrect. Is there a better way to achieve the same result?
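    One thing worth ruling out first (only a guess at the cause): each line read from the IP file still ends in a newline, and possibly a carriage return if the list came from Windows, so the shell command becomes whois "1.2.3.4\r". Trimming the line before interpolating it into the backticks is a cheap test — this is the question's own loop with that change:

    ```perl
    while (my $ip = <$inputfile>) {
        chomp $ip;                 # drop the trailing newline
        $ip =~ s/\r$//;            # and a carriage return, if the file came from Windows
        next unless $ip =~ /\S/;   # skip blank lines

        my $iplookupresult = `whois $ip`;
        print $outputfile $iplookupresult;
    }
    ```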

    Read the article

  • Guidelines for creating a programming-language enjoyable to write programs in?

    - by sub
    I'm currently working on the topic of programming-languages and interpreter-design. I have already created several programming languages but couldn't reach my goal so far: Create a programming-language which focuses on giving the programmer a good feeling when writing code in it. It should just be fun and/or interesting and in no case annoying to write something in it. I get this feeling when writing code in Python. I sometimes get the opposite with PHP and in rare cases when having to reinvent some wheel in C++. So I've tried to figure out some syntactical features to make programming in my new language fun, but I just can't find any. Which concrete features, maybe mainly in terms of syntax, do/could make programming in a language fun? Examples: I find it enjoyable to program in Ruby because of it's use of code blocks. It would be nice if you could include exactly one example in your answer Those features do not have to already exist in any language! I'm doing this because I have experienced extreme rises in (my own) productivity when programming in languages I love (because of particular features).

    Read the article

  • Is this right in the use case of the exec method of child_process? Is there a way to copy the environment along with the required modules too?

    - by L2L2L
    I'm learning node. I am using child_process to move data to another script to be executed. But it seem that it does not copy the hold environment or I could be doing something wrong. To copy the hold environment --require modules too-- or is this when I use spawn, I'm not so clear or understanding spawn exec and execfile --although execfile is like what I'm doing at the bottom, but with exec... right?-- And I would just love to have some clarity on this matter. Please anyone? Thank you. parent.js - "use strict"; var fs, path, _err; fs = require("fs"), path = require("path"), _err = require("./err.js"); var url; url= process.argv[1]; var dirname, locate_r; dirname = path.dirname(url); locate_r = dirname + "/" + "test.json";//path.join(dirname,"/", "test.json"); var flag, str; flag = "r", str = ""; fs.open(locate_r, flag, function opd(error, fd){ if (error){_err(error, function(){ fs.close(fd,function(){ process.stderr.write("\n" + "In Finally Block: File Closed!" + "\n");});})} var readBuff, buffOffset, buffLength, filePos; readBuff = new Buffer(15), buffOffset = 0, buffLength = readBuff.length, filePos = 0; fs.read(fd, readBuff, buffOffset, buffLength, filePos, function rd(error, readBytes){ error&&_err(error, fd); str = readBuff.toString("utf8"); process.env.str = str; process.stdout.write("str: "+ str + "\n" + "readBuff: " + readBuff + "\n"); fs.close(fd, function(){process.stdout.write( "Read and Closed File." + "\n" )}); //write(str); //run test for process.exec** var env, varName, envCopy, exec; env = process.env, varName, envCopy = {}, exec = require("child_process").exec; for(varName in env){ envCopy[varName] = env[varName]; } process.env.fs = fs, process.env.path = path, process.env.dirname = dirname, process.env.flag = flag, process.env.str = str, process.env._err = _err; process.env.fd = fd; exec("node child.js", env, function(error, stdout, stderr){ if(error){throw (new Error(error));} }); }); }); child.js - "use strict"; var fs, path, _err; fs = require("fs"), path = require("path"), _err = require("./err.js"); var fd, fs, flag, path, dirname, str, _err; fd = process.env.fd, //fs = process.env.fs, //path = process.env.path, dirname = process.env.dirname, flag = process.env.flag, str = process.env.str, _err = process.env._err; var url; url= process.argv[1]; var locate_r; dirname = path.dirname(url); locate_r = dirname + "/" + "test.json";//path.join(dirname,"/", "test.json"); //function write(str){ var locate_a; locate_a = dirname + "/" + "test.json"; //path.join(dirname,"/", "test.json"); flag = "a"; fs.open(locate_a, flag, function opd(error, fd){ error&&_err(error, fs, fd); var writeBuff, buffPos, buffLgh, filePs; writeBuff = new Buffer(str), process.stdout.write( "writeBuff: " + writeBuff + "\n" + "str: " + str + "\n"), buffPos = 0, buffLgh = writeBuff.length, filePs = buffLgh;//null; fs.write(fd, writeBuff, buffPos, buffLgh, filePs-3, function(error, written){ error&&_err(error, function(){ fs.close(fd,function(){ process.stderr.write("\n" + "In Finally Block: File Closed!" + "\n"); }); }); fs.close(fd, function(){process.stdout.write( "Written and Closed File." + "\n");}); }); }); //} err.js - "use strict"; var fs; fs = require("fs"); module.exports = function _err(err, scp, cd){ try{ throw (new Error(err)); }catch(e){ process.stderr.write(e + "\n"); }finally{ cd; } }
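    Two small observations, sketched below rather than a full review: exec's second argument is an options object, so the environment has to be passed as { env: ... }; and environment variables can only carry strings, so module objects such as fs and path, and functions such as _err, will not survive the trip — child.js has to require() those for itself. The string value below is a placeholder.

    ```javascript
    "use strict";
    var exec = require("child_process").exec;

    // Copy the parent's environment and add plain-string data only.
    var childEnv = {};
    Object.keys(process.env).forEach(function (key) {
        childEnv[key] = process.env[key];
    });
    childEnv.str = "some string read earlier";   // strings are fine; objects are not

    exec("node child.js", { env: childEnv }, function (error, stdout, stderr) {
        if (error) { throw error; }
        process.stdout.write(stdout);
        process.stderr.write(stderr);
    });
    ```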

    Read the article

  • TDD - A question about the approach

    - by k25
    I have a question about TDD. I have always seen the recommendation that we should first write unit tests and then start writing code. But I feel that going the other way is much more comfortable (for me) - write code and then the unit tests, because I feel we have much more clarity after we have written the actual code. If I write the code and then the tests, I may have to change my code a little bit to make it testable, even if I concentrate much on creating a testable design. On the other hand, if I write the tests and then the code, the tests will change pretty frequently as and when the code shapes up. My questions are: 1) As I see a lot of recommendations to start writing tests and then move on to coding, what are the disadvantages if I do it the other way - write code and then the unit tests? 2) Could you please point me to some links that discuss about this or recommend some books (TDD)?

    Read the article

  • Skip the first RenderTarget when writing to MRT with Opaque blending

    - by cubrman
    I am writing to three render targets and want to know how to tell the GPU not to write to the first RT. When you write a shader you can simply output less data than you have RTs (like outputting a single float4 when writing to three RTs) and only the first RTs will be affected, but you cannot specify that this data go anywhere else but to COLOR0, then 1, etc. Is there a way to write to several RTs but skip the first target? If I output zeroes, the data in the target will become zeroes, but I need it to remain untouched in the first target and only change in the specified ones. The reason I need this is to prevent data loss when calling SetRenderTarget() with DiscardContents RTs. I write to all the RTs at one point and need to write to only the specified ones afterwards. It must be the first texture, as I have a depth buffer linked to it (XNA 4.0). Thanks.

    Read the article

  • Why does fprintf start printing out of order or not at all?

    - by Steve Melvin
    This code should take an integer, create pipes, spawn two children, wait until they are dead, and start all over again. However, around the third time around the loop I lose my prompt to enter a number and it no longer prints the number I've entered. Any ideas? #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <errno.h> #define WRITE 1 #define READ 0 int main (int argc, const char * argv[]) { //Pipe file-descriptor array unsigned int isChildA = 0; int pipeA[2]; int pipeB[2]; int num = 0; while(1){ fprintf(stderr,"Enter an integer: "); scanf("%i", &num); if(num == 0){ fprintf(stderr,"You entered zero, exiting...\n"); exit(0); } //Open Pipes if(pipe(pipeA) < 0){ fprintf(stderr,"Could not create pipe A.\n"); exit(1); } if(pipe(pipeB) < 0){ fprintf(stderr,"Could not create pipe B.\n"); exit(1); } fprintf(stderr,"Value read: %i \n", num); fprintf(stderr,"Parent PID: %i\n", getpid()); pid_t procID = fork(); switch (procID) { case -1: fprintf(stderr,"Fork error, quitting...\n"); exit(1); break; case 0: isChildA = 1; break; default: procID = fork(); if (procID<0) { fprintf(stderr,"Fork error, quitting...\n"); exit(1); } else if(procID == 0){ isChildA = 0; } else { write(pipeA[WRITE], &num, sizeof(int)); close(pipeA[WRITE]); close(pipeA[READ]); close(pipeB[WRITE]); close(pipeB[READ]); pid_t pid; while (pid = waitpid(-1, NULL, 0)) { if (errno == ECHILD) { break; } } } break; } if (procID == 0) { //We're a child, do kid-stuff. ssize_t bytesRead = 0; int response; while (1) { while (bytesRead == 0) { bytesRead = read((isChildA?pipeA[READ]:pipeB[READ]), &response, sizeof(int)); } if (response < 2) { //Kill other child and self fprintf(stderr, "Terminating PROCID: %i\n", getpid()); write((isChildA?pipeB[WRITE]:pipeA[WRITE]), &response, sizeof(int)); close(pipeA[WRITE]); close(pipeA[READ]); close(pipeB[WRITE]); close(pipeB[READ]); return 0; } else if(!(response%2)){ //Even response/=2; fprintf(stderr,"PROCID: %i, VALUE: %i\n", getpid(), response); write((isChildA?pipeB[WRITE]:pipeA[WRITE]), &response, sizeof(int)); bytesRead = 0; } else { //Odd response*=3; response++; fprintf(stderr,"PROCID: %i, VALUE: %i\n", getpid(), response); write((isChildA?pipeB[WRITE]:pipeA[WRITE]), &response, sizeof(int)); bytesRead = 0; } } } } return 0; } This is the output I am getting... 
bash-3.00$ ./proj2 Enter an integer: 101 Value read: 101 Parent PID: 9379 PROCID: 9380, VALUE: 304 PROCID: 9381, VALUE: 152 PROCID: 9380, VALUE: 76 PROCID: 9381, VALUE: 38 PROCID: 9380, VALUE: 19 PROCID: 9381, VALUE: 58 PROCID: 9380, VALUE: 29 PROCID: 9381, VALUE: 88 PROCID: 9380, VALUE: 44 PROCID: 9381, VALUE: 22 PROCID: 9380, VALUE: 11 PROCID: 9381, VALUE: 34 PROCID: 9380, VALUE: 17 PROCID: 9381, VALUE: 52 PROCID: 9380, VALUE: 26 PROCID: 9381, VALUE: 13 PROCID: 9380, VALUE: 40 PROCID: 9381, VALUE: 20 PROCID: 9380, VALUE: 10 PROCID: 9381, VALUE: 5 PROCID: 9380, VALUE: 16 PROCID: 9381, VALUE: 8 PROCID: 9380, VALUE: 4 PROCID: 9381, VALUE: 2 PROCID: 9380, VALUE: 1 Terminating PROCID: 9381 Terminating PROCID: 9380 Enter an integer: 102 Value read: 102 Parent PID: 9379 PROCID: 9386, VALUE: 51 PROCID: 9387, VALUE: 154 PROCID: 9386, VALUE: 77 PROCID: 9387, VALUE: 232 PROCID: 9386, VALUE: 116 PROCID: 9387, VALUE: 58 PROCID: 9386, VALUE: 29 PROCID: 9387, VALUE: 88 PROCID: 9386, VALUE: 44 PROCID: 9387, VALUE: 22 PROCID: 9386, VALUE: 11 PROCID: 9387, VALUE: 34 PROCID: 9386, VALUE: 17 PROCID: 9387, VALUE: 52 PROCID: 9386, VALUE: 26 PROCID: 9387, VALUE: 13 PROCID: 9386, VALUE: 40 PROCID: 9387, VALUE: 20 PROCID: 9386, VALUE: 10 PROCID: 9387, VALUE: 5 PROCID: 9386, VALUE: 16 PROCID: 9387, VALUE: 8 PROCID: 9386, VALUE: 4 PROCID: 9387, VALUE: 2 PROCID: 9386, VALUE: 1 Terminating PROCID: 9387 Terminating PROCID: 9386 Enter an integer: 104 Value read: 104 Parent PID: 9379 Enter an integer: PROCID: 9388, VALUE: 52 PROCID: 9389, VALUE: 26 PROCID: 9388, VALUE: 13 PROCID: 9389, VALUE: 40 PROCID: 9388, VALUE: 20 PROCID: 9389, VALUE: 10 PROCID: 9388, VALUE: 5 PROCID: 9389, VALUE: 16 PROCID: 9388, VALUE: 8 PROCID: 9389, VALUE: 4 PROCID: 9388, VALUE: 2 PROCID: 9389, VALUE: 1 Terminating PROCID: 9388 Terminating PROCID: 9389 105 Value read: 105 Parent PID: 9379 Enter an integer: PROCID: 9395, VALUE: 316 PROCID: 9396, VALUE: 158 PROCID: 9395, VALUE: 79 PROCID: 9396, VALUE: 238 PROCID: 9395, VALUE: 119 PROCID: 9396, VALUE: 358 PROCID: 9395, VALUE: 179 PROCID: 9396, VALUE: 538 PROCID: 9395, VALUE: 269 PROCID: 9396, VALUE: 808 PROCID: 9395, VALUE: 404 PROCID: 9396, VALUE: 202 PROCID: 9395, VALUE: 101 PROCID: 9396, VALUE: 304 PROCID: 9395, VALUE: 152 PROCID: 9396, VALUE: 76 PROCID: 9395, VALUE: 38 PROCID: 9396, VALUE: 19 PROCID: 9395, VALUE: 58 PROCID: 9396, VALUE: 29 PROCID: 9395, VALUE: 88 PROCID: 9396, VALUE: 44 PROCID: 9395, VALUE: 22 PROCID: 9396, VALUE: 11 PROCID: 9395, VALUE: 34 PROCID: 9396, VALUE: 17 PROCID: 9395, VALUE: 52 PROCID: 9396, VALUE: 26 PROCID: 9395, VALUE: 13 PROCID: 9396, VALUE: 40 PROCID: 9395, VALUE: 20 PROCID: 9396, VALUE: 10 PROCID: 9395, VALUE: 5 PROCID: 9396, VALUE: 16 PROCID: 9395, VALUE: 8 PROCID: 9396, VALUE: 4 PROCID: 9395, VALUE: 2 PROCID: 9396, VALUE: 1 Terminating PROCID: 9395 Terminating PROCID: 9396 105 Value read: 105 Parent PID: 9379 Enter an integer: PROCID: 9397, VALUE: 316 PROCID: 9398, VALUE: 158 PROCID: 9397, VALUE: 79 PROCID: 9398, VALUE: 238 PROCID: 9397, VALUE: 119 PROCID: 9398, VALUE: 358 PROCID: 9397, VALUE: 179 PROCID: 9398, VALUE: 538 PROCID: 9397, VALUE: 269 PROCID: 9398, VALUE: 808 PROCID: 9397, VALUE: 404 PROCID: 9398, VALUE: 202 PROCID: 9397, VALUE: 101 PROCID: 9398, VALUE: 304 PROCID: 9397, VALUE: 152 PROCID: 9398, VALUE: 76 PROCID: 9397, VALUE: 38 PROCID: 9398, VALUE: 19 PROCID: 9397, VALUE: 58 PROCID: 9398, VALUE: 29 PROCID: 9397, VALUE: 88 PROCID: 9398, VALUE: 44 PROCID: 9397, VALUE: 22 PROCID: 9398, VALUE: 11 PROCID: 9397, VALUE: 34 PROCID: 9398, VALUE: 17 
PROCID: 9397, VALUE: 52 PROCID: 9398, VALUE: 26 PROCID: 9397, VALUE: 13 PROCID: 9398, VALUE: 40 PROCID: 9397, VALUE: 20 PROCID: 9398, VALUE: 10 PROCID: 9397, VALUE: 5 PROCID: 9398, VALUE: 16 PROCID: 9397, VALUE: 8 PROCID: 9398, VALUE: 4 PROCID: 9397, VALUE: 2 PROCID: 9398, VALUE: 1 Terminating PROCID: 9397 Terminating PROCID: 9398 106 Value read: 106 Parent PID: 9379 Enter an integer: PROCID: 9399, VALUE: 53 PROCID: 9400, VALUE: 160 PROCID: 9399, VALUE: 80 PROCID: 9400, VALUE: 40 PROCID: 9399, VALUE: 20 PROCID: 9400, VALUE: 10 PROCID: 9399, VALUE: 5 PROCID: 9400, VALUE: 16 PROCID: 9399, VALUE: 8 PROCID: 9400, VALUE: 4 PROCID: 9399, VALUE: 2 PROCID: 9400, VALUE: 1 Terminating PROCID: 9399 Terminating PROCID: 9400 ^C Another thing that's strange, when ran from within XCode it behaves normally. However, when ran from bash on Solaris or OSX it acts up.

    Read the article

  • Windows Azure VMs - New "Stopped" VM Options Provide Cost-effective Flexibility for On-Demand Workloads

    - by KeithMayer
    Originally posted on: http://geekswithblogs.net/KeithMayer/archive/2013/06/22/windows-azure-vms---new-stopped-vm-options-provide-cost-effective.aspxDidn’t make it to TechEd this year? Don’t worry!  This month, we’ll be releasing a new article series that highlights the Best of TechEd announcements and technical information for IT Pros.  Today’s article focuses on a new, much-heralded enhancement to Windows Azure Infrastructure Services to make it more cost-effective for spinning VMs up and down on-demand on the Windows Azure cloud platform. NEW! VMs that are shutdown from the Windows Azure Management Portal will no longer continue to accumulate compute charges while stopped! Previous to this enhancement being available, the Azure platform maintained fabric resource reservations for VMs, even in a shutdown state, to ensure consistent resource availability when starting those VMs in the future.  And, this meant that VMs had to be exported and completely deprovisioned when not in use to avoid compute charges. In this article, I'll provide more details on the scenarios that this enhancement best fits, and I'll also review the new options and considerations that we now have for performing safe shutdowns of Windows Azure VMs. Which scenarios does the new enhancement best fit? Being able to easily shutdown VMs from the Windows Azure Management Portal without continued compute charges is a great enhancement for certain cloud use cases, such as: On-demand dev/test/lab environments - Freely start and stop lab VMs so that they are only accumulating compute charges when being actively used.  "Bursting" load-balanced web applications - Provision a number of load-balanced VMs, but keep the minimum number of VMs running to support "normal" loads. Easily start-up the remaining VMs only when needed to support peak loads. Disaster Recovery - Start-up "cold" VMs when needed to recover from disaster scenarios. BUT ... there is a consideration to keep in mind when using the Windows Azure Management Portal to shutdown VMs: although performing a VM shutdown via the Windows Azure Management Portal causes that VM to no longer accumulate compute charges, it also deallocates the VM from fabric resources to which it was previously assigned.  These fabric resources include compute resources such as virtual CPU cores and memory, as well as network resources, such as IP addresses.  This means that when the VM is later started after being shutdown from the portal, the VM could be assigned a different IP address or placed on a different compute node within the fabric. In some cases, you may want to shutdown VMs using the old approach, where fabric resource assignments are maintained while the VM is in a shutdown state.  Specifically, you may wish to do this when temporarily shutting down or restarting a "7x24" VM as part of a maintenance activity.  Good news - you can still revert back to the old VM shutdown behavior when necessary by using the alternate VM shutdown approaches listed below.  Let's walk through each approach for performing a VM Shutdown action on Windows Azure so that we can understand the benefits and considerations of each... How many ways can I shutdown a VM? 
In Windows Azure Infrastructure Services, there's three general ways that can be used to safely shutdown VMs: Shutdown VM via Windows Azure Management Portal Shutdown Guest Operating System inside the VM Stop VM via Windows PowerShell using Windows Azure PowerShell Module Although each of these options performs a safe shutdown of the guest operation system and the VM itself, each option handles the VM shutdown end state differently. Shutdown VM via Windows Azure Management Portal When clicking the Shutdown button at the bottom of the Virtual Machines page in the Windows Azure Management Portal, the VM is safely shutdown and "deallocated" from fabric resources.  Shutdown button on Virtual Machines page in Windows Azure Management Portal  When the shutdown process completes, the VM will be shown on the Virtual Machines page with a "Stopped ( Deallocated )" status as shown in the figure below. Virtual Machine in a "Stopped (Deallocated)" Status "Deallocated" means that the VM configuration is no longer being actively associated with fabric resources, such as virtual CPUs, memory and networks. In this state, the VM will not continue to allocate compute charges, but since fabric resources are deallocated, the VM could receive a different internal IP address ( called "Dynamic IPs" or "DIPs" in Windows Azure ) the next time it is started.  TIP: If you are leveraging this shutdown option and consistency of DIPs is important to applications running inside your VMs, you should consider using virtual networks with your VMs.  Virtual networks permit you to assign a specific IP Address Space for use with VMs that are assigned to that virtual network.  As long as you start VMs in the same order in which they were originally provisioned, each VM should be reassigned the same DIP that it was previously using. What about consistency of External IP Addresses? Great question! External IP addresses ( called "Virtual IPs" or "VIPs" in Windows Azure ) are associated with the cloud service in which one or more Windows Azure VMs are running.  As long as at least 1 VM inside a cloud service remains in a "Running" state, the VIP assigned to a cloud service will be preserved.  If all VMs inside a cloud service are in a "Stopped ( Deallocated )" status, then the cloud service may receive a different VIP when VMs are next restarted. TIP: If consistency of VIPs is important for the cloud services in which you are running VMs, consider keeping one VM inside each cloud service in the alternate VM shutdown state listed below to preserve the VIP associated with the cloud service. Shutdown Guest Operating System inside the VM When performing a Guest OS shutdown or restart ( ie., a shutdown or restart operation initiated from the Guest OS running inside the VM ), the VM configuration will not be deallocated from fabric resources. In the figure below, the VM has been shutdown from within the Guest OS and is shown with a "Stopped" VM status rather than the "Stopped ( Deallocated )" VM status that was shown in the previous figure. Note that it may require a few minutes for the Windows Azure Management Portal to reflect that the VM is in a "Stopped" state in this scenario, because we are performing an OS shutdown inside the VM rather than through an Azure management endpoint. Virtual Machine in a "Stopped" Status VMs shown in a "Stopped" status will continue to accumulate compute charges, because fabric resources are still being reserved for these VMs.  
However, this also means that DIPs and VIPs are preserved for VMs in this state, so you don't have to worry about VMs and cloud services getting different IP addresses when they are started in the future. Stop VM via Windows PowerShell In the latest version of the Windows Azure PowerShell Module, a new -StayProvisioned parameter has been added to the Stop-AzureVM cmdlet. This new parameter provides the flexibility to choose the VM configuration end result when stopping VMs using PowerShell: When running the Stop-AzureVM cmdlet without the -StayProvisioned parameter specified, the VM will be safely stopped and deallocated; that is, the VM will be left in a "Stopped ( Deallocated )" status just like the end result when a VM Shutdown operation is performed via the Windows Azure Management Portal.  When running the Stop-AzureVM cmdlet with the -StayProvisioned parameter specified, the VM will be safely stopped but fabric resource reservations will be preserved; that is the VM will be left in a "Stopped" status just like the end result when performing a Guest OS shutdown operation. So, with PowerShell, you can choose how Windows Azure should handle VM configuration and fabric resource reservations when stopping VMs on a case-by-case basis. TIP: It's important to note that the -StayProvisioned parameter is only available in the latest version of the Windows Azure PowerShell Module.  So, if you've previously downloaded this module, be sure to download and install the latest version to get this new functionality. Want to Learn More about Windows Azure Infrastructure Services? To learn more about Windows Azure Infrastructure Services, be sure to check-out these additional FREE resources: Become our next "Early Expert"! Complete the Early Experts "Cloud Quest" and build a multi-VM lab network in the cloud for FREE!  Build some cool scenarios! Check out our list of over 20+ Step-by-Step Lab Guides based on key scenarios that IT Pros are implementing on Windows Azure Infrastructure Services TODAY!  Looking forward to seeing you in the Cloud! - Keith Build Your Lab! Download Windows Server 2012 Don’t Have a Lab? Build Your Lab in the Cloud with Windows Azure Virtual Machines Want to Get Certified? Join our Windows Server 2012 "Early Experts" Study Group
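    The two shutdown behaviours the post describes look like this from the Windows Azure PowerShell Module (the service and VM names below are placeholders):

    ```powershell
    # Requires the current Windows Azure PowerShell Module and an imported subscription.
    Import-Module Azure

    # Stop and deallocate: compute charges stop, but the VM may get a different
    # internal IP (DIP) when it is started again.
    Stop-AzureVM -ServiceName "MyCloudService" -Name "MyVM"

    # Stop but keep the fabric reservation: billing continues, addresses are preserved.
    Stop-AzureVM -ServiceName "MyCloudService" -Name "MyVM" -StayProvisioned
    ```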

    Read the article

  • How to write a cctor and op= for a factory class with ptr to abstract member field?

    - by Kache4
    I'm extracting files from zip and rar archives into raw buffers. I created the following to wrap minizip and unrarlib: Archive.hpp #include "ArchiveBase.hpp" #include "ArchiveDerived.hpp" class Archive { public: Archive(string path) { /* logic here to determine type */ switch(type) { case RAR: archive_ = new ArchiveRar(path); break; case ZIP: archive_ = new ArchiveZip(path); break; case UNKNOWN_ARCHIVE: throw; break; } } Archive(Archive& other) { archive_ = // how do I copy an abstract class? } ~Archive() { delete archive_; } void passThrough(ArchiveBase::Data& data) { archive_->passThrough(data); } Archive& operator = (Archive& other) { if (this == &other) return *this; ArchiveBase* newArchive = // can't instantiate.... delete archive_; archive_ = newArchive; return *this; } private: ArchiveBase* archive_; } ArchiveBase.hpp class ArchiveBase { public: // Is there any way to put this struct in Archive instead, // so that outside classes instantiating one could use // Archive::Data instead of ArchiveBase::Data? struct Data { int field; }; virtual void passThrough(Data& data) = 0; /* more methods */ } ArchiveDerived.hpp "Derived" being "Zip" or "Rar" #include "ArchiveBase.hpp" class ArchiveDerived : public ArchiveBase { public: ArchiveDerived(string path); void passThrough(ArchiveBase::Data& data); private: /* fields needed by minizip/unrarlib */ // example zip: unzFile zipFile_; // example rar: RARHANDLE rarFile_; } ArchiveDerived.cpp #include "ArchiveDerived.hpp" ArchiveDerived::ArchiveDerived(string path) { //implement } ArchiveDerived::passThrough(ArchiveBase::Data& data) { //implement } Somebody had suggested I use this design so that I could do: Archive archiveFile(pathToZipOrRar); archiveFile.passThrough(extractParams); // yay polymorphism! How do I write a cctor for Archive? What about op= for Archive? What can I do about "renaming" ArchiveBase::Data to Archive::Data? (Both minizip and unrarlib use such structs for input and output. Data is generic for Zip & Rar and later is used to create the respective library's struct.)
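    The usual escape hatch here — shown as a compilable but stripped-down sketch reusing the question's names — is a virtual clone() on ArchiveBase: each concrete archive knows how to copy itself, and Archive's copy constructor and operator= just delegate to it. The same call answers "how do I copy an abstract class" in the copy constructor.

    ```cpp
    #include <string>

    class ArchiveBase {
    public:
        virtual ~ArchiveBase() {}
        virtual ArchiveBase* clone() const = 0;   // polymorphic copy
    };

    class ArchiveZip : public ArchiveBase {
    public:
        explicit ArchiveZip(const std::string& path) : path_(path) {}
        virtual ArchiveZip* clone() const { return new ArchiveZip(*this); }
    private:
        std::string path_;
    };

    class Archive {
    public:
        explicit Archive(const std::string& path) : archive_(new ArchiveZip(path)) {}
        Archive(const Archive& other) : archive_(other.archive_->clone()) {}
        Archive& operator=(const Archive& other) {
            if (this != &other) {
                ArchiveBase* copy = other.archive_->clone();  // copy first, for exception safety
                delete archive_;
                archive_ = copy;
            }
            return *this;
        }
        ~Archive() { delete archive_; }
    private:
        ArchiveBase* archive_;
    };

    int main() {
        Archive a("file.zip");
        Archive b(a);        // copy constructor via clone()
        b = a;               // assignment via clone()
        return 0;
    }
    ```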

    Read the article

  • jQuery: Trying to write my first plugin, how can I pass a selector to my func?

    - by Jannis
    Hi, I am trying to start writing some simple jQuery plugins and for my first try if figured I would write something that does nothing else but apply .first & .last classes to passed elements. so for instance i want to ultimately be able to: $('li').firstnlast(); So that in this html structure: <ul> <li></li> <li></li> <li></li> <li></li> </ul> <ul> <li></li> <li></li> </ul> it would create the following: <ul> <li class="first"></li> <li></li> <li></li> <li class="last"></li> </ul> <ul> <li class="first"></li> <li class="last"></li> </ul> What i have written so far in my plugin is this: (function($) { $.fn.extend({ firstnlast: function() { $(this) .first() .addClass('first') .end() .last() .addClass('last'); } }); })(jQuery); However the above only returns this: <ul> <li class="first"></li> <li></li> <li></li> <li></li> </ul> <ul> <li></li> <li class="last"></li> </ul> It seems that passing only the li into the function it will look at all li elements on the entire page and then apply the classes to it. What I need to figure out is how i can basically make the function 'context aware' so that passing the above li into the function would apply the classes to those lis inside of the parent ul, not to the first and last li on the entire page. Any hints, ideas and help would be much appreciated. Thanks for reading, Jannis
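    One container-aware variation (a sketch; .first() and .last() need jQuery 1.4+): group the matched elements by their parent node, then tag the first and last element of each group rather than of the whole selection. Calling $('li').firstnlast() on the sample markup then marks each list separately.

    ```javascript
    (function ($) {
        $.fn.firstnlast = function () {
            var matched = this;
            var parents = [];

            // Collect each distinct parent of the matched elements.
            matched.each(function () {
                if ($.inArray(this.parentNode, parents) === -1) {
                    parents.push(this.parentNode);
                }
            });

            // Within each parent, mark the first and last matched element.
            $.each(parents, function (i, parent) {
                var group = matched.filter(function () {
                    return this.parentNode === parent;
                });
                group.first().addClass('first');
                group.last().addClass('last');
            });

            return matched;   // keep the jQuery chain intact
        };
    })(jQuery);
    ```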

    Read the article
