Search Results

Search found 2897 results on 116 pages for 'retro fun'.

  • Mercurial "hg clone" on Windows via ssh with plink issue

    - by Kyle Tolle
    I have a Windows Server 2008 machine (IIS7) that has CopSSH set up on it. To connect to it, I have a Windows 7 machine with Mercurial 1.5.1 (and TortoiseHg) installed. I can connect to the server using PuTTY with a non-standard ssh port and a .ppk file just fine, so I know the server can be SSH'd into.

    Next, I wanted to use the CLI to connect via hg clone to get a private repo. I've seen elsewhere that you need to have ssh configured in your mercurial.ini file, so my mercurial.ini has this line:

        ssh = plink.exe -ssh -C -l username -P #### -i "C:/Program Files/PuTTY/Key Files/KyleKey.ppk"

    Note: username is filled in with the username I set up via CopSSH, and #### is filled in with the non-standard ssh port I've defined for CopSSH.

    I try to do the command hg clone ssh://inthom.com but I get this error:

        remote: bash: inthom.com: command not found
        abort: no suitable response from remote hg!

    It looks like hg or plink parses the hostname such that it thinks inthom.com is a command instead of the server to ssh to. That's really odd. Next, I tried to connect with just plink via plink -P #### ssh://inthom.com; I am then prompted for my username and password, I enter them both, and then I get this error:

        bash: ssh://inthom.com: No such file or directory

    So now it looks like plink doesn't parse the hostname correctly either. I fiddled around for a while trying to figure out how to call hg clone with an empty ssh:// host field, and eventually figured out that this command lets me reach the server and clone a test repo on the inthom.com server:

        hg clone ssh://!/Repos/test

    ! is the character I've found that lets me leave the hostname blank but still specify the repo folder to clone.

    What I really don't understand is how plink knows what server to ssh to at all: neither my mercurial.ini nor the command specifies a server. None of the hg clone examples I've seen have a ! character. They all use an address, which makes sense, so you can connect to any repo via ssh that you want to clone. My only guess is that it somehow defaults to the last server I used PuTTY to SSH to, but I SSH'd into another server and then tried to use plink to get to it, and plink still defaults to inthom.com (verified with the -v arg to plink). So I am at a loss as to how plink gets this server value at all. For "fun", I tried using TortoiseHg and can only clone a repo when I use ssh://!/Repos/test as the Source.

    Now, you can see that, since plink doesn't parse the hostname correctly, I had to specify the port number and username in the mercurial.ini file instead of in the hostname, like username@inthom.com:####, as you'd expect to. Trying to figure this out at first drove me insane, because I would get errors that the host couldn't be reached, which I knew shouldn't be the case.

    My question is: how can I configure my setup so that ssh://username@inthom.com:####/Repos/test is parsed correctly as the username, hostname, port number, and repo to copy? Is it something wrong with the version of plink that I'm using, or is there some setting I may have messed up? If it is plink's fault, is there an alternative tool I can use? I'm going to try to get my friend set up to connect to this same repo, so I'd like to have a clean solution instead of this ! business, especially when I have no idea how plink gets this default server, so I'm not sure whether he'd even be able to get to inthom.com correctly.

    PS: I've had to use a ton of different tutorials to even get to this stage, so I haven't tried pushing any changes to the server yet. Hopefully I'll get this figured out and then I can try pushing changes to the repo.
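
    For reference, the arrangement usually suggested for CopSSH with TortoiseHg looks roughly like this; a sketch only, assuming the default TortoiseHg install path (TortoisePlink.exe ships with TortoiseHg), with the key path and port kept as placeholders:

        [ui]
        ssh = "C:/Program Files/TortoiseHg/TortoisePlink.exe" -ssh -C -i "C:/Program Files/PuTTY/Key Files/KyleKey.ppk"

        hg clone ssh://username@inthom.com:####/Repos/test

    The username and port then live in the URL rather than in the ssh command line.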

  • Is the Unix Philosophy still relevant in the Web 2.0 world?

    - by David Titarenco
    Introduction

    Hello, let me give you some background before I begin. I started programming when I was 5 or 6 on my dad's PSION II (some primitive BASIC-like language), then I learned more and more, eventually inching my way up to C, C++, Java, PHP, JS, etc. I think I'm a pretty decent coder, and I think most people would agree. I'm not a complete social recluse, but I do stuff like write a virtual machine for fun. I've never taken a computer course in college because I've been in and out for the past couple of years and have only been taking core classes; never having been particularly amazing at school, perhaps I'm missing some basic tenet that most people learn in CS101. I'm currently reading Coders at Work, and this question is based on some ideas I read in there.

    A Brief (Fictionalized) Example

    So a certain sunny day I get an idea. I hire a designer and hammer away at some C/C++ code for a couple of months, soon thereafter releasing silvr.com, a website that transmutes lead into silver. Yep, I started my very own start-up and even gave it a clever web 2.0 name with a vowel missing. Mom and dad are proud. I come up with some numbers I should be seeing after 1, 2, 3, 6, 9, and 12 months and set sail. Obviously, my transmuting server isn't perfect: sometimes it segfaults, sometimes it leaks memory. I fix it and keep truckin'; after all, gdb is my best friend. Eventually, I'm at a position where a very small community of people are happily transmuting lead into silver on a semi-regular basis, but they want to let their friends on MySpace know how many grams of lead they transmuted today. And they want to post images of their lead and silver nuggets on flickr. I'm losing out on potential traffic unless I let them log in with their Yahoo, Google, and Facebook accounts. They want webcam support and live cock fighting, merry-go-rounds and Jabberwockies. All these things seem necessary.

    The Aftermath

    Of course, I have to re-write the transmuting server! After all, I've been losing money all these months. I need OAuth libraries and OpenID libraries, JSON support, and the only stable Jabberwocky API is for Java. C++ isn't even an option anymore. I'm just one guy! The Java binary just grows and grows, since I need some legacy Apache include for the JSON library and some antiquated Sun dependency for OAuth support. Then I pick up a book like Coders at Work and read what people like jwz say about complexity... and I think to myself: keep it simple, stupid. I like simple things. I've always loved the Unix Philosophy, but even after trying to keep the new server source modular and sleek, I loathe having to write one more line of code. It feels like I'm just piling crap on top of other crap. Maybe I'm naive in thinking every piece of software can be simple and clever. Maybe it's just a phase... or is the Unix Philosophy basically dead when it comes to the current state of (web) development? I'm just kind of disheartened :(

  • Anyone willing to help out a javascript n00b? :-)

    - by Splynx
    Since I am asking for a lot, and know it, the following is a wall of text for those who might show some interest and want to know a little before offering their help to me. First a little about my level of programming skills, and a little about what I ask for.

    Where I'm at: I am not totally new to JavaScript, and have dabbled a little with PHP earlier (well, dabbled a lot with PHP in fact, but never got good at it because I program alone, and until now I have never used forums to get help, other than searching to see if anyone else had my problem before and what the solution was). So I am not an intuitive or talented programmer; I'm more of a very meticulous programmer, and you would be surprised how far you can get with if-else... (OK, that's a joke, hehe). My solutions are usually (I am guessing here) not the best ones, and slow I take it, and the code is usually too long, and I have to look up most of the stuff I use (really, a lot of it is not done "freehand").

    I have a LOT of experience with HTML and CSS, and have always written well-formed markup. I am really into cross-browser work, and always require that my work validates when it's done. I also worry about optimization a lot: I work with sprites for images, minimize the number of HTTP requests, use H1, H2, etc. where they are logically correct, and use the correct elements rather than just div, span, or p everything. So because I am a workhorse and very meticulous, I can actually pull off some quite "advanced" features, but it's always the basics that bite me in the end. Not fully understanding the syntax and so on usually gives me problems.

    I have recently discovered jQuery, which is a lot of fun, but I want to use it for the DOM node manipulation/handling only. As I mentioned, I worry about optimization, and jQuery used for everything seems not optimal; it strikes me that doing it yourself when possible is faster than accessing another script that may take a whole lot of other considerations into perspective when handling your variables and objects (and I am just guessing here, since, as explained, I know nothing).

    So that's where I'm at. As mentioned, I just started with JavaScript "for real", so I do not have much to show, but at the end of my wall of text you can see two unfinished scripts I have made, so you can see roughly where I'm at; just check out the URL without the /feedback.html for the second example (I am only allowed to post one link since I am also an SO n00b). (And for those rushing over to a validation service: remember I wrote "when it's done"...)

    What I ask for: I am figuring this. I have a piece of code I am working on at the moment, and this little project has taught me a whole lot already; I have "grown" a lot as a JavaScript programmer. If I add a whole lot of comments to the script, and explain what it is intended to do, will you then show me where: I am writing incorrect code or making mistakes; my code could be more optimal; I am just simply being a muppet. The code I want to use as the background for the tuition is the one here: http://projects.1000monkeys.dk/feedback.html Use Firebug and have a quick look-see...

  • Can a conforming C implementation #define NULL to be something wacky

    - by janks
    I'm asking because of the discussion that's been provoked in this thread: http://stackoverflow.com/questions/2597142/when-was-the-null-macro-not-0/2597232 Trying to have a serious back-and-forth discussion using comments under other people's replies is not easy or fun, so I'd like to hear what our C experts think without being restricted to 500 characters at a time.

    The C standard has precious few words to say about NULL and null pointer constants. There are only two relevant sections that I can find. First:

        3.2.2.3 Pointers

        An integral constant expression with the value 0, or such an expression cast to
        type void *, is called a null pointer constant. If a null pointer constant is
        assigned to or compared for equality to a pointer, the constant is converted to
        a pointer of that type. Such a pointer, called a null pointer, is guaranteed to
        compare unequal to a pointer to any object or function.

    and second:

        4.1.5 Common definitions <stddef.h>

        The macros are NULL, which expands to an implementation-defined null pointer
        constant; ...

    The question is: can NULL expand to an implementation-defined null pointer constant that is different from the ones enumerated in 3.2.2.3? In particular, could it be defined as:

        #define NULL __builtin_magic_null_pointer

    or even:

        #define NULL ((void*)-1)

    My reading of 3.2.2.3 is that it specifies that an integral constant expression of 0, and an integral constant expression of 0 cast to type void*, must be among the forms of null pointer constant that the implementation recognizes, but that it isn't meant to be an exhaustive list. I believe that the implementation is free to recognize other source constructs as null pointer constants, so long as no other rules are broken.

    So for example, it is provable that

        #define NULL (-1)

    is not a legal definition, because in

        if (NULL) do_stuff();

    do_stuff() must not be called, whereas with

        if (-1) do_stuff();

    do_stuff() must be called; since they are equivalent, this cannot be a legal definition of NULL. But the standard says that integer-to-pointer conversions (and vice versa) are implementation-defined, therefore it could define the conversion of -1 to a pointer as a conversion that produces a null pointer. In which case

        if ((void*)-1)

    would evaluate to false, and all would be well.

    So what do other people think? I'd ask everybody to especially keep in mind the "as-if" rule described in 2.1.2.3 Program execution. It's huge and somewhat roundabout, so I won't paste it here, but it essentially says that an implementation merely has to produce the same observable side effects as are required of the abstract machine described by the standard. It says that any optimizations, transformations, or whatever else the compiler wants to do to your program are perfectly legal so long as the observable side effects of the program aren't changed by them. So if you are looking to prove that a particular definition of NULL cannot be legal, you'll need to come up with a program that can prove it: either one like mine that blatantly breaks other clauses in the standard, or one that can legally detect whatever magic the compiler has to do to make the strange NULL definition work.

    Steve Jessop found an example of a way for a program to detect that NULL isn't defined to be one of the two forms of null pointer constant in 3.2.2.3, which is to stringize the constant:

        #define stringize_helper(x) #x
        #define stringize(x) stringize_helper(x)

    Using this macro, one could

        puts(stringize(NULL));

    and "detect" that NULL does not expand to one of the forms in 3.2.2.3. Is that enough to render other definitions illegal? I just don't know. Thanks!
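
    For the curious, that probe is easy to make runnable; a minimal sketch (the output is simply whatever spelling of NULL the implementation uses, e.g. ((void *)0)):

        #include <stdio.h>
        #include <stddef.h>

        #define stringize_helper(x) #x
        #define stringize(x) stringize_helper(x)

        int main(void)
        {
            /* Prints the literal expansion of NULL on this implementation;
               this inspects the token sequence, not a null pointer value. */
            puts(stringize(NULL));
            return 0;
        }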

  • Is it possible to embed Cockburn style textual UML Use Case content in the code base to improve code

    - by fooledbyprimes
    Experimenting with Cockburn use cases in code: I was writing some complicated UI code, and I decided to employ Cockburn use cases with fish, kite, and sea levels (discussed by Martin Fowler in his book 'UML Distilled'). I wrapped Cockburn use cases in static C# objects so that I could test logical conditions against static constants which represented steps in a UI workflow. The idea was that you could read the code and know what it was doing, because the wrapped objects and their public constants gave you ENGLISH use cases via namespaces. Also, I was going to use reflection to pump out error messages that included the described use cases; the idea is that the stack trace could include some UI use case steps IN ENGLISH. It turned out to be a fun way to achieve a mini, pseudo, light-weight domain language, but without having to write a DSL compiler.

    So my question is whether or not this is a good way to do this. Has anyone out there ever done something similar?

    C# example snippets follow. Assume we have some aspx page which has 3 user controls (with lots of clickable stuff). The user must click on stuff in one particular user control (possibly making some kind of selection), and then the UI must visually cue the user that the selection was successful. Now, while that item is selected, the user must browse through a gridview to find an item within one of the other user controls and then select something. This sounds like an easy thing to manage, but the code can get ugly. In my case, the user controls all sent event messages which were captured by the main page. This way, the page acted like a central processor of UI events and could keep track of what happens when the user is clicking around. So, in the main aspx page, we capture the first user control's event:

        using MyCompany.MyApp.Web.UseCases;

        protected void MyFirstUserControl_SomeUIWorkflowRequestCommingIn(object sender, EventArgs e)
        {
            // some code here to respond and make "state" changes or whatever
            // blah blah blah
            // finally we have this (how did we know to call the fish level method?
            // because we knew when we wrote the code to send the event in the user control)
            UpdateUserInterfaceOnFishLevelUseCaseGoalSuccess(FishLevel.SomeNamedUIWorkflow.SelectedItemForPurchase);
        }

        protected void UpdateUserInterfaceOnFishLevelGoalSuccess(FishLevel.SomeNamedUIWorkflow goal)
        {
            switch (goal)
            {
                case FishLevel.SomeNamedUIWorkflow.NewMasterItemSelected:
                    // call some UI related methods here, including methods for the
                    // other user controls if necessary...
                    break;
                case FishLevel.SomeNamedUIWorkflow.DrillDownOnDetails:
                    // call some UI related methods here, including methods for the
                    // other user controls if necessary...
                    break;
                case FishLevel.SomeNamedUIWorkflow.CancelMultiSelect:
                    // call some UI related methods here, including methods for the
                    // other user controls if necessary...
                    break;
                // more cases...
            }
        }

        // also we have
        protected void UpdateUserInterfaceOnSeaLevelGoalSuccess(SeaLevel.SomeNamedUIWorkflow goal)
        {
            switch (goal)
            {
                case SeaLevel.CheckOutWorkflow.ChangedCreditCard:
                    // do stuff
                    break;
                // more cases...
            }
        }

    So, in the MyCompany.MyApp.Web.UseCases namespace we might have code like this:

        class SeaLevel...
        class FishLevel...
        class KiteLevel...

    The workflow use cases embedded in the classes could be inner classes, static methods, enumerations, or whatever gives you the cleanest namespace. I can't remember what I did originally, but you get the picture.
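
    For concreteness, a hypothetical shape for those wrapper types (a sketch of what the snippets above assume, not the original code): each "level" is a static class and each workflow is an enum whose members read as English use-case steps.

        namespace MyCompany.MyApp.Web.UseCases
        {
            public static class FishLevel
            {
                public enum SomeNamedUIWorkflow
                {
                    SelectedItemForPurchase,
                    NewMasterItemSelected,
                    DrillDownOnDetails,
                    CancelMultiSelect
                }
            }

            public static class SeaLevel
            {
                public enum CheckOutWorkflow
                {
                    ChangedCreditCard
                }
            }
        }

    With this layout, a call site like UpdateUserInterfaceOnFishLevelGoalSuccess(FishLevel.SomeNamedUIWorkflow.SelectedItemForPurchase) reads almost like the use-case text itself.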

  • Code golf: Word frequency chart

    - by ChristopheD
    The challenge: Build an ASCII chart of the most commonly used words in a given text.

    The rules:

    - Only accept a-z and A-Z (alphabetic characters) as part of a word.
    - Ignore casing (She == she for our purpose).
    - Ignore the following words (quite arbitrary, I know): the, and, of, to, a, i, it, in, or, is.
    - Clarification: considering don't: this would be taken as 2 different 'words' in the ranges a-z and A-Z: (don and t).
    - Optionally (it's too late to be formally changing the specifications now) you may choose to drop all single-letter 'words' (this could potentially make for a shortening of the ignore list too).

    Parse a given text (read a file specified via command line arguments or piped in; presume us-ascii) and build us a word frequency chart with the following characteristics (see the reference sketch after the results table below):

    - Display the chart (also see the example below) for the 22 most common words (ordered by descending frequency).
    - The bar width represents the number of occurrences (frequency) of the word (proportionally). Append one space and print the word.
    - Make sure these bars (plus space-word-space) always fit: bar + [space] + word + [space] should always be <= 80 characters (make sure you account for possibly differing bar and word lengths: e.g. the second most common word could be a lot longer than the first while not differing so much in frequency). Maximize bar width within these constraints and scale the bars appropriately (according to the frequencies they represent).

    An example: the text for the example can be found here (Alice's Adventures in Wonderland, by Lewis Carroll). This specific text would yield the following chart:

         _________________________________________________________________________
        |_________________________________________________________________________| she
        |_______________________________________________________________| you
        |____________________________________________________________| said
        |____________________________________________________| alice
        |______________________________________________| was
        |__________________________________________| that
        |___________________________________| as
        |_______________________________| her
        |____________________________| with
        |____________________________| at
        |___________________________| s
        |___________________________| t
        |_________________________| on
        |_________________________| all
        |______________________| this
        |______________________| for
        |______________________| had
        |_____________________| but
        |____________________| be
        |____________________| not
        |___________________| they
        |__________________| so

    For your information: these are the frequencies the above chart is built upon:

        [('she', 553), ('you', 481), ('said', 462), ('alice', 403), ('was', 358),
         ('that', 330), ('as', 274), ('her', 248), ('with', 227), ('at', 227),
         ('s', 219), ('t', 218), ('on', 204), ('all', 200), ('this', 181),
         ('for', 179), ('had', 178), ('but', 175), ('be', 167), ('not', 166),
         ('they', 155), ('so', 152)]

    A second example (to check if you implemented the complete spec): replace every occurrence of you in the linked Alice in Wonderland file with superlongstringstring:

         ________________________________________________________________
        |________________________________________________________________| she
        |_______________________________________________________| superlongstringstring
        |_____________________________________________________| said
        |______________________________________________| alice
        |________________________________________| was
        |_____________________________________| that
        |______________________________| as
        |___________________________| her
        |_________________________| with
        |_________________________| at
        |________________________| s
        |________________________| t
        |______________________| on
        |_____________________| all
        |___________________| this
        |___________________| for
        |___________________| had
        |__________________| but
        |_________________| be
        |_________________| not
        |________________| they
        |________________| so

    The winner: shortest solution (by character count, per language). Have fun!

    Edit: table summarizing the results so far (2012-02-15) (originally added by user Nas Banov):

        Language            Relaxed  Strict
        =========           =======  ======
        GolfScript          130      143
        Perl                         185
        Windows PowerShell  148      199
        Mathematica                  199
        Ruby                185      205
        Unix Toolchain      194      228
        Python              183      243
        Clojure                      282
        Scala                        311
        Haskell                      333
        Awk                          336
        R                            298
        Javascript          304      354
        Groovy                       321
        Matlab                       404
        C#                           422
        Smalltalk                    386
        PHP                          450
        F#                           452
        TSQL                483      507

    The numbers represent the length of the shortest solution in a specific language. "Strict" refers to a solution that implements the spec completely (draws |____| bars, closes the first bar on top with a ____ line, accounts for the possibility of long words with high frequency, etc.). "Relaxed" means some liberties were taken to shorten the solution. Only solutions shorter than 500 characters are included. The list of languages is sorted by the length of the 'strict' solution. 'Unix Toolchain' is used to signify various solutions that use the traditional *nix shell plus a mix of tools (like grep, tr, sort, uniq, head, perl, awk).
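
    As a baseline, a deliberately un-golfed Python 3 sketch of the spec (an illustration, not one of the submitted solutions; tie-breaking among equal frequencies is left unspecified):

        import re
        import sys
        from collections import Counter

        IGNORE = {"the", "and", "of", "to", "a", "i", "it", "in", "or", "is"}

        text = sys.stdin.read()
        words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in IGNORE]
        top = Counter(words).most_common(22)

        # Largest scale such that "|" + bar + "| " + word still fits in 80 columns.
        k = min((80 - 3 - len(w)) / f for w, f in top)
        print(" " + "_" * int(top[0][1] * k))   # closing line above the first bar
        for w, f in top:
            print("|" + "_" * int(f * k) + "| " + w)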

  • translating play in HTML to python

    - by aharon
    So, I'd like to represent one of Shakespeare's plays, Hamlet, with the following objects (maybe this isn't the best representation; if so, please tell me):

        class Play():
            acts = []
            ...
            def add_act(self, act):
                acts.append(act)

        class Act():
            scenes = []
            ...
            def add_scene(self, scene):
                scenes.append(scene)

        class Scene():
            elems = []
            def __init__(self, title, setting=""):
                ...
            def add_elem(self, elem):
                elems.append(elem)
            ...

        class StageDirection():  # elem
            def __init__(self, text):
                ...

        class Line():  # elem
            # A None character represents a continuation from the previous line.
            # id could be, for example, 1.1.1
            def __init__(self, id, text, character=None):
                ...

    There are other methods, of course, for printing and such in each of the classes. The question is, how do I get a structure based on these classes (or something like them) from HTML 4 code that looks like this:

        <H3>ACT I</h3>
        <h3>SCENE I. Elsinore. A platform before the castle.</h3>
        <p><blockquote>
        <i>FRANCISCO at his post. Enter to him BERNARDO</i>
        </blockquote>
        <A NAME=speech1><b>BERNARDO</b></a>
        <blockquote>
        <A NAME=1.1.1>Who's there?</A><br>
        </blockquote>
        <A NAME=speech2><b>FRANCISCO</b></a>
        <blockquote>
        <A NAME=1.1.2>Nay, answer me: stand, and unfold yourself.</A><br>
        </blockquote>
        <A NAME=speech3><b>BERNARDO</b></a>
        <blockquote>
        <A NAME=1.1.3>Long live the king!</A><br>
        </blockquote>
        <A NAME=speech4><b>FRANCISCO</b></a>
        <blockquote>
        <A NAME=1.1.4>Bernardo?</A><br>
        </blockquote>
        <A NAME=speech5><b>BERNARDO</b></a>
        <blockquote>
        <A NAME=1.1.5>He.</A><br>
        </blockquote>
        <!-- for more, see the source of shakespeare.mit.edu/hamlet/full.html -->

    translating that into something like this:

        play = Play()
        actI = Act()
        sceneI = Scene("Scene I", "Elsinore. A platform before the castle.")
        sceneI.add_elem(StageDirection("Francisco at his post. Enter to him Bernardo."))
        sceneI.add_elem(Line("Bernardo", "Who's there?"))
        ...

    Of course, I don't expect all the code, but what libraries (and, where there aren't libraries, what logic) should I use? Thanks. (This is for a future open-source project and me learning Python for fun; not homework.)
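
    A sketch of one way to start, assuming BeautifulSoup (a common third-party choice for this kind of scraping: pip install beautifulsoup4); it leans on the NAME attributes visible in the markup above and ignores stage directions for brevity:

        from bs4 import BeautifulSoup

        soup = BeautifulSoup(open("full.html").read(), "html.parser")
        character = None
        for a in soup.find_all("a"):
            name = a.get("name", "")
            if name.startswith("speech"):      # <A NAME=speech1><b>BERNARDO</b></a>
                character = a.get_text()
            elif name.count(".") == 2:         # <A NAME=1.1.1>Who's there?</A>
                print(name, character, a.get_text())

    From there, the printed triples (id, character, text) map directly onto Line(id, text, character), with act/scene boundaries recoverable from the h3 headings.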

  • Use QuickCheck by generating primes

    - by Dan
    Background: For fun, I'm trying to write a property for QuickCheck that can test the basic idea behind cryptography with RSA.

    - Choose two distinct primes, p and q.
    - Let N = p*q.
    - e is some number relatively prime to (p-1)(q-1) (in practice, e is usually 3 for fast encoding).
    - d is the modular inverse of e modulo (p-1)(q-1).
    - For all x such that 1 < x < N, it is always true that (x^e)^d = x modulo N.

    In other words, x is the "message", raising it to the eth power mod N is the act of "encoding" the message, and raising the encoded message to the dth power mod N is the act of "decoding" it. (The property is also trivially true for x = 1, a case which is its own encryption.)

    Code: Here are the methods I have coded up so far:

        import Test.QuickCheck

        -- modular exponentiation
        modExp :: Integral a => a -> a -> a -> a
        modExp y z n = modExp' (y `mod` n) z `mod` n
          where modExp' y z | z == 0 = 1
                            | even z = modExp (y*y) (z `div` 2) n
                            | odd z  = (modExp (y*y) (z `div` 2) n) * y

        -- relatively prime
        rPrime :: Integral a => a -> a -> Bool
        rPrime a b = gcd a b == 1

        -- multiplicative inverse (modular)
        mInverse :: Integral a => a -> a -> a
        mInverse 1 _ = 1
        mInverse x y = (n * y + 1) `div` x
          where n = x - mInverse (y `mod` x) x

        -- just a quick way to test for primality
        n `divides` x = x `mod` n == 0
        primes = 2 : filter isPrime [3..]
        isPrime x = null . filter (`divides` x) $ takeWhile (\y -> y*y <= x) primes

        -- the property
        prop_rsa (p,q,x) = isPrime p && isPrime q &&
                           p /= q && x > 1 && x < n &&
                           rPrime e t ==>
                           x == (x `powModN` e) `powModN` d
          where e = 3
                n = p*q
                t = (p-1)*(q-1)
                d = mInverse e t
                a `powModN` b = modExp a b n

    (Thanks, Google and a random blog, for the implementation of the modular multiplicative inverse.)

    Question: The problem should be obvious: there are way too many conditions on the property to make it at all usable. Trying to invoke quickCheck prop_rsa in ghci made my terminal hang. So I've poked around the QuickCheck manual a bit, and it says properties may take the form

        forAll <generator> $ \<pattern> -> <property>

    How do I make a <generator> for prime numbers? Or with the other constraints, so that quickCheck doesn't have to sift through a bunch of failed conditions? Any other general advice (especially regarding QuickCheck) is welcome.
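
    One possible shape for such a generator, reusing the definitions above (a sketch, and only one of several reasonable designs): drawing p and q from a precomputed pool of primes means QuickCheck discards far fewer generated cases.

        import Test.QuickCheck

        -- Reuses primes, rPrime, mInverse and modExp from the question.
        genPrime :: Gen Integer
        genPrime = elements (take 100 primes)

        prop_rsa' :: Property
        prop_rsa' =
          forAll genPrime $ \p ->
          forAll genPrime $ \q ->
            p /= q ==>
              forAll (choose (2, p*q - 1)) $ \x ->
                let n = p * q
                    t = (p - 1) * (q - 1)
                    e = 3
                    d = mInverse e t
                in rPrime e t ==> modExp (modExp x e n) d n == x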

  • Why do I get a segmentation fault while redirecting sys.stdout to Tkinter.Text widget in Python?

    - by Brent Nash
    I'm in the process of building a GUI-based application with Python/Tkinter that builds on top of the existing Python bdb module. In this application, I want to silence all stdout/stderr from the console and redirect it to my GUI. To accomplish this purpose, I've written a specialized Tkinter.Text object (code at the end of the post). The basic idea is that when something is written to sys.stdout, it shows up as a line in the Text with the color black. If something is written to sys.stderr, it shows up as a line in the Text with the color red. As soon as something is written, the Text always scrolls down to view the most recent line.

    I'm using Python 2.6.1 at the moment. On Mac OS X 10.5, this seems to work great; I have had zero problems with it. On Red Hat Enterprise Linux 5, however, I pretty reliably get a segmentation fault during the run of a script. The segmentation fault doesn't always occur in the same place, but it pretty much always occurs. If I comment out the sys.stdout= and sys.stderr= lines from my code, the segmentation faults seem to go away. I'm sure there are other ways around this that I will probably have to resort to, but can anyone see anything I'm doing blatantly wrong here that could be causing these segmentation faults? It's driving me nuts. Thanks!

    PS: I realize redirecting sys.stderr to the GUI might not be a great idea, but I still get segmentation faults even when I only redirect sys.stdout and not sys.stderr. I also realize that I'm allowing the Text to grow indefinitely at the moment.

        class ConsoleText(tk.Text):
            '''A Tkinter Text widget that provides a scrolling display of
            console stderr and stdout.'''

            class IORedirector(object):
                '''A general class for redirecting I/O to this Text widget.'''
                def __init__(self, text_area):
                    self.text_area = text_area

            class StdoutRedirector(IORedirector):
                '''A class for redirecting stdout to this Text widget.'''
                def write(self, str):
                    self.text_area.write(str, False)

            class StderrRedirector(IORedirector):
                '''A class for redirecting stderr to this Text widget.'''
                def write(self, str):
                    self.text_area.write(str, True)

            def __init__(self, master=None, cnf={}, **kw):
                '''See the __init__ for Tkinter.Text for most of this stuff.'''
                tk.Text.__init__(self, master, cnf, **kw)
                self.started = False
                self.write_lock = threading.Lock()
                self.tag_configure('STDOUT', background='white', foreground='black')
                self.tag_configure('STDERR', background='white', foreground='red')
                self.config(state=tk.DISABLED)

            def start(self):
                if self.started:
                    return
                self.started = True
                self.original_stdout = sys.stdout
                self.original_stderr = sys.stderr
                stdout_redirector = ConsoleText.StdoutRedirector(self)
                stderr_redirector = ConsoleText.StderrRedirector(self)
                sys.stdout = stdout_redirector
                sys.stderr = stderr_redirector

            def stop(self):
                if not self.started:
                    return
                self.started = False
                sys.stdout = self.original_stdout
                sys.stderr = self.original_stderr

            def write(self, val, is_stderr=False):
                # Fun fact: the way Tkinter Text objects work is that if they're
                # disabled, you can't write into them AT ALL (via the GUI or
                # programmatically). Since we want them disabled for the user, we
                # have to set them to NORMAL (a.k.a. ENABLED), write to them, then
                # set their state back to DISABLED.
                self.write_lock.acquire()
                self.config(state=tk.NORMAL)
                self.insert('end', val, 'STDERR' if is_stderr else 'STDOUT')
                self.see('end')
                self.config(state=tk.DISABLED)
                self.write_lock.release()
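
    Tkinter is generally not safe to call from any thread other than the one running the main loop, so a commonly used arrangement is to have write() only enqueue the string and let the main loop drain the queue with after(). A sketch, assuming the same import Tkinter as tk and Python 2.6's Queue module ('queue' on Python 3):

        import Queue

        class QueuedConsoleText(tk.Text):
            '''Sketch: same idea as ConsoleText above, but the widget is
            touched exclusively from the Tk main loop.'''
            def __init__(self, master=None, cnf={}, **kw):
                tk.Text.__init__(self, master, cnf, **kw)
                self._queue = Queue.Queue()
                self.tag_configure('STDOUT', background='white', foreground='black')
                self.tag_configure('STDERR', background='white', foreground='red')
                self.config(state=tk.DISABLED)
                self.after(100, self._poll)

            def write(self, val, is_stderr=False):
                self._queue.put((val, is_stderr))  # safe from any thread

            def _poll(self):
                while True:
                    try:
                        val, is_stderr = self._queue.get_nowait()
                    except Queue.Empty:
                        break
                    self.config(state=tk.NORMAL)
                    self.insert('end', val, 'STDERR' if is_stderr else 'STDOUT')
                    self.see('end')
                    self.config(state=tk.DISABLED)
                self.after(100, self._poll)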

  • ASP.NET GridView - Cannot set the colour of the row during databind?

    - by Dan
    This is driving me NUTS! It's something that I've done hundreds of times with a DataGrid. I'm now using a GridView and I can't figure this out. I've got this grid:

        <asp:GridView AutoGenerateColumns="false" runat="server" ID="gvSelect"
             CssClass="GridViewStyle" GridLines="None" ShowHeader="False"
             PageSize="20" AllowPaging="True">
            <Columns>
                <asp:TemplateField>
                    <ItemTemplate>
                        <asp:Label runat="server" ID="lbldas" Text="blahblah"></asp:Label>
                    </ItemTemplate>
                </asp:TemplateField>
            </Columns>
        </asp:GridView>

    And during the RowDataBound I've tried:

        Protected Sub gvSelect_RowDataBound(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewRowEventArgs) Handles gvSelect.RowCreated
            If e.Row.RowType = DataControlRowType.DataRow Then
                e.Row.Attributes.Add("onMouseOver", "this.style.backgroundColor='lightgrey'")
            End If
        End Sub

    And it NEVER sets the row backcolor. I've been successful in using:

        gridrow.Cells(0).BackColor = Drawing.Color.Blue

    But doing the entire row? NOPE! And it's driving me nuts. Does ANYONE have a solution for me?

    And just for fun I put this on the SAME page:

        <asp:DataGrid AutoGenerateColumns="false" runat="server" ID="dgSelect"
             GridLines="None" ShowHeader="False" PageSize="20" AllowPaging="True">
            <Columns>
                <asp:TemplateColumn>
                    <ItemTemplate>
                        <asp:Label runat="server" ID="lbldas" Text="blahblah"></asp:Label>
                    </ItemTemplate>
                </asp:TemplateColumn>
            </Columns>
        </asp:DataGrid>

    And in the ItemDataBound I put:

        If Not e.Item.ItemType = ListItemType.Header And Not e.Item.ItemType = ListItemType.Footer Then
            e.Item.Attributes.Add("onMouseOver", "this.style.backgroundColor='lightgrey'")
        End If

    And it works as expected. SO, what am I doing wrong with the GridView?

    UPDATE: I thought I'd post the resulting HTML to show that any styles aren't affecting this. Here's the GridView HTML:

        <div class="AspNet-GridView" id="gvSelect">
            <table cellpadding="0" cellspacing="0" summary="">
                <tbody>
                    <tr>
                        <td>
                            <span id="gvSelect_ctl02_lbldas">blahblah</span>
                        </td>
                    </tr>
                </tbody>
            </table>
        </div>

    And here's the resulting DataGrid HTML:

        <table cellspacing="0" border="0" id="dgSelect" style="border-collapse:collapse;">
            <tr onMouseOver="this.style.backgroundColor='lightgrey'">
                <td>
                    <span id="dgSelect_ctl03_lbldas">blahblah</span>
                </td>
            </tr>
        </table>

    See, the main difference is the <tr> tag: it never gets set in the GridView, and I don't know why. I've traced through it, and the code gets run. :S

  • QGraphicsView and custom Cursors

    - by Etienne de Martel
    I am trying to make use of a mix of custom cursors and preset cursors for my QGraphicsView. In my implementation we have created a notion of "modes" for the view, meaning that depending on what "mode" the user is in, different things will happen on a left-click, or on a left-click drag. Anyway, none of that is the problem, just the context.

    The problem arises when I try to change the cursor for each mode. For instance, for mode 1 we want to show the regular arrow cursor, but for mode 2 we want to use a custom pixmap. Seemingly simple, we call graphicsview->viewport()->setCursor(Qt::QArrowCursor) when we are switching to mode 1, and graphicsview->viewport()->setCursor(our custom cursor) for mode 2. Except it doesn't work at all.

    Firstly, the cursor does not change to the custom cursor. That is the first problem. However, if through another operation the drag mode of the graphics view gets set to ScrollHandDrag, the cursor will switch to the custom cursor once the drag operation is complete. Weird. But the plot thickens...

    Once we switch to the custom cursor, it can never be changed back to the arrow cursor, no matter how many times we call setCursor(Qt::QArrowCursor). It also doesn't seem to matter whether I call setCursor on the viewport or on the graphics view itself.

    So, just for fun, I added a call to graphicsview->unsetCursor() just before we want to change the cursor, and that at least rectifies the second problem: the cursor changes just fine so long as we do a little hand dragging in between. Better, but certainly not optimal. It should be noted, however, that calling unsetCursor on the viewport doesn't work; it must absolutely be done on the graphicsview, regardless of the fact that we are setting the cursor on the viewport.

    To completely patch over the problem I have added these two lines after I set the cursor:

        graphicsview->setDragMode(QGraphicsView::ScrollHandDrag);
        graphicsview->setDragMode(QGraphicsView::NoDrag);

    Which works, but ye gads!! Something magical is happening inside these two methods that fixes the problem, but glancing at the code I don't see what; something to do with the fact that the drag mode changes the cursor, I imagine.

    Just for completeness, I should also mention that the thing that triggers the mode change is a QPushButton that has been added to the scene using QGraphicsScene->addWidget(). I don't know if that has anything to do with it, but you never know.

    I am hoping that someone can clarify why I need to make these seemingly random calls. I don't think I am doing anything wrong anywhere. Thanks in advance for any help.

    EDIT: Here is an actual code example with the cursor patches as described above. You can look at and/or download it from the link below; it was a little long to paste here. I included the framework around which the cursors are changed, because I have a funny feeling that that is important somehow.

        https://gist.github.com/712654

    The code where the problem lies is in MyGraphicsView.cpp, starting at line 104. This is where the cursor is set in the graphics view. It is exactly as described above. Keep in mind, with the very ugly patches in place the cursors do work, more or less; without those lines you will see very clearly the problems listed in the post above. Also included in the link is all the code for a mainWindow that uses the view, etc. The only things missing are the images I am using, but the images themselves don't matter; any 16x16 PNGs will do.
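
    Condensed, the workaround described above looks roughly like this; a sketch with a hypothetical Mode enum and resource path (note the enum in Qt is spelled Qt::ArrowCursor):

        // Hypothetical mode-switch slot on the view subclass.
        void MyGraphicsView::setMode(Mode mode)
        {
            unsetCursor();  // must be called on the view, not the viewport
            if (mode == NormalMode)
                viewport()->setCursor(Qt::ArrowCursor);
            else
                viewport()->setCursor(QCursor(QPixmap(":/cursors/custom.png")));
            // The "magic" refresh: toggling the drag mode forces the
            // viewport cursor to be re-applied.
            setDragMode(QGraphicsView::ScrollHandDrag);
            setDragMode(QGraphicsView::NoDrag);
        }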

  • Matrix Multiplication with Threads: Why is it not faster?

    - by prelic
    Hey all, so I've been playing around with pthreads, specifically trying to calculate the product of two matrices. My code is extremely messy because it was just supposed to be a quick little fun project for myself, but the thread theory I used was very similar to:

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define M 3
        #define K 2
        #define N 3
        #define NUM_THREADS 10

        int A[M][K] = { {1,4}, {2,5}, {3,6} };
        int B[K][N] = { {8,7,6}, {5,4,3} };
        int C[M][N];

        struct v {
            int i; /* row */
            int j; /* column */
        };

        void *runner(void *param); /* the thread */

        int main(int argc, char *argv[])
        {
            int i, j, count = 0;
            for (i = 0; i < M; i++) {
                for (j = 0; j < N; j++) {
                    /* Assign a row and column for each thread */
                    struct v *data = (struct v *) malloc(sizeof(struct v));
                    data->i = i;
                    data->j = j;
                    /* Now create the thread passing it data as a parameter */
                    pthread_t tid;       // Thread ID
                    pthread_attr_t attr; // Set of thread attributes
                    // Get the default attributes
                    pthread_attr_init(&attr);
                    // Create the thread
                    pthread_create(&tid, &attr, runner, data);
                    // Make sure the parent waits for all threads to complete
                    pthread_join(tid, NULL);
                    count++;
                }
            }
            // Print out the resulting matrix
            for (i = 0; i < M; i++) {
                for (j = 0; j < N; j++) {
                    printf("%d ", C[i][j]);
                }
                printf("\n");
            }
        }

        // The thread will begin control in this function
        void *runner(void *param)
        {
            struct v *data = param; // the structure that holds our data
            int n, sum = 0;         // the counter and sum
            // Row multiplied by column
            for (n = 0; n < K; n++) {
                sum += A[data->i][n] * B[n][data->j];
            }
            // Assign the sum to its coordinate
            C[data->i][data->j] = sum;
            // Exit the thread
            pthread_exit(0);
        }

    source: http://macboypro.com/blog/2009/06/29/matrix-multiplication-in-c-using-pthreads-on-linux/

    For the non-threaded version, I used the same setup (3 2-d matrices, dynamically allocated structs to hold r/c), and added a timer. First trials indicated that the non-threaded version was faster. My first thought was that the dimensions were too small to notice a difference, and that it was taking longer to create the threads. So I upped the dimensions to about 50x50, randomly filled, and ran it, and I'm still not seeing any performance upgrade with the threaded version. What am I missing here?
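
    One detail worth noticing in the loop above: pthread_join() is called immediately after each pthread_create(), so only one worker ever runs at a time and the computation is effectively serial, plus thread start-up cost. A sketch of the create-all-then-join-all shape (slotting into main above, reusing i, j and runner):

        /* Create every worker first, then join them all afterwards. */
        pthread_t tids[M * N];
        int t = 0;
        for (i = 0; i < M; i++) {
            for (j = 0; j < N; j++) {
                struct v *data = (struct v *) malloc(sizeof(struct v));
                data->i = i;
                data->j = j;
                pthread_create(&tids[t++], NULL, runner, data);
            }
        }
        for (t = 0; t < M * N; t++)
            pthread_join(tids[t], NULL);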

  • Decentralized synchronized secure data storage

    - by Alberich
    Introduction

    Hi, I am going to ask a question which seems utopic to me, but I need to know if there is a way to achieve what I need. And if not, I need to know why not.

    The idea

    Suppose I have a database structure, in MySQL. I want to create some solution to allow anyone (no matter who, no matter where) to have a synchronized copy (updated clone) of this database (with its content). And it is not going to be just one synchronized copy; it could (and should) be a multiple replication (supposing the basics, this means, for example, ten copies all over the world). And, the most important thing: it must be secure. By secure I mean only real, accepted transactions will be synchronized with all the other (no matter how many) database copies/clones. Note: since it would be quite difficult to make the synchronization in real time, I will design everything to make this feature dispensable, so it is not required.

    My auto-suggestion

    This is how I am thinking to manage it. Time identifiers and update checking: every action (insert, update, delete...) will be stored as the action instruction itself, associated with a time identifier. (I think that, better than a DATETIME field, it'll be an INT one, with the number of milliseconds passed since 1st January 2013, for example.) So each copy is going to ask the "neighbour copy" for new actions done since the last update, and execute them after checking that they are allowed (see the sketch after this list).

    Problem 1: the "neighbour copy" could be outdated too. Solution 1: do not ask just one neighbour; create a random list with some of the copies/clones and ask them for news (I could avoid the list and ask ALL the clones for updates, but this would be inefficient if the number of clones ascends too much).

    Problem 2: real-time global synchronization is not active. What if someone at CLONE_ENTERPRISING inserts a row into TABLE, this row goes to every clone, then someone at CLONE_FIXEMALL deletes this row, and at the same time, somewhere in an outdated clone, someone at CLONE_DROPOUT edits this row (now nonexistent at the other clones)? Solution 2: easy stuff, force a GLOBAL synchronization before doing any new "depending-on-third-data action" (edit, for example). This global synchronization will be unnecessary when making an INSERT, for instance. Note: well, someone could have some fun and make the same insert in two clones... since they're not getting updated in real time, this row will exist twice. But it's the same as when we have one single database: in some needed cases we check if there is an existing same row before doing the final action. Not a problem.

    Problem 3: it is possible to edit the code and not filter actions, so someone could spread instructions to delete everything, or just engage in some trolling activity. This is not a problem, since good clones will always be somewhere. Those which went bad won't be of interest anymore.

    I really appreciate it if you read this far. I know this is not the perfect solution; it possibly has hundreds of holes, but it is my basic start. I will appreciate anything you can teach me now. Thanks a lot.

    PS: It could be that all this I am trying already exists and has its own name. Sorry for asking then (I'd thank you for that name anyway, if it exists).
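
    As a sketch of the "time identifiers" idea above (a hypothetical schema, nothing more), each clone could keep an append-only action log and ship rows newer than the peer's last-seen identifier:

        -- Hypothetical action log for the scheme described above.
        CREATE TABLE action_log (
            ts_ms     BIGINT       NOT NULL,  -- ms since 2013-01-01, as proposed
            clone_id  VARCHAR(64)  NOT NULL,  -- tie-breaker for identical ts_ms
            statement TEXT         NOT NULL,  -- the INSERT/UPDATE/DELETE itself
            PRIMARY KEY (ts_ms, clone_id)
        );

        -- What a clone asks a neighbour: "everything I have not seen yet".
        -- SELECT * FROM action_log WHERE ts_ms > ? ORDER BY ts_ms;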

  • How can I optimize this subqueried and Joined MySQL Query?

    - by kevzettler
    I'm pretty green on MySQL and I need some tips on cleaning up a query. It is used in several variations throughout a site. It's got some subqueries, derived tables, and fun going on. Here's the query:

        # Query_time: 2  Lock_time: 0  Rows_sent: 0  Rows_examined: 0
        SELECT *
        FROM (
            SELECT products.*,
                   categories.category_name AS category,
                   (SELECT COUNT(*) FROM distros
                     WHERE distros.product_id = products.product_id) AS distro_count,
                   (SELECT COUNT(*) FROM downloads
                     WHERE downloads.product_id = products.product_id
                       AND WEEK(downloads.date) = WEEK(CURDATE())) AS true_downloads,
                   (SELECT COUNT(*) FROM views
                     WHERE views.product_id = products.product_id
                       AND WEEK(views.date) = WEEK(CURDATE())) AS true_views
            FROM products
            INNER JOIN categories ON products.category_id = categories.category_id
            ORDER BY created_date DESC, true_views DESC
        ) AS count_table
        WHERE count_table.distro_count > 0
          AND count_table.status = 'published'
          AND count_table.active = 1
        LIMIT 0, 8

    Here's the explain:

        id | select_type        | table      | type  | possible_keys | key         | key_len | ref                                | rows | Extra
        ---+--------------------+------------+-------+---------------+-------------+---------+------------------------------------+------+---------------------------------------------
         1 | PRIMARY            | <derived2> | ALL   | NULL          | NULL        | NULL    | NULL                               |  232 | Using where
         2 | DERIVED            | categories | index | PRIMARY       | idx_name    | 47      | NULL                               |   13 | Using index; Using temporary; Using filesort
         2 | DERIVED            | products   | ref   | category_id   | category_id | 4       | digizald_db.categories.category_id |    9 |
         5 | DEPENDENT SUBQUERY | views      | ref   | product_id    | product_id  | 4       | digizald_db.products.product_id    |   46 | Using where
         4 | DEPENDENT SUBQUERY | downloads  | ref   | product_id    | product_id  | 4       | digizald_db.products.product_id    |   14 | Using where
         3 | DEPENDENT SUBQUERY | distros    | ref   | product_id    | product_id  | 4       | digizald_db.products.product_id    |    1 | Using index

        6 rows in set (0.04 sec)

    And the tables:

        mysql> describe products;

        Field          | Type                                             | Null | Key | Default           | Extra
        ---------------+--------------------------------------------------+------+-----+-------------------+---------------
        product_id     | int(10) unsigned                                 | NO   | PRI | NULL              | auto_increment
        product_key    | char(32)                                         | NO   |     | NULL              |
        title          | varchar(150)                                     | NO   |     | NULL              |
        company        | varchar(150)                                     | NO   |     | NULL              |
        user_id        | int(10) unsigned                                 | NO   | MUL | NULL              |
        description    | text                                             | NO   |     | NULL              |
        video_code     | text                                             | NO   |     | NULL              |
        category_id    | int(10) unsigned                                 | NO   | MUL | NULL              |
        price          | decimal(10,2)                                    | NO   |     | NULL              |
        quantity       | int(10) unsigned                                 | NO   |     | NULL              |
        downloads      | int(10) unsigned                                 | NO   |     | NULL              |
        views          | int(10) unsigned                                 | NO   |     | NULL              |
        status         | enum('pending','published','rejected','removed') | NO   |     | NULL              |
        active         | tinyint(1)                                       | NO   |     | NULL              |
        deleted        | tinyint(1)                                       | NO   |     | NULL              |
        created_date   | datetime                                         | NO   |     | NULL              |
        modified_date  | timestamp                                        | NO   |     | CURRENT_TIMESTAMP |
        scrape_source  | varchar(215)                                     | YES  |     | NULL              |

        18 rows in set (0.00 sec)

        mysql> describe categories;

        Field            | Type             | Null | Key | Default | Extra
        -----------------+------------------+------+-----+---------+---------------
        category_id      | int(10) unsigned | NO   | PRI | NULL    | auto_increment
        category_name    | varchar(45)      | NO   | MUL | NULL    |
        parent_id        | int(10) unsigned | YES  | MUL | NULL    |
        category_type_id | int(10) unsigned | NO   |     | NULL    |

        4 rows in set (0.00 sec)

        mysql> describe compatibilities;

        Field            | Type             | Null | Key | Default | Extra
        -----------------+------------------+------+-----+---------+---------------
        compatibility_id | int(10) unsigned | NO   | PRI | NULL    | auto_increment
        name             | varchar(45)      | NO   |     | NULL    |
        code_name        | varchar(45)      | NO   |     | NULL    |
        description      | varchar(128)     | NO   |     | NULL    |
        position         | int(10) unsigned | NO   |     | NULL    |

        5 rows in set (0.01 sec)

        mysql> describe distros;

        Field            | Type                                             | Null | Key | Default | Extra
        -----------------+--------------------------------------------------+------+-----+---------+---------------
        id               | int(10) unsigned                                 | NO   | PRI | NULL    | auto_increment
        product_id       | int(10) unsigned                                 | NO   | MUL | NULL    |
        compatibility_id | int(10) unsigned                                 | NO   | MUL | NULL    |
        user_id          | int(10) unsigned                                 | NO   |     | NULL    |
        status           | enum('pending','published','rejected','removed') | NO   |     | NULL    |
        distro_type      | enum('file','url')                               | NO   |     | NULL    |
        version          | varchar(150)                                     | NO   |     | NULL    |
        filename         | varchar(50)                                      | YES  |     | NULL    |
        url              | varchar(250)                                     | YES  |     | NULL    |
        virus            | enum('READY','PASS','FAIL')                      | YES  |     | NULL    |
        downloads        | int(10) unsigned                                 | NO   |     | 0       |

        11 rows in set (0.01 sec)

        mysql> describe downloads;

        Field      | Type             | Null | Key | Default | Extra
        -----------+------------------+------+-----+---------+---------------
        id         | int(10) unsigned | NO   | PRI | NULL    | auto_increment
        product_id | int(10) unsigned | NO   | MUL | NULL    |
        distro_id  | int(10) unsigned | NO   | MUL | NULL    |
        user_id    | int(10) unsigned | NO   | MUL | NULL    |
        ip_address | varchar(15)      | NO   |     | NULL    |
        date       | datetime         | NO   |     | NULL    |

        6 rows in set (0.01 sec)

        mysql> describe views;

        Field      | Type             | Null | Key | Default | Extra
        -----------+------------------+------+-----+---------+---------------
        id         | int(10) unsigned | NO   | PRI | NULL    | auto_increment
        product_id | int(10) unsigned | NO   | MUL | NULL    |
        user_id    | int(10) unsigned | NO   | MUL | NULL    |
        ip_address | varchar(15)      | NO   |     | NULL    |
        date       | datetime         | NO   |     | NULL    |

        5 rows in set (0.00 sec)
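
    One common direction for a cleanup (a hypothetical, untested rewrite against the schema above): replace the correlated subqueries with grouped joins, so each count is computed once per product rather than once per scanned row. The inner join on distros also takes over the distro_count > 0 filter:

        SELECT p.*, c.category_name AS category,
               d.cnt            AS distro_count,
               COALESCE(dl.cnt, 0) AS true_downloads,
               COALESCE(v.cnt, 0)  AS true_views
        FROM products p
        JOIN categories c ON p.category_id = c.category_id
        JOIN (SELECT product_id, COUNT(*) AS cnt
                FROM distros GROUP BY product_id) d
             ON d.product_id = p.product_id
        LEFT JOIN (SELECT product_id, COUNT(*) AS cnt
                     FROM downloads
                    WHERE WEEK(date) = WEEK(CURDATE())
                    GROUP BY product_id) dl
             ON dl.product_id = p.product_id
        LEFT JOIN (SELECT product_id, COUNT(*) AS cnt
                     FROM views
                    WHERE WEEK(date) = WEEK(CURDATE())
                    GROUP BY product_id) v
             ON v.product_id = p.product_id
        WHERE p.status = 'published'
          AND p.active = 1
        ORDER BY p.created_date DESC, true_views DESC
        LIMIT 0, 8;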

  • Join and sum not compatible matrices through data.table

    - by leodido
    My goal is to "sum" two not compatible matrices (matrices with different dimensions) using (and preserving) row and column names. I've figured out this approach: convert the matrices to data.table objects, join them, and then sum the column vectors. An example:

        > M1
          1 3 4 5 7 8
        1 0 0 1 0 0 0
        3 0 0 0 0 0 0
        4 1 0 0 0 0 0
        5 0 0 0 0 0 0
        7 0 0 0 0 1 0
        8 0 0 0 0 0 0

        > M2
          1 3 4 5 8
        1 0 0 1 0 0
        3 0 0 0 0 0
        4 1 0 0 0 0
        5 0 0 0 0 0
        8 0 0 0 0 0

        > M1 %ms% M2
          1 3 4 5 7 8
        1 0 0 2 0 0 0
        3 0 0 0 0 0 0
        4 2 0 0 0 0 0
        5 0 0 0 0 0 0
        7 0 0 0 0 1 0
        8 0 0 0 0 0 0

    This is my code:

        M1 <- matrix(c(0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0),
                     byrow = TRUE, ncol = 6)
        colnames(M1) <- c(1,3,4,5,7,8)
        M2 <- matrix(c(0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0),
                     byrow = TRUE, ncol = 5)
        colnames(M2) <- c(1,3,4,5,8)

        # to data.table objects
        DT1 <- data.table(M1, keep.rownames = TRUE, key = "rn")
        DT2 <- data.table(M2, keep.rownames = TRUE, key = "rn")

        # join and sum of common columns
        if (nrow(DT1) > nrow(DT2)) {
          A <- DT2[DT1, roll = TRUE]
          A[, list(X1 = X1 + X1.1, X3 = X3 + X3.1, X4 = X4 + X4.1,
                   X5 = X5 + X5.1, X7, X8 = X8 + X8.1), by = rn]
        }

    That outputs:

           rn X1 X3 X4 X5 X7 X8
        1:  1  0  0  2  0  0  0
        2:  3  0  0  0  0  0  0
        3:  4  2  0  0  0  0  0
        4:  5  0  0  0  0  0  0
        5:  7  0  0  0  0  1  0
        6:  8  0  0  0  0  0  0

    Then I can convert this data.table back to a matrix and fix the row and column names. The questions are:

    1. How do I generalize this procedure? I need a way to automatically create list(X1 = X1 + X1.1, X3 = X3 + X3.1, X4 = X4 + X4.1, X5 = X5 + X5.1, X7, X8 = X8 + X8.1), because I want to apply this function to matrices whose dimensions (and row/column names) are not known in advance. In summary, I need a merge procedure that behaves as described.
    2. Are there other strategies/implementations that achieve the same goal and are, at the same time, faster and generalized? (hoping that some data.table monster helps me)
    3. To what kind of join (inner, outer, etc.) is this procedure assimilable?

    Thanks in advance. P.S.: I'm using data.table version 1.8.2.

    EDIT - SOLUTIONS

    @Aaron's solution: no external libraries, only base R, and it works also on lists of matrices.

        add_matrices_1 <- function(...) {
          a <- list(...)
          cols <- sort(unique(unlist(lapply(a, colnames))))
          rows <- sort(unique(unlist(lapply(a, rownames))))
          out <- array(0, dim = c(length(rows), length(cols)), dimnames = list(rows, cols))
          for (m in a) out[rownames(m), colnames(m)] <- out[rownames(m), colnames(m)] + m
          out
        }

    @MadScone's solution: uses the reshape2 package, and works only on two matrices per call.

        add_matrices_2 <- function(m1, m2) {
          m <- acast(rbind(melt(M1), melt(M2)), Var1~Var2, fun.aggregate = sum)
          mn <- unique(colnames(m1), colnames(m2))
          rownames(m) <- mn
          colnames(m) <- mn
          m
        }

    BENCHMARK (100 runs with the microbenchmark package):

        Unit: microseconds
                    expr       min        lq    median        uq       max
        1 add_matrices_1   196.009  257.5865   282.027  291.2735   549.397
        2 add_matrices_2 13737.851 14697.9790 14864.778 16285.7650 25567.448

    No need to comment the benchmark: @Aaron's solution wins. I'll continue to investigate a similar solution for data.table objects, and will add other solutions eventually reported or discovered.

  • Please recommend me intermediate-to-advanced Python books to buy.

    - by anonnoir
    I'm in the final year, final semester of my law degree, and will be graduating very soon (April, to be specific). But before I begin practice, I plan to take two months off, purely for serious programming study. So I'm currently looking for some Python-related books, gauged intermediate to advanced, which are interesting (because of the subject matter itself) and possibly useful to my future line of work. I've identified two possible purchases at the moment:

    Natural Language Processing with Python. The law deals mostly with words, and I've quite a number of ideas as to where I might go with NLP: data extraction, summaries, client management systems linked with document templates, etc.

    Programming Collective Intelligence. This book fascinates me, because I've always liked the idea of machine learning (and I'm currently studying it on the side too, for fun). I'd like to build and play around with Web 2.0 applications, and who knows whether I can apply some of the things I learn to my legal work (e.g. playground experiments to determine how and under what circumstances judges might be biased, by forcing algorithms to pore through judgments and calculate similarities, etc.).

    Please feel free to criticize my current choices, but do at least offer or recommend other books that I should read in their place. My budget can handle four books, max. These books will be used heavily throughout the two months; I will be reading them back to back, absorbing the explanations given, and hacking away at their code. Also, the books themselves should satisfy two main criteria:

    Application. The book must teach how to solve problems. I like reading theory, but I want to build things and solve problems first. Even playful applications are fine, because games and experiments always have real-world applications sooner or later.

    Readability. I like reading technical books, no matter how difficult they are; I enjoy the effort and the feeling that you're learning something. But the book shouldn't contain code or explanations that are too cryptic or erratic. Even if it's difficult, the book's content should be accessible with focused reading.

    Note: I realize that I am somewhat of a beginner to the whole programming thing, so please don't put me down. But from experience, I think it's better to aim up and leave my comfort zone when learning new things, rather than to just remain stagnant the way I am. (At least the difficulty gives me focus: i.e. if a programmer can be that good, perhaps if I sustain my own efforts I too can be as good as him someday.) If anything, I'm also a very determined person, so two months of day-to-night intensive programming study with nothing else on my mind should, I think, give me a bit of a fighting chance to push my programming skills to a much higher level.

  • How to best propagate changes upwards a hierarchical structure for binding?

    - by H.B.
    If I have a folder-like structure that uses the composite design pattern and I bind the root folder to a TreeView, it would be quite useful if I could display certain properties that are being accumulated from the folder's contents. The question is: how do I best inform the folder that changes occurred in a child element, so that the accumulative properties get updated?

    The context in which I need this is a small RSS feed reader I am trying to make. These are the most important objects and aspects of my model.

    The composite interface:

        public interface IFeedComposite : INotifyPropertyChanged
        {
            string Title { get; set; }
            int UnreadFeedItemsCount { get; }
            ObservableCollection<FeedItem> FeedItems { get; }
        }

    FeedComposite (aka Folder):

        public class FeedComposite : BindableObject, IFeedComposite
        {
            private string title = "";
            public string Title
            {
                get { return title; }
                set { title = value; NotifyPropertyChanged("Title"); }
            }

            private ObservableCollection<IFeedComposite> children = new ObservableCollection<IFeedComposite>();
            public ObservableCollection<IFeedComposite> Children
            {
                get { return children; }
                set
                {
                    children.Clear();
                    foreach (IFeedComposite item in value)
                    {
                        children.Add(item);
                    }
                    NotifyPropertyChanged("Children");
                }
            }

            public FeedComposite() { }

            public FeedComposite(string title)
            {
                Title = title;
            }

            public ObservableCollection<FeedItem> FeedItems
            {
                get
                {
                    ObservableCollection<FeedItem> feedItems = new ObservableCollection<FeedItem>();
                    foreach (IFeedComposite child in Children)
                    {
                        foreach (FeedItem item in child.FeedItems)
                        {
                            feedItems.Add(item);
                        }
                    }
                    return feedItems;
                }
            }

            public int UnreadFeedItemsCount
            {
                get { return (from i in FeedItems where i.IsUnread select i).Count(); }
            }
        }

    Feed:

        public class Feed : BindableObject, IFeedComposite
        {
            private string url = "";
            public string Url
            {
                get { return url; }
                set { url = value; NotifyPropertyChanged("Url"); }
            }

            ...

            private ObservableCollection<FeedItem> feedItems = new ObservableCollection<FeedItem>();
            public ObservableCollection<FeedItem> FeedItems
            {
                get { return feedItems; }
                set
                {
                    feedItems.Clear();
                    foreach (FeedItem item in value)
                    {
                        AddFeedItem(item);
                    }
                    NotifyPropertyChanged("Items");
                }
            }

            public int UnreadFeedItemsCount
            {
                get { return (from i in FeedItems where i.IsUnread select i).Count(); }
            }

            public Feed() { }

            public Feed(string url)
            {
                Url = url;
            }
        }

    OK, so here's the thing: if I bind a TextBlock.Text to UnreadFeedItemsCount, there won't be simple notifications when an item is marked unread. So one of my approaches has been to handle the PropertyChanged event of every FeedItem, and if the IsUnread property has changed, have my Feed send a notification that the UnreadFeedItemsCount property has changed. With this approach I also need to handle all PropertyChanged events of all Feeds and FeedComposites in the Children of a FeedComposite. From the sound of it, it should be obvious that this is not such a very good idea: you need to be very careful that items never get added or removed to any collection without having attached the PropertyChanged event handler first, and things like that. Also: what do I do with the CollectionChanged events, which necessarily also cause a change in the sum of the unread items count? Sounds like more event-handling fun.

    It is such a mess. It would be great if anyone has an elegant solution to this, since I don't want the feed reader to end up as awful as my first attempt years ago, when I didn't even know about data binding...

    Read the article

  • How to pull an href link

    - by user1751494
    I am trying to pull a link from a page that is in a format I can't seem to find by simply Googling... it might be simple, but XPath is not my area of expertise. I am using C# and trying to pull the link and just write it to the console to figure out how to get it. Here is my C# code:

        var document = webGet.Load("http://classifieds.castanet.net/cat/vehicles/cars/0_-_4_years_old/");
        var browser = document.DocumentNode.SelectSingleNode("//a[starts-with(@href,'/details/')]");
        if (browser != null)
        {
            string htmlbody = browser.OuterHtml;
            Console.WriteLine(htmlbody);
        }

    The HTML code section is:

        <div class="last">&hellip;</div><a href="/cat/vehicles/cars/0_-_4_years_old/?p=13">13</a><a href="/cat/vehicles/cars/0_-_4_years_old/?p=2">&raquo;</a>
        <select name="sortby" class="sortby" onchange="doSort(this);">
        <option value="">Most Recent</option>
        <option value="of" >Oldest First</option>
        <option value="mw" >Most Views</option>
        <option value="lw" >Fewest Views</option>
        <option value="lp" >Lowest Price</option>
        <option value="hp" >Highest Price</option>
        </select><div style="clear:both"></div>
        </div>
        <br /><br /><br />
        <a href="/details/2008_vw_gti/1454282/" class="prod_container" >
        <h2>2008 VW GTi</h2>
        <div style="float:left; width:122px; z-index:1000">
        <div class="thumb"><img src="http://c.castanet.net/img/28/thumbs/1454282-1-1.jpg" border="0"/></div>
        <div class="clear"></div>
        mls
        </div>
        <div class="descr">
        The most fun car I have owned. Dolphin Grey, 4 door, Dual Climate control, DRG Transmission with paddle shift. Leather...
        </div>
        <div class="pdate">
        <p class="price">$19,000.00</p>
        <p class="date">Kelowna<br />Posted: Oct 15, 2:54 PM<br />Views: 349</p>
        </div>
        <div style="clear:both" ></div>
        <div class="seal"><img src="/images/bookmark.png" /></div>
        </a>
        <a href="/details/price_drop_gorgeous_rare_white_2009_honda_accord_ex-l_coupe/1447341/" class="prod_container" >
        <h2>PRICE DROP!!! Gorgeous Rare White 2009 Honda Accord EX-L Coupe </h2>
        <div style="float:left; width:122px; z-index:1000">
        <div class="thumb"><img src="http://c.castanet.net/img/28/thumbs/1447341-1-1.jpg" border="0"/></div>
        <div class="clear"></div>
        sun2010
        </div>
        <div class="descr">

    The link I'm trying to get is the "/details/2008_vw_gti/1454282/" part. Thanks
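    For reference, SelectSingleNode returns only the first match. A hedged sketch of the usual HtmlAgilityPack pattern for this (assuming webGet is an HtmlWeb, as the snippet implies): SelectNodes gathers every matching anchor, and GetAttributeValue extracts just the href instead of the whole OuterHtml:

        using System;
        using HtmlAgilityPack;

        class LinkScraper
        {
            static void Main()
            {
                var webGet = new HtmlWeb();
                var document = webGet.Load("http://classifieds.castanet.net/cat/vehicles/cars/0_-_4_years_old/");

                // SelectNodes returns every matching anchor, or null when
                // nothing matches, so the null check matters.
                var links = document.DocumentNode.SelectNodes("//a[starts-with(@href,'/details/')]");
                if (links != null)
                {
                    // Print just the href attribute of each listing link.
                    foreach (var link in links)
                        Console.WriteLine(link.GetAttributeValue("href", string.Empty));
                }
            }
        }

    Run against the page above, the first line printed should be /details/2008_vw_gti/1454282/, which is the link being asked for.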

    Read the article

  • Const-correctness semantics in C++

    - by thirtythreeforty
    For fun and profit™, I'm writing a trie class in C++ (using the C++11 standard). My trie<T> has an iterator, trie<T>::iterator. (They're all actually functionally const_iterators, because you cannot modify a trie's value_type.) The iterator's class declaration looks partially like this:

        template<typename T>
        class trie<T>::iterator : public std::iterator<std::bidirectional_iterator_tag, T>
        {
            friend class trie<T>;

            struct state
            {
                state(const trie<T>* const node,
                      const typename std::vector<std::pair<typename T::value_type,
                          std::unique_ptr<trie<T>>>>::const_iterator& node_map_it)
                    : node{node}, node_map_it{node_map_it} {}

                // This pointer is to const data:
                const trie<T>* node;
                typename std::vector<std::pair<typename T::value_type,
                    std::unique_ptr<trie<T>>>>::const_iterator node_map_it;
            };

        public:
            typedef const T value_type;

            iterator() = default;
            iterator(const trie<T>* node)
            {
                parents.emplace(node, node->children.cbegin());
                // ...
            }
            // ...

        private:
            std::stack<state> parents;
            // ...
        };

    Notice that the node pointer is declared const. This is because (in my mind) the iterator should not be modifying the node that it points to; it is just an iterator. Now, elsewhere in my main trie<T> class, I have an erase function that has a common STL signature: it takes an iterator to data to erase (and returns an iterator to the next object).

        template<typename T>
        typename trie<T>::iterator trie<T>::erase(const_iterator it)
        {
            // ...
            // Cannot modify a const object!
            it.parents.top().node->is_leaf = false;
            // ...
        }

    The compiler complains because the node pointer is read-only! The erase function definitely should modify the trie that the iterator points to, even though the iterator shouldn't. So, I have two questions:

    1. Should iterator's constructors be public? trie<T> has the necessary begin() and end() members, and of course trie<T>::iterator and trie<T> are mutual friends, but I don't know what the convention is. Making them private would solve a lot of the angst I'm having about removing the const "promise" from the iterator's constructor.

    2. What are the correct const semantics/conventions regarding the iterator and its node pointer here? Nobody has ever explained this to me, and I can't find any tutorials or articles on the Web. This is probably the more important question, but it does require a good deal of planning and proper implementation. I suppose it could be circumvented by just implementing 1, but it's the principle of the thing!

    Read the article

  • Calculate the number of ways to roll a certain number

    - by helloworld
    I'm a high school Computer Science student, and today I was given this problem:

    Program Description: There is a belief among dice players that in throwing three dice a ten is easier to get than a nine. Can you write a program that proves or disproves this belief? Have the computer compute all the possible ways three dice can be thrown: 1 + 1 + 1, 1 + 1 + 2, 1 + 1 + 3, etc. Add up each of these possibilities and see how many give nine as the result and how many give ten. If more give ten, then the belief is proven.

    I quickly worked out a brute force solution, as such:

        int sum, tens, nines;
        tens = nines = 0;
        for (int i = 1; i <= 6; i++) {
            for (int j = 1; j <= 6; j++) {
                for (int k = 1; k <= 6; k++) {
                    sum = i + j + k;
                    // Ternary operators are fun!
                    tens += ((sum == 10) ? 1 : 0);
                    nines += ((sum == 9) ? 1 : 0);
                }
            }
        }
        System.out.println("There are " + tens + " ways to roll a 10");
        System.out.println("There are " + nines + " ways to roll a 9");

    This works just fine, and a brute force solution is what the teacher wanted us to do. However, it doesn't scale, and I am trying to find an algorithm that can calculate the number of ways to roll n dice to get a specific number. Therefore, I started generating the number of ways to get each sum with n dice. With 1 die, there is obviously 1 solution for each sum. I then calculated, through brute force, the combinations with 2 and 3 dice. These are the results for two:

        There are 1 ways to roll a 2
        There are 2 ways to roll a 3
        There are 3 ways to roll a 4
        There are 4 ways to roll a 5
        There are 5 ways to roll a 6
        There are 6 ways to roll a 7
        There are 5 ways to roll a 8
        There are 4 ways to roll a 9
        There are 3 ways to roll a 10
        There are 2 ways to roll a 11
        There are 1 ways to roll a 12

    That looks straightforward enough; it can be calculated with a simple linear absolute value function. But then things start getting trickier. With 3:

        There are 1 ways to roll a 3
        There are 3 ways to roll a 4
        There are 6 ways to roll a 5
        There are 10 ways to roll a 6
        There are 15 ways to roll a 7
        There are 21 ways to roll a 8
        There are 25 ways to roll a 9
        There are 27 ways to roll a 10
        There are 27 ways to roll a 11
        There are 25 ways to roll a 12
        There are 21 ways to roll a 13
        There are 15 ways to roll a 14
        There are 10 ways to roll a 15
        There are 6 ways to roll a 16
        There are 3 ways to roll a 17
        There are 1 ways to roll a 18

    So I look at that and I think: Cool, triangular numbers! However, then I notice those pesky 25s and 27s. So it's obviously not triangular numbers, but still some polynomial expansion, since it's symmetric. So I take to Google, and I come across this page that goes into some detail about how to do this with math. It is fairly easy (albeit long) to find this by hand using repeated derivatives or expansion, but it would be much harder for me to program. I didn't quite understand the second and third answers, since I have never encountered that notation or those concepts in my math studies before. Could someone please explain how I could write a program to do this, or explain the solutions given on that page, for my own understanding of combinatorics?

    EDIT: I'm looking for a mathematical way to solve this that gives an exact theoretical number, not by simulating dice.
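    For an exact count that scales past three dice, there are two standard routes. The closed form is the inclusion–exclusion sum: ways(s, n) = Σ (−1)^k · C(n, k) · C(s − 6k − 1, n − 1), with k running from 0 to ⌊(s − n)/6⌋; for n = 3 it gives 27 for s = 10 and 25 for s = 9, matching the lists above. Easier to program is dynamic programming: the counts for d dice come from shifting and summing the counts for d − 1 dice. Here is a hedged sketch of the latter, written in C# rather than the question's Java, with all names my own:

        using System;

        class DiceWays
        {
            // ways[d, s] = number of ways to roll sum s with d six-sided dice.
            // Each row is built from the previous one, so the result is exact
            // and no dice are simulated.
            static long Ways(int dice, int target)
            {
                if (target < dice || target > 6 * dice) return 0;
                var ways = new long[dice + 1, 6 * dice + 1];
                ways[0, 0] = 1; // one way to roll nothing: the empty roll
                for (int d = 1; d <= dice; d++)
                    for (int s = d; s <= 6 * d; s++)
                        for (int face = 1; face <= 6 && face <= s; face++)
                            ways[d, s] += ways[d - 1, s - face];
                return ways[dice, target];
            }

            static void Main()
            {
                Console.WriteLine("There are " + Ways(3, 10) + " ways to roll a 10"); // 27
                Console.WriteLine("There are " + Ways(3, 9) + " ways to roll a 9");   // 25
            }
        }

    The table costs O(n² · 36) work for n dice, so it answers the n-dice question directly without the nested loops growing with n.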

    Read the article

  • Remotely Schedule and Stream Recorded TV in Windows 7 Media Center

    - by DigitalGeekery
    Have you ever been away from home and suddenly realized you forgot to record your favorite program? Now Windows 7 Media Center users can schedule recordings remotely from their phones or mobile devices with Remote Potato.

    How it Works

    Remote Potato installs server software on the host computer running Windows 7 Media Center. Once the software is installed, we'll need to do some port forwarding on the router and set up an optional dynamic DNS address. When setup is complete, we will access the application through a web-based interface. Silverlight is required for streaming recorded TV, but scheduling recordings can be done through an HTML interface.

    Installing Remote Potato

    Download and install Remote Potato on the Media Center PC. (See the download link below.) If you plan to stream any recorded TV, you'll also want to install the streaming pack located on the same page. It isn't required to stream all shows, only shows that require the AC3 audio codec. Click Yes to allow Remote Potato to add rules to the Windows Firewall for remote access. You'll likely need to accept a few UAC prompts. When notified that the rules were added, click OK. Remote Potato will then prompt you to allow administrator privileges to reserve a URL for its web server. Click Yes. The Remote Potato server will start. Click on the configuration button at the right to reveal the settings tabs. On the General tab, you'll have the option to run Remote Potato on startup and minimized in the System Tray. If you're running Media Center on a dedicated HTPC, you'll probably want to enable both startup options.

    Forwarding Ports on Your Router

    You'll need to forward a couple of ports on your router. By default, these will be ports 9080 and 9081. In this example we're using a Linksys WRT54GL router; however, the steps for port forwarding will vary from router to router. On the Linksys configuration page, click on the Applications & Gaming tab, and then the Port Range Forward tab. Under Application, type in a name of your choosing. In both the Start and End boxes, type the port number 9080. Enter the local IP address of your Media Center computer in the IP address column. Click the check box under Enable. Repeat the process on the next line, but this time use port 9081. When finished, click the Save Settings button. Note: it's highly recommended that you configure the home computer running Media Center & Remote Potato with a static IP address.

    Find your IP Address

    You'll need to find the IP address assigned to your router by your ISP. There are many ways to do this, but a quick and easy way is to visit a site like checkip.dyndns.org (link available below). The current external IP address of your router will be displayed in the browser.

    Dynamic DNS

    This is an optional step, but it's highly recommended. Many routers, such as the Linksys WRT54GL we are using, support Dynamic DNS (DDNS). What Dynamic DNS allows you to do is associate your home router's external IP address with a domain name. Every time your home router is assigned a new IP address by your ISP, the domain name is updated to point to your new IP address. Remote Potato's user interface is accessed over the Internet by connecting to your router's IP address followed by a colon and the port number.
    (Ex: XXX.XXX.XXX.XXX:9080.) Instead of constantly having to look up and remember an IP address, you can use DDNS along with a third-party provider like DynDNS.com to sign up for a free domain name and configure it to be updated each time your router is assigned a new IP address. Go to the DynDNS.com website (see the link at the end of the article) and sign up for a free domain name. You'll need to register and confirm by email. Once you've signed in and selected your domain name, click Activate Services. You'll get a confirmation message that your domain name has been activated. On the Linksys WRT54GL, click on the Setup tab and then DDNS. Select DynDNS.org (or TZO.com if you prefer to use their service) from the drop-down list. With DynDNS, you'll need to fill in the username and password you signed up with at the DynDNS website and the hostname you chose. Note: you can connect over your local network with the IP address of the computer running Remote Potato followed by a colon and the port number. Ex: 192.168.1.2:9080

    Logging in to Remote Potato and Recording a Show

    Once you connect, you'll see the start page. To view the TV listings, click on TV Guide. You'll then see your guide listings. There are a few ways to navigate the listings. At the top left, you can click on any of the preset time buttons to jump to the listings at that time of the day. Click on the arrows to the right and left of the day and date at the top center to proceed to the previous or next day. Or jump to a specific day with the day and date buttons at the top right. To set up a recording, click on a program. You can choose to record the individual show or the entire series by clicking on Record Show or Record Series.

    Remote Potato on Mobile Devices

    Perhaps the coolest feature of Remote Potato is the ability to schedule recordings from your phone or mobile device. Note: on any devices or computers without Silverlight, you will be prompted to view the HTML page. Select Browse Listings, then select your program to record. In the Program Details, select Record Show to record the single episode or Record Series to record all instances of the series. You will then see a red dot on the program listing to indicate that the show is scheduled for recording.

    Streaming Recorded TV

    Click on Recorded TV from the home screen to access your previously recorded TV programs. Click on the selection you wish to stream, then click on Play. If you receive an error message, you may need to install the streaming pack for Remote Potato; it is found on the same download page as the installation files. (See the link below.) The Begin from slider allows you to start playback from the start (the default) or from a different point in the program by moving the slider. The Quality (bitrate) setting allows you to choose the quality of the playback. We found the video quality on the Normal setting to be pretty lousy, and Low was just pointless. High was the best overall viewing experience, as it provided smooth, quality video playback. We experienced significant stuttering during playback using the Ultra High setting. Click Start when you are ready to begin. When playback begins you'll see a slider at the top right. Move the slider left or right to increase or decrease the size of the video. There's also a button to switch to full screen. Media Center users who travel frequently or are always on the go will likely find Remote Potato to be a blessing. Since being released earlier this year, updates for Remote Potato have come fast and furious.
    The latest beta release includes support for streaming music and photos. If you like those nice network TV logos, check out our article on adding TV channel logos to Windows Media Center.

    Downloads and Links
    Download Remote Potato and Streaming Pack
    Find your IP address
    Sign Up for a Domain Name at DynDNS.com

    Read the article

  • Upgrade Your Existing BI Publisher 11g (11.1.1.3) to 11.1.1.5

    - by Kan Nishida
    It’s already more than a month now since BI Publisher 11.1.1.5 was released at the beginning of May. Have you already tried out many of the great new features? If you are already running on the first version of BI Publisher 11g (11.1.1.3), you might wonder how to upgrade your existing BI Publisher to the 11.1.1.5 version. There are two ways to do this: one is ‘Out-Place’ and the other is ‘In-Place’.

    The ‘Out-Place’ upgrade is quite simple. Basically, you install the whole BI suite, or just BI Publisher standalone, R11.1.1.5 at a different location, then switch the catalog to the existing one so that all the reports will be there in the new 11.1.1.5 environment. But sometimes things are not that simple: you might have custom applications or configuration in the original environment that you want to keep in the upgraded environment. For such scenarios there is the ‘In-Place’ upgrade, which overwrites only the parts relevant to BI and BI Publisher on top of the original environment, and that’s what I’m going to talk about today.

    Here are the basic steps of the ‘In-Place’ upgrade:

    1. Upgrade WebLogic Server to 10.3.5
    2. Upgrade BI System to 11.1.1.5
    3. Upgrade Database Schema
    4. Re-register BI Components
    5. Upgrade FMW (Fusion Middleware) Configuration
    6. Upgrade BI Catalog

    There is a section that covers this upgrade from 11.1.1.3 to 11.1.1.5 as part of the overall upgrade document, but I hope my blog post summarizes it and makes it simple for you to cover only what’s necessary.

    Upgrade Document: http://download.oracle.com/docs/cd/E21764_01/bi.1111/e16452/bi_plan.htm#BABECJJH

    Before You Start: Stop the BI System and Back Up

    I can’t emphasize this enough: before you start, PLEASE make sure you take a backup of the existing environment first. You want to stop all WebLogic Servers, Node Manager, OPMN, and OPMN-managed system components that are part of your Oracle BI domains. If you’re on Windows you can do this by simply selecting the ‘Stop BI Services’ menu item. Then back up the whole system.

    Upgrade WebLogic Server to 10.3.5

    Download the WebLogic Server 10.3.5 Upgrade Installer. With a BI 11.1.1.3 installation your WebLogic Server (WLS) is 10.3.3, and you need to upgrade it to 10.3.5 before upgrading the BI part. In order to upgrade, you will need the 10.3.5 upgrade version of WLS, which you can download from our support web site (https://support.oracle.com). You can find detailed information about the installation and the patch numbers for the WLS upgrade installer in the document above. Just as a shortcut, if you are running on Windows or Linux (x86), here is the patch number for your platform. Windows 32-bit: 12395517; Linux: 12395517.

    1. After unzipping the downloaded file, launch wls1035_upgrade_win32.exe if you’re on Windows.
    2. Accept all the default values, keep clicking ‘Next’ until the end, and start the upgrade.

    Once the upgrade process completes you’ll see the confirmation window. Now let’s move to the BI upgrade.

    Upgrade the BI Platform to 11.1.1.5 with a Software Only Install

    Download BI 11.1.1.5. You can download the 11.1.1.5 version from our OTN page for your evaluation or development. For production use it’s recommended to download from eDelivery.

    1. Launch the installer by double-clicking ‘setup.exe’ (on Windows).
    2. Select the ‘Software Only Install’ option.
    3. Select your original Oracle Home, where you installed BI 11.1.1.3.
    4. Click the ‘Install’ button to start the installation.

    And now the software part of BI has been upgraded to 11.1.1.5. Next, the database schema upgrade.

    Upgrade the Database Schema with Patch Assistant

    You need to upgrade the BIPLATFORM and MDS schemas. You can use the Patch Assistant utility to do this. Here is an example assuming you created the schemas with the ‘DEV’ prefix; otherwise, change it to yours accordingly.

    Upgrade the BIPLATFORM schema (if you created this schema with the DEV_ prefix):

        psa.bat -dbConnectString localhost:1521:orcl -dbaUserName sys -schemaUserName DEV_BIPLATFORM

    Upgrade the MDS schema (if you created this schema with the DEV_ prefix):

        psa.bat -dbConnectString localhost:1521:orcl -dbaUserName sys -schemaUserName DEV_MDS

    Re-register the BI System Components

    Now you need to re-register your BI system components, such as BI Server and BI Presentation Server, with the Fusion Middleware system. You can do this by running the ‘upgradenonj2eeapp.bat (or .sh)’ command, which can be found at %ORACLE_HOME%/opmn/bin. Before you run it, you need to start the WLS Server and make sure your WLS environment is not locked. If it’s locked, you need to release the system from the Fusion Middleware console before you run the following command. Here is the syntax for the ‘upgradenonj2eeapp.bat (or .sh)’ command:

        upgradenonj2eeapp.bat
            -oracleInstance Instance_Home_Location
            -adminHost WebLogic_Server_Host_Name
            -adminPort administration_server_port_number
            -adminUsername administration_server_user

    And here is an example:

        cd %BI_HOME%\opmn\bin
        upgradenonj2eeapp.bat -oracleInstance C:\biee11\instances\instance1 -adminHost localhost -adminPort 7001 -adminUsername weblogic

    Upgrade the Fusion Middleware Configuration

    There are a couple of things in the Fusion Middleware configuration that need to be upgraded for the BI system to work. Here is the list of components to upgrade:

    - Upgrade Shared Library (JRF)
    - Upgrade Fusion Middleware Security (OPSS)
    - Upgrade Code Grants
    - Upgrade OWSM Policy Repository

    Before moving forward, you need to stop the WebLogic Server. Here is an example:

        cd %MW_HOME%\user_projects\domains\bifoundation_domain\bin
        stopWebLogic.cmd

    And let’s start with ‘Upgrade Shared Library (JRF)’.

    Upgrade Shared Library (JRF)

    You can use the upgradeJRF() WLST command to upgrade the shared libraries in your domain. Before you do this, you need to stop all running instances, Managed Servers, Administration Server, and Node Manager in the domain. Here is an example of the ‘upgradeJRF()’ command:

        cd %MW_HOME%\oracle_common\common\bin
        wlst.cmd
        upgradeJRF('C:/biee11/user_projects/domains/bifoundation_domain')

    Upgrade Fusion Middleware Security (OPSS)

    This step upgrades the Fusion Middleware security piece. You can use the ‘upgradeOpss()’ WLST command. Here is the syntax for the command:

        upgradeOpss(jpsConfig="existing_jps_config_file", jaznData="system_jazn_data_file")

    The existing jps-config.xml file can be found under %DOMAIN_HOME%/config/fmwconfig/jps-config.xml, and the system-jazn-data file under %MW_HOME%/oracle_common/modules/oracle.jps_11.1.1/domain_config/system-jazn-data.xml. And here is an example:

        cd %MW_HOME%\oracle_common\common\bin
        wlst.cmd
        upgradeOpss(jpsConfig="c:/biee11/user_projects/domains/bifoundation_domain/config/fmwconfig/jps-config.xml", jaznData="c:/biee11/oracle_common/modules/oracle.jps_11.1.1/domain_config/system-jazn-data.xml")
        exit()

    Upgrade Code Grants for the Oracle BI Domain

    This is the last step of the Fusion Middleware platform upgrade task. You need to run the ‘bi-upgrade.py’ Python script to configure the code grants necessary to ensure that SSL works correctly for Oracle BI. However, even if you don’t use SSL, you still need to run this script. And if you have multiple BI domains (an Enterprise deployment), you need to run this on each domain. Here is an example:

        cd %MW_HOME%\oracle_common\common\bin
        wlst c:\biee11\Oracle_BI1\bin\bi-upgrade.py --bioraclehome c:\biee11\Oracle_BI1 --domainhome c:\biee11\user_projects\domains\bifoundation_domain

    Upgrade the OWSM Policy Repository

    This upgrades the OWSM (Oracle Web Service Manager) policy repository; you can use the WLST command ‘upgradeWSMPolicyRepository()’. In order to run this command, you need to have your WebLogic Server up and running. Here is an example:

        cd %MW_HOME%\user_projects\domains\bifoundation_domain\bin
        startWebLogic.cmd
        cd %MW_HOME%\oracle_common\common\bin
        wlst.cmd
        connect('weblogic','welcome1','t3://localhost:7001')
        upgradeWSMPolicyRepository()
        exit()

    Upgrade the BI Catalog

    This step is required only when your BI Publisher is integrated with BIEE; if your BI Publisher is deployed standalone, you don’t need to follow this step. Now, finally, you can upgrade the BI catalog. This won’t upgrade your BI Publisher reports themselves; it just upgrades some attribute information inside the catalog. Before you do this upgrade, make sure the BI system components are not running. You can check the status with the command below:

        opmnctl status

    You can do the upgrade by updating the configuration file ‘instanceconfig.xml’, which can be found at %BI_HOME%\instances\instance1\config\coreapplication_obips1, and changing the value of ‘UpgradeAndExit’ to ‘true’. Here is an example:

        <ps:Catalog xmlns:ps="oracle.bi.presentation.services/config/v1.1">
            <ps:UpgradeAndExit>true</ps:UpgradeAndExit>
        </ps:Catalog>

    After you make the change and save the file, you need to start the BI Presentation Server. This time you want to start only the BI Presentation Server instead of starting all the servers. You can use ‘opmnctl’ to do so. Here is an example:

        cd %ORACLE_INSTANCE%\bin
        opmnctl startproc ias-component=coreapplication_obips1

    This upgrades your BI Catalog to 11.1.1.5. After the catalog is updated, you can stop the BI Presentation Server so that you can modify the instanceconfig.xml file again to revert the UpgradeAndExit value back to ‘false’.

    Start Exploring BI Publisher 11.1.1.5

    After all the above steps, you can start all the BI services and access the same URL; now you have BI Publisher and/or BI 11.1.1.5 in your hands. Have fun exploring all the new features of R11.1.1.5!

    Read the article

  • When should I use Areas in TFS instead of Team Projects

    - by Martin Hinshelwood
    Well, it depends…. If you are a small company that creates a finite number of internal projects, then you will find it easier to create a single project for each of your products and have TFS do the heavy lifting with reporting, SharePoint sites and version control. But what if you are not…

    Update 9th March 2010: Michael Fourie gave me some feedback, which I have integrated. Ed Blankenship (via @edblankenship) offered encouragement and a nice quote. Ewald Hofman gave me a couple of Cons, and maybe a few more soon. Ewald’s company, Avanade, currently uses Areas, but it looks like the manual management is getting too much and the project is getting cluttered.

    What if you are likely to have hundreds of projects, possibly with a multitude of internal and external projects? You might have 1 project for a customer, or 10. This is the situation most consultancies find themselves in, and thus they need a more sustainable and maintainable option. What I am advocating is that we should have one “Team Project” per customer, and use areas to create “sub-projects” within that single “Team Project”.

    "What you describe is what we generally do internally and what we recommend. We make very heavy use of area path to categorize the work within a larger project." – Brian Harry, Microsoft Technical Fellow & Product Unit Manager for Team Foundation Server

    "We tend to use areas to segregate multiple projects in the same team project and it works well." – Tiago Pascoal, Visual Studio ALM MVP

    "In general, I believe this approach provides consistency [to multi-product engagements] and lowers the administration and maintenance costs. All good." – Michael Fourie, Visual Studio ALM MVP

    “@MrHinsh BTW, I'm very much a fan of very large, if not huge, team projects in TFS. Just FYI :) Use Areas & Iterations.” – Ed Blankenship, Visual Studio ALM MVP

    This would mean that SSW would have a single Team Project called “SSW” that contains all of our internal projects, and consequently all of the Areas and Iterations move down one level in the hierarchy to accommodate this. Where we would have had “\SSW\Sprint 1” we now have “\SSW\SqlDeploy\Sprint1”, with “SqlDeploy” being our internal project. At the moment SSW has over 70 internal projects and more than 170 total projects in TFS. This method has long-term benefits that help to simplify the support model for companies that often have limited internal support time and many projects. But there are implications, as TFS does not provide this model “out-of-the-box”. These implications stretch across Areas, Iterations, Queries, the Project Portal and Version Control.

    Michael made a good comment: “I agree with your approach, assuming that in a multi-product engagement with a client, they are happy to adopt the same process template across all products. If they are not, then it’ll either be easy to convince them or there is a valid reason for having a different template.” – Michael Fourie, Visual Studio ALM MVP

    At SSW we have a standard template that we use, and it is applied across the board to all of our projects. We even apply any changes to the core process template to all of our existing projects as well. If you have multiple projects for the same clients on multiple templates and you want to keep it that way, then this approach will not work for you. However, if you want to standardise as we have at SSW, then this approach may benefit you as well.

    Implications around Areas

    Areas should be used for topological classification/isolation of work items. You can think of this as architecture areas, organisational areas or even the main features of your application. In our scenario there is an additional top-level item that represents the project/product that we want to chop our Team Project into.

    Figure: Creating a sub-area to represent a product/project is easy.

        <teamproject>
        <teamproject>\<Functional Area/module whatever>

    Becomes:

        <teamproject>
        <teamproject>\<ProjectName>\
        <teamproject>\<ProjectName>\<Functional Area/module whatever>

    Implications around Iterations

    Iterations should be used for chronological classification/isolation of work items. This could include isolated time boxes, milestones or release timelines, and really depends on the logical flow of your project or projects. Due to the new level in Area, we need to add the same level to Iteration. This is primarily because it is unlikely that the sprints in each of your projects/products will start and end at the same time. This is just a reality of managing multiple projects.

    Figure: Adding the same Area value to Iteration as the top-level item adds flexibility to Iteration.

        <teamproject>\Sprint 1
    or
        <teamproject>\Release 1\Sprint 1

    Becomes:

        <teamproject>\<ProjectName>\Sprint 1
    or
        <teamproject>\<ProjectName>\Release 1\Sprint 1

    Implications around Queries

    Queries are used to filter your work items based on a specified level of granularity. There are a number of queries that are built into a project created using the MSF Agile 5.0 template, but we now have multiple projects, and it would be a pain to have to edit all of the queries every time we changed project; that would also only allow one team to work on one project at a time.

    Figure: The queries that are created in a normal MSF Agile 5.0 project do not quite suit our new needs.

    In order for project contributors to be able to query based on their project, we need a couple of things. The first thing I did was to create an “_Area Template” folder that has a copy of the project layout with all the queries set up to filter based on the “_Area Template” Area and the “_Sprint template” you can see in the Area and Iteration views.

    Figure: The template is currently easy to drag and drop, but you then need to edit the queries to point at the right Area and Iteration. This needs a tool.

    I then created an “Areas” folder to hold all of the area-specific queries. So, when you go to create a new TFS sub-project you just drag “_Area Template” while holding “Ctrl” and drop it onto “Areas”. There is a little setup here. That said, I managed it in around 10 minutes, which is not so bad, and I can imagine it being quite easy to build a tool to create these queries.

    Figure: These new queries can be configured in around 10 minutes, which includes setting up the Area and Iteration as well.

    Version Control

    What about your source code? Well, that is the easiest of the lot. Just create a sub-folder for each of your projects/products.

    Figure: Creating sub-folders in source control is as easy as “Right click | Create new folder”.

        <teamproject>\DEV\Main\

    Becomes:

        <teamproject>\<ProjectName>\DEV\Main\

    Conclusion

    I think it is up to each company to make a call on how to configure Team Projects; it depends completely on how many projects/products you are going to have for each customer, including yourself. If we decide to utilise this route, it will require some configuration to get our 170+ projects into this format, and I will probably be writing some tools to help.

    Pros

    - You only have one project to upgrade when a process template changes – After going through an upgrade of over 170 projects prior to the changes in the RC, I can tell you that that many projects is no fun.
    - Standardises your Process Template – You will always have the same process implementation across projects/products, without exception.
    - You get tighter control over the permissions – Yes, you can do this on a standard Team Project, but it gets a lot easier with practice.
    - You can “move” work items from one “product” to another – Have we not always wanted to do that?
    - You can rename your projects – Wahoo: everyone wants to do this; now you can.
    - One set of Reporting Services reports to manage – You set an area and iteration to run reports anyway, so you may as well set both.
    - Simplified check-in policies – There is only one set of check-in policies per client. This simplifies administration of policies.
    - Simplified alerts – As alerts are applied across multiple projects, this simplifies your alert rules per client.

    Cons

    All of these cons could be mitigated by a custom tool that helps automate the creation of “sub-projects” within Team Projects. This custom tool could create areas, iterations, permissions, SharePoint sites and queries. It just does not exist yet :)

    - You need to configure the Areas and Iterations.
    - You need to configure the permissions.
    - You may need to configure sub-sites for SharePoint (depends on your requirements) – If you have two projects/products in the same Team Project, then you will not see the burndown for each one out-of-the-box, but rather a cumulative one for the Team Project. This is not really that much of a problem, as you would have to configure your burndown graphs for your current iteration anyway. Note: when you create a sub-site to a TFS-linked portal, it will inherit the settings of its parent site :) This is fantastic, as it means that you can easily create sub-sites and then set the Area and Iteration path in each of the reports to the correct one.
    - Every team wants their own customization (via Ewald Hofman) – Small teams of 2 people and teams of 30, or even outsourcing, need their own process; you cannot allow that, because everybody gets the same work item types. Note: luckily at SSW this is not a problem, as our template is standardised across all projects and customers.
    - Large list of builds (via Ewald Hofman) – As the build list in Team Explorer is just a flat list, it can get very cluttered. Note: I would mitigate this by removing any build that has not been run in over 30 days. The build template and workflow will still be available in version control, but it will clean up the list.

    Feedback

    Now that I have explained this method, what do you think? What other pros and cons can you see? What do you think of this approach? Will you be using it? What tools would you like to support you?

    Technorati Tags: Visual Studio ALM, TFS Administration, TFS, Team Foundation Server, Project Planning, TFS Customisation

    Read the article

  • Turn Photos and Home Videos into Movies with Windows Live Movie Maker

    - by DigitalGeekery
    Are you looking for an easy way to turn your digital photos and videos into a movie or slideshow? Today we’ll take a detailed look at how to use Windows Live Movie Maker.

    Installation

    Windows Live Movie Maker comes bundled as part of the Windows Live Essentials suite (link below). However, you don’t have to install any of the programs you may not want. Take notice of the “You’re almost done” screen: before clicking Continue, be sure to uncheck the boxes to set your search provider and homepage.

    Adding Pictures and Videos

    Open Windows Live Movie Maker. You can add videos or photos by simply dragging and dropping them onto the storyboard area. You can also click on the storyboard area, or on the Add videos and photos button on the Home tab, to browse for videos and photos. Windows Live Movie Maker supports most video, image, and audio file types. Select your files and click Open to add them to Windows Live Movie Maker. By default WLMM doesn’t allow you to add files from network locations, so check out our article on how to add network support to Windows Live Movie Maker if the files you want to add are on a network drive.

    Layout

    All of your added clips will appear in the storyboard area on the right, while the currently selected clip will appear in the preview window on the left. You can adjust the size of the two areas by clicking and dragging the dividing line in the middle. Make the clips on the storyboard bigger or smaller by clicking on the thumbnail size icon. The slider at the lower right adjusts the zoom time scale.

    Previewing your Movie

    At any time, you can play back your movie and preview how it will look in the preview window by pressing the space bar, or by pushing the play button under the preview window. You can also manually move the preview bar slider across the storyboard to view the clips as the video progresses.

    Adjusting Clips on the Storyboard

    You can click and drag clips on the storyboard to change the order in which the photos and videos appear.

    Adding Music

    Nothing brings a movie to life quite like music. Selecting Add music will add your music to the beginning of the movie. Select Add music at the current point to add it at the current location of your preview bar slider, then browse for your music clip. WLMM supports many common audio files such as WAV, MP3, M4A, WMA, AIFF, and ASF. The music clip will appear above the video/photo clips on the storyboard. You can change the location of music clips by clicking and dragging them to a different location on the storyboard.

    Add Titles, Captions, and Credits

    To add a title screen to your movie, click the Title button on the Home tab. Type your title directly into the text box on the preview screen. The title will be placed at the location of the preview slider on the storyboard; however, you can change the location by clicking and dragging the title to other areas of the storyboard. On the Format tab there are a handful of text settings. You can change the font, color, size, alignment, and transparency. The Adjust group allows you to change the background color, edit the text, and set the length of time the title will appear in the movie. The Effects group on the Format tab allows you to select an effect for your title screen. By hovering your cursor over each option, you will get a live preview of how each effect will appear in the preview window. Click to apply any of the effects.

    For captions, select where you want your caption to appear with the preview slider on the storyboard, then click the captions button on the Home tab. Just like the title, you type your caption directly into the text box on the preview screen, and you can make any adjustments using the Font and Paragraph, Adjust, and Effects groups above. Credits are done the same way as titles and captions, except they are automatically placed at the end of the movie.

    Transitions

    Go to the Animations tab on the ribbon to apply transitions. Select a clip from the storyboard and hover over one of the transitions to see it in the preview window. Click on the transition to apply it to the clip. You can apply transitions to clips separately, or hold down the Ctrl button while clicking to select multiple clips to which to apply the same transition. Pan and zoom effects are also located on the Animations tab, but can be applied to photos only. Like transitions, you can apply them individually to a clip, or hold down the Ctrl button while clicking to select multiple clips to which to apply the same pan and zoom effect. Once applied, you can adjust the duration of the transitions and pan and zoom effects. You can also click the dropdown for additional transitions or effects.

    Visual Effects

    Similar to pan and zoom and transitions, you can apply a variety of visual effects to individual or multiple clips.

    Editing Video and Music

    Note: this does not actually edit the original video you imported into your Windows Live Movie Maker project, only how it appears in your WLMM project. There are some very basic editing tools located on the Home tab. The Rotate left and Rotate right buttons will adjust any clip that may be oriented incorrectly. The Fit to music button will automatically adjust the duration of the photos (if you have any in your project) to fit the length of the music in your movie. Audio mix allows you to change the volume level.

    You can also do some slightly more advanced editing from the Edit tab. Select the video clip on the storyboard and click the Trim tool to edit or remove portions of a video clip. Next, click and drag the sliders in the preview window to select the area you wish to keep: the area outside the sliders is trimmed from the movie, while the area inside is the section that is kept. You can also adjust the Start and End points manually on the ribbon. When you are finished, click Save trim. You can also split your video clips. Move the preview slider to the location in the video clip where you’d like to split it, and select Split. Your video will be split into separate sections. Now you can apply different effects, or move them to different locations on the storyboard.

    Editing Music Clips

    Select the music clip on the storyboard and then the Options tab on the ribbon. You can adjust the music volume by moving the slider right and left. You can also choose to have your music clip fade in or out at the beginning and end of your movie. From the Fade in and Fade out dropdowns, select None, Slow, Medium, or Fast. To adjust the sound of your audio clips, click on the Edit tab, select the Video volume button, and adjust the slider. Move it all the way to the left to mute any background noise in your video clips.

    AutoMovie

    As you have seen, Windows Live Movie Maker allows you to add effects, transitions, titles, and more.
    If you don’t want to do any of that stuff yourself, AutoMovie will automatically add a title, credits, cross-fade transitions between items, and pan and zoom effects to photos, and fit your project to the music. Just select the AutoMovie button on the Home tab. You can go from zero to movie in literally a couple of minutes.

    Uploading to YouTube

    You can share your video on YouTube directly from Windows Live Movie Maker. Click on the YouTube icon in the Sharing group on the Home tab. You’ll be prompted for your YouTube username and password. Fill in the details about your movie and click Publish. The movie will be converted to WMV before being uploaded to YouTube. As soon as the YouTube conversion is complete, your new movie is live and ready to be viewed.

    Saving your Movie as a Video File

    Select the icon at the top left, then select Save movie. As you hover your mouse over each of the options, you will see the output display size, aspect ratio, and estimated file size per minute of video. All of these settings will output your movie as a WMV file. (Unfortunately, the only option is to save a movie as a WMV file.) The only difference is how they are encoded, based on preset common settings. The Burn to DVD option also outputs a WMV file, but then opens Windows DVD Maker and walks you through the process of creating and burning a DVD. If you choose the Burn to DVD option, close this window when the WMV file conversion is complete, and Windows DVD Maker will prompt you to begin. When your movie is finished, it’s time to relax and enjoy.

    Conclusion

    Windows Live Movie Maker makes it easy for the average person to quickly churn out nice-looking movies and slideshows from their own pictures and videos. However, long-time users of previous editions (formerly called Windows Movie Maker) will likely be disappointed by some features missing in Windows Live Movie Maker that existed in earlier editions. Looking for details on burning your new project to DVD? Check out our article on how to create and author DVDs with Windows DVD Maker.

    Download Windows Live Movie Maker

    Read the article

  • ASP.NET GZip Encoding Caveats

    - by Rick Strahl
    GZip encoding in ASP.NET is pretty easy to accomplish using the built-in GZipStream and DeflateStream classes and applying them to the Response.Filter property. While applying GZip and Deflate behavior is pretty easy, there are a few caveats that you have to watch out for, as I found out today for myself with an application that was throwing up some garbage data. But before looking at caveats, let’s review GZip implementation for ASP.NET.

    ASP.NET GZip/Deflate Basics

    Response filters basically are applied to the Response.OutputStream and transform it as data is written to it through the ASP.NET Response object. So a Response.Write eventually gets written into the output stream, which, if a filter is present, is also written through the filter stream’s interface. To perform the actual GZip (and Deflate) encoding typically used by Web pages, .NET includes the GZipStream and DeflateStream stream classes, which can be readily assigned to the Response.OutputStream. With these two stream classes in place it’s almost trivially easy to create a couple of reusable methods that allow you to compress your HTTP output. In my standard WebUtils utility class (from the West Wind Web Toolkit) I created two static utility methods – IsGZipSupported and GZipEncodePage – that check whether the client supports GZip encoding and then actually encode the current output (note that although the method includes ‘Page’ in its name, this code will work with any ASP.NET output).

        /// <summary>
        /// Determines if GZip is supported
        /// </summary>
        /// <returns></returns>
        public static bool IsGZipSupported()
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (!string.IsNullOrEmpty(AcceptEncoding) &&
                (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
                return true;
            return false;
        }

        /// <summary>
        /// Sets up the current page or handler to use GZip through a Response.Filter
        /// IMPORTANT:
        /// You have to call this method before any output is generated!
        /// </summary>
        public static void GZipEncodePage()
        {
            HttpResponse Response = HttpContext.Current.Response;
            if (IsGZipSupported())
            {
                string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
                if (AcceptEncoding.Contains("deflate"))
                {
                    Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                        System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "deflate");
                }
                else
                {
                    Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                        System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "gzip");
                }
            }
        }

    As you can see, the actual assignment of the filter is as simple as:

        Response.Filter = new DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress);

    which applies the filter to the OutputStream. You also need to ensure that your response reflects the new GZip or Deflate encoding, and ensure that any pages that are cached by proxy servers can differentiate between pages that were encoded with the various different encodings (or no encoding).

    Using this utility function is now trivially easy. In any ASP.NET code that wants to compress its Response output you simply use:

        protected void Page_Load(object sender, EventArgs e)
        {
            WebUtils.GZipEncodePage();

            Entry = WebLogFactory.GetEntry();
            var entries = Entry.GetLastEntries(App.Configuration.ShowEntryCount,
                "pk,Title,SafeTitle,Body,Entered,Feedback,Location,ShowTopAd", "TEntries");
            if (entries == null)
                throw new ApplicationException("Couldn't load WebLog Entries: " + Entry.ErrorMessage);

            this.repEntries.DataSource = entries;
            this.repEntries.DataBind();
        }

    Here I use an ASP.NET page, but the above WebUtils.GZipEncode() method call will work in any ASP.NET application type, including HTTP handlers. The only requirement is that the filter needs to be applied before any other output is sent to the OutputStream. For example, in my CallbackHandler service implementation, output over a certain size is GZip encoded by default. The output that is generated is JSON or XML, and if the output is over 5k in size I apply WebUtils.GZipEncode():

        if (sbOutput.Length > GZIP_ENCODE_TRESHOLD)
            WebUtils.GZipEncodePage();

        Response.ContentType = ControlResources.STR_JsonContentType;
        HttpContext.Current.Response.Write(sbOutput.ToString());

    Ok, so you probably get the idea: encoding GZip/Deflate content is pretty easy.

    Hold on there Hoss – Watch your Caching

    Or is it? There are a few caveats that you need to watch out for when dealing with GZip content. The first issue is that you need to deal with the fact that some clients don’t support GZip or Deflate content. Most modern browsers support it, but if you have a programmatic HTTP client accessing your content, GZip/Deflate support is by no means guaranteed. For example, WinInet HTTP clients don’t support GZip out of the box – it has to be explicitly implemented. Other low-level HTTP clients on other platforms also don’t support GZip out of the box.

    The problem is that your application, your Web server and proxy servers on the Internet might be caching your generated content. If you return content with GZip once and then again without, either caching is not applied or, worse, the wrong type of content is returned back to the client from a cache or proxy. The result is an unreadable response for *some clients*, which is also very hard to debug and fix once in production. You already saw the issue of proxy servers addressed in the GZipEncodePage() function:

        // Allow proxy servers to cache encoded and unencoded versions separately
        Response.AppendHeader("Vary", "Content-Encoding");

    This ensures that any proxy servers also check the Content-Encoding HTTP header to cache their content – not just the URL.

    The same thing applies if you do OutputCaching in your own ASP.NET code. If you generate output for GZip on an OutputCached page, the GZipped content will be cached (either by ASP.NET’s cache or, in some cases, by the IIS kernel cache). But what if the next client doesn’t support GZip? She’ll get served a cached GZip page that won’t decode, and she’ll get a page full of garbage. Wholly undesirable. To fix this you need to add some custom OutputCache rules by way of the GetVaryByCustomString() HttpApplication method in your Global.asax file:

        public override string GetVaryByCustomString(HttpContext context, string custom)
        {
            // Override Caching for compression
            if (custom == "GZIP")
            {
                string acceptEncoding = HttpContext.Current.Response.Headers["Content-Encoding"];
                if (string.IsNullOrEmpty(acceptEncoding))
                    return "";
                else if (acceptEncoding.Contains("gzip"))
                    return "GZIP";
                else if (acceptEncoding.Contains("deflate"))
                    return "DEFLATE";
                return "";
            }
            return base.GetVaryByCustomString(context, custom);
        }

    In a page that uses output caching, you then specify the following to use that custom rule:

        <%@ OutputCache Duration="180" VaryByParam="none" VaryByCustom="GZIP" %>

    It’s all Fun and Games until ASP.NET throws an Error

    Ok, so you’re up and running with GZip, you have your caching squared away, and the pages that you are applying it to are jamming along. Then BOOM, something strange happens and you get a lovely garbled page. Lovely, isn’t it? What’s happened here is that I have WebUtils.GZipEncode() applied to my page, but there’s an error in the page. The error falls back to the ASP.NET error handler, and the error handler removes all existing output (good) and removes all the custom HTTP headers I’ve set manually (usually good, but very bad here). Since I applied the Response.Filter (via GZipEncode), the output is now GZip encoded, but ASP.NET has removed my Content-Encoding header, so the browser receives the GZip-encoded content without a notification that it is encoded as GZip. The result is binary output. Here’s what Fiddler says about the raw HTTP header output when an error occurs while GZip encoding was applied:

        HTTP/1.1 500 Internal Server Error
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Date: Sat, 30 Apr 2011 22:21:08 GMT
        Content-Length: 2138
        Connection: close

        ?`I?%&/m?{J?J??t??` … binary output stripped here

    Notice: no Content-Encoding header, and that’s why we’re seeing this garbage. ASP.NET has stripped the Content-Encoding header but left our filter intact. So how do we fix this? In my applications I typically have a global Application_Error handler set up, and in this case I’ve been using that. One thing that you can do in the Application_Error handler is explicitly clear out the Response.Filter and set it to null at the top:

        protected void Application_Error(object sender, EventArgs e)
        {
            // Remove any special filtering especially GZip filtering
            Response.Filter = null;
            …
        }

    And voila, I get my Yellow Screen of Death or my custom-generated error output back via uncompressed content. BTW, the same is true for page-level errors handled in Page_Error or ASP.NET MVC error handling methods in a controller.

    Another, and possibly even better, solution is to check whether a filter is attached just before the headers are sent to the client, as pointed out by Adam Schroeder in the comments:

        protected void Application_PreSendRequestHeaders()
        {
            // ensure that if GZip/Deflate Encoding is applied that headers are set
            // also works when error occurs if filters are still active
            HttpResponse response = HttpContext.Current.Response;
            if (response.Filter is GZipStream && response.Headers["Content-encoding"] != "gzip")
                response.AppendHeader("Content-encoding", "gzip");
            else if (response.Filter is DeflateStream && response.Headers["Content-encoding"] != "deflate")
                response.AppendHeader("Content-encoding", "deflate");
        }

    This uses the Application_PreSendRequestHeaders() pipeline event to check for compression encoding in a filter and adjusts the headers accordingly. This is actually a better solution, since it is generic: it’ll work regardless of how the content is cleaned up. For example, an error Response.Redirect() or short error display might get changed and the filter not cleared, and this code actually handles that. Sweet, thanks Adam.

    It’s unfortunate that ASP.NET doesn’t natively clear out Response.Filters when an error occurs, just as it clears the Response and Headers. I can’t see where leaving a filter in place in an error situation would make any sense, but hey, this is what it is, and it’s easy enough to fix as long as you know where to look. Riiiight!

    IIS and GZip

    I should also mention that IIS 7 includes good support for compression natively. If you can defer encoding to let IIS perform it for you rather than doing it in your code, by all means you should do it! Especially any static or semi-dynamic content that can be made static should be using IIS built-in compression. Dynamic caching is also supported, but is a bit more tricky to judge in terms of performance and footprint. John Forsyth has a great article on the benefits and drawbacks of IIS 7 compression, which gives some detailed performance comparisons and impact reviews. I’ll post another entry next with some more info on IIS compression, since information on it seems to be a bit hard to come by.

    Related Content: Built-in GZip/Deflate Compression in IIS 7.x; HttpWebRequest and GZip Responses

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, IIS7

    Read the article

< Previous Page | 107 108 109 110 111 112 113 114 115 116  | Next Page >