Search Results

Search found 446 results on 18 pages for 'evil'.


  • C# 4.0: Named And Optional Arguments

    - by Paulo Morgado
    As part of the co-evolution effort of C# and Visual Basic, C# 4.0 introduces Named and Optional Arguments. First of all, let’s clarify what arguments and parameters are: method definition parameters are the input variables of the method; method call arguments are the values provided to the method parameters. In fact, the C# Language Specification states the following in §7.5: The argument list (§7.5.1) of a function member invocation provides actual values or variable references for the parameters of the function member. Given the above definitions, we can state that: parameters have always been named and still are; parameters have never been optional and still aren’t.

    Named Arguments

    Until now, the C# compiler matched method call arguments with method parameters by position. The first argument provides the value for the first parameter, the second argument provides the value for the second parameter, and so on, regardless of the names of the parameters. If a parameter was missing a corresponding argument to provide its value, the compiler would emit a compilation error. For this call: Greeting("Mr.", "Morgado", 42); this method: public void Greeting(string title, string name, int age) will receive as parameters: title: “Mr.”, name: “Morgado”, age: 42. What this new feature allows is using the names of the parameters to identify the corresponding arguments in the form: name:value. Not all arguments in the argument list must be named. However, all named arguments must be at the end of the argument list. The matching between arguments (and the evaluation of their values) and parameters is done first by name for the named arguments and then by position for the unnamed arguments. This means that, for this method definition: public static void Method(int first, int second, int third) this call: int i = 0; Method(i, third: i++, second: ++i); will have this code generated by the compiler: int i = 0; int CS$0$0000 = i++; int CS$0$0001 = ++i; Method(i, CS$0$0001, CS$0$0000); which will give the method the following parameter values: first: 2, second: 2, third: 0. Notice the variable names. Although invalid as C# identifiers, they are valid .NET identifiers, and thus avoid collisions between user-written and compiler-generated code. Besides allowing re-ordering of the argument list, this feature is very useful for self-documenting the code, for example, when the argument list is very long or it is not clear from the call site what the arguments are.

    Optional Arguments

    Parameters can now have default values: public static void Method(int first, int second = 2, int third = 3) Parameters with default values must be the last in the parameter list, and the default value is used as the value of the parameter if the corresponding argument is missing from the method call. This call: int i = 0; Method(i, third: ++i); will have this code generated by the compiler: int i = 0; int CS$0$0000 = ++i; Method(i, 2, CS$0$0000); which will give the method the following parameter values: first: 1, second: 2, third: 1. Because, when method parameters have default values, arguments can be omitted from the call, this might seem like method overloading or a good replacement for it, but it isn’t.
    Methods like this: public static StreamReader OpenTextFile( string path, Encoding encoding = null, bool detectEncoding = true, int bufferSize = 1024) allow their calls to be written like this: OpenTextFile("foo.txt", Encoding.UTF8); OpenTextFile("foo.txt", Encoding.UTF8, bufferSize: 4096); OpenTextFile( bufferSize: 4096, path: "foo.txt", detectEncoding: false); However, the compiler handles default values like constant fields, taking the value and using it instead of a reference to the value. So, like constant fields, default parameter values are exposed publicly (and remember that internal members might be publicly accessible – InternalsVisibleToAttribute). If such methods are publicly accessible and used by another assembly, those values will be hard-coded in the calling code and, if the called assembly has its default values changed, the new values won’t be picked up by already compiled code (see the sketch below). At first glance, I thought that using optional arguments for badly written code was great, but the ability to write code like that was just pure evil. But then I realized that, since I use private constant fields, it’s OK to use default parameter values on privately accessed methods.
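    To make the versioning pitfall concrete, here is a minimal, hypothetical sketch (the names and values are mine, not from the original post) of how a default value is baked into the call site, assuming the library and its caller are compiled separately:

    ```csharp
    using System;

    // Hypothetical library: imagine this class ships in LibV1.dll.
    public static class Library
    {
        public static void Connect(string host, int timeoutSeconds = 30)
        {
            Console.WriteLine("Connecting to {0} with a {1}s timeout", host, timeoutSeconds);
        }
    }

    public static class Program
    {
        public static void Main()
        {
            // The compiler emits this call as Connect("example.com", 30);
            // the literal 30 is copied into the caller, like a constant.
            // If a later LibV2.dll changes the default to 60, this already
            // compiled caller keeps passing 30 until it is recompiled.
            Library.Connect("example.com");
        }
    }
    ```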

    Read the article

  • Aamir Khan’s Satyamev Jayate stirs a movement

    - by Gopinath
    Bollywood actor Aamir Khan is known for his dedication and hard work in inspiring millions of viewers through movies by discussing social problems and motivating people to solve them. His movie Rang De Basanthi seeded the Indian anti-corruption movement, Tare Zameen Par touched on the problems faced by challenged kids, and his latest movie 3 idiots exposed how education institutions in India are producing lakhs of donkeys out of colleges every year. He extended his dedication to serving society to the small screen with the launch of the reality TV show Satyamev Jayate. Before you start misjudging it as one of those nonsense drama or entertainment reality shows, let me tell you that it is not a typical music, games, fight or dance reality show. Satyamev Jayate is all about the real people of India, their problems and how to tackle them. This is not just a reality show; it’s a movement to educate people about social evils. It’s been many years since I spent a couple of hours in front of the TV, as most programs are too cynical or do not add much value. In my childhood I used to anxiously wait for the Mahabarath or He-Man TV shows to start, but after two decades I waited anxiously for the start of Satyamev Jayate. The wait was worth it, and the 1 hour 30 minutes spent watching it were meaningful. When was the last time you were so satisfied after watching a TV show and inspired to do something? I don’t remember.

    Today, the show focused on female foeticide and its impact. It showed women who were tortured and forced to abort female foetuses. On the show, a few brave women shared their experiences of giving birth to girl babies and the rough times they are going through with their in-laws & husbands. The show not only focused on the problem but also on the root cause of the evil, on inspiring people working to tackle it, and on what every individual can do to solve it. The best part of the show is that it’s not a blame game. When there is a problem, most people quickly get into identifying who is wrong and start blaming them instead of solving the actual problem. Aamir did not blame anyone for female foeticide – neither the government that doesn’t impose strict rules, nor the doctors who abort girl babies to make money, nor the mothers-in-law & husbands who torture the mothers of girl babies. He carefully highlighted the problem, showed horrifying statistics and their impact on the future society, and introduced a few inspiring people working to tackle the problem. He touched hearts and stirred a movement against the issue. For the first time ever I voted for a reality show through SMS, and it was for Satyamev Jayate. I’m proud to have done so. Here are a few reactions of popular people, activists & media about the program: @aamir_khan absolutely the best program I have seen on TV in recent past. Thanku for converting an idiot box into an inspirationsl medium — Kiran Bedi (@thekiranbedi) May 6, 2012 Satyamev Jayate proves tht TV 2 can b a tool of social change. — Shekhar Kapur (@shekharkapur) May 6, 2012 i absolutely loved #satyamevjayate. at least aamir is doing what all of us only talk about. — Harsha Bhogle (@bhogleharsha) May 6, 2012 Now Television will no longer be called an idiot box,the VISION of Television broadens up with #SatyamevJayate !!! — Madhur Bhandarkar (@mbhandarkar268) May 6, 2012 The Sunday 11am slot seems to have come back with a bang… #SatyamevJayate — atul kasbekar (@atulkasbekar) May 6, 2012 I was spellbound, says Prasoon Joshi – It’s a unique show. I was completely bowled over by it.
    It’s a never-done-before concept. Aamir Khan strikes the right chord with Satyamev Jayate – The format is quite crisp. Talking about the emotional connect, there are moments when your eyes well up with tears, but the various segments ensure there’s more content than emotional drama. ‘Satyamev Jayate’ gutsy, sensible show: Viewers – From filmmakers to clinical psychologists to professors, everyone has given the thumbs up to Aamir Khan’s television show ‘Satyamev Jayate’, saying it is a gutsy, hard-hitting and sensible programme that strikes an emotional chord with the audiences. Aamir Khan’s TV debut ‘Satyamev Jayate’ takes Twitter by storm – The roads of the capital sported a deserted look around 11 am on Sunday morning, as everyone was hooked on to their TV sets. Did you watch the program? What is your opinion? I’m waiting for 11 AM next Sunday. Are you?

    Read the article

  • Independence Day for Software Components – Loosening Coupling by Reducing Connascence

    - by Brian Schroer
    Today is Independence Day in the USA, which got me thinking about loosely-coupled “independent” software components. I was reminded of a video I bookmarked quite a while ago of Jim Weirich’s “Grand Unified Theory of Software Design” talk at MountainWest RubyConf 2009. I finally watched that video this morning. I highly recommend it. In the video, Jim talks about software connascence. The dictionary definition of connascence (con-NAY-sense) is: 1. The common birth of two or more at the same time. 2. That which is born or produced with another. 3. The act of growing together. The brief Wikipedia page about Connascent Software Components says that: Two software components are connascent if a change in one would require the other to be modified in order to maintain the overall correctness of the system. Connascence is a way to characterize and reason about certain types of complexity in software systems. The term was introduced to the software world in Meilir Page-Jones’ 1996 book “What Every Programmer Should Know About Object-Oriented Design”. The middle third of that book is the author’s proposed graphical notation for describing OO designs. UML became the standard about a year later, so a revised version of the book was published in 1999 as “Fundamentals of Object-Oriented Design in UML”. Weirich says that the third part of the book, in which Page-Jones introduces the concept of connascence, “is worth the price of the entire book”. (The price of the entire book, by the way, is not much – I just bought a used copy on Amazon for $1.36, so that was a pretty low-risk investment. I’m looking forward to getting the book and learning about connascence from the original source.) Meanwhile, here’s my summary of Weirich’s summary of Page-Jones’ writings about connascence: the stronger the form of connascence, the more difficult and costly it is to change the elements in the relationship. Some of the connascence types, ordered from weak to strong, are:

    Connascence of Name
    Connascence of name is when multiple components must agree on the name of an entity. If you change the name of a method or property, then you need to change all references to that method or property. Duh. Connascence of name is unavoidable, assuming your objects are actually used. My main takeaway about connascence of name is that it emphasizes the importance of giving things good names so you don’t need to go changing them later.

    Connascence of Type
    Connascence of type is when multiple components must agree on the type of an entity. I assume this is more of a problem for languages without compilers (especially when used in apps without tests). I know it’s an issue with evil JavaScript type coercion.

    Connascence of Meaning
    Connascence of meaning is when multiple components must agree on the meaning of particular values, e.g. that “1” means normal customer and “2” means preferred customer. The solution to this is to use constants or enums instead of “magic” strings or numbers, which reduces the coupling by changing the connascence form from “meaning” to “name”.

    Connascence of Position
    Connascence of position is when multiple components must agree on the order of values. This refers to methods with multiple parameters, e.g.: eMailer.Send("[email protected]", "[email protected]", "Your order is complete", "Order completion notification"); The more parameters there are, the stronger the connascence of position is between the component and its callers.
    In the example above, it’s not immediately clear when reading the code which email addresses are sender and receiver, and which of the final two strings are subject vs. body. Connascence of position could be improved to connascence of type by replacing the parameter list with a struct or class. This “introduce parameter object” refactoring (see the sketch at the end of this post) might be overkill for a method with 2 parameters, but would definitely be an improvement for a method with 10 parameters. This points out two “rules” of connascence: The Rule of Degree: the acceptability of connascence is related to the degree of its occurrence. The Rule of Locality: stronger forms of connascence are more acceptable if the elements involved are closely related. For example, positional arguments in private methods are less problematic than in public methods.

    Connascence of Algorithm
    Connascence of algorithm is when multiple components must agree on a particular algorithm. Be DRY – Don’t Repeat Yourself. If you have “cloned” code in multiple locations, refactor it into a common function.

    Those are the “static” forms of connascence. There are also “dynamic” forms, including…

    Connascence of Execution
    Connascence of execution is when the order of execution of multiple components is important. Consumers of your class shouldn’t have to know that they have to call an .Initialize method before it’s safe to call a .DoSomething method.

    Connascence of Timing
    Connascence of timing is when the timing of the execution of multiple components is important. I’ll have to read up on this one when I get the book, but I assume it’s largely about threading.

    Connascence of Identity
    Connascence of identity is when multiple components must reference the same entity. The example Weirich gives is when you have two instances of the “Bob” Employee class and you call the .RaiseSalary method on one and then the .Pay method on the other – does the payment use the updated salary?

    Again, this is my summary of a summary, so please be forgiving if I misunderstood anything. Once I get/read the book, I’ll make corrections if necessary and share any other useful information I might learn. See Also: Gregory Brown: Ruby Best Practices Issue #24: Connascence as a Software Design Metric (That link is failing at the time I write this, so I had to go to the Google cache of the page.)
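    As a concrete illustration of the “introduce parameter object” refactoring mentioned above, here is a minimal C# sketch; the types and addresses are hypothetical, not from Weirich’s talk or Page-Jones’ book:

    ```csharp
    using System;

    // The parameter object: positional string arguments become named properties.
    public class EmailMessage
    {
        public string From { get; set; }
        public string To { get; set; }
        public string Subject { get; set; }
        public string Body { get; set; }
    }

    public class Mailer
    {
        // Before: Send(string from, string to, string subject, string body)
        // silently lets callers swap arguments. After: order no longer matters.
        public void Send(EmailMessage message)
        {
            Console.WriteLine("Sending '{0}' from {1} to {2}",
                message.Subject, message.From, message.To);
        }
    }

    public static class Program
    {
        public static void Main()
        {
            var mailer = new Mailer();
            // The call site now names each value explicitly.
            mailer.Send(new EmailMessage
            {
                From = "sender@example.com",
                To = "recipient@example.com",
                Subject = "Your order is complete",
                Body = "Order completion notification"
            });
        }
    }
    ```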

    Read the article

  • Open World Day 3

    - by Antony Reynolds
    A Day in the Life of an Oracle OpenWorld Attendee, Part IV. My third day was exhibition day for me!  I took the opportunity to wander around the JavaOne and OpenWorld exhibitions to see what might be useful for me when selling WebLogic, Coherence & SOA Suite.  I found a number of interesting vendors and thought I would share what I found here.  These are not necessarily endorsements, but observations on companies that I thought had interesting-looking products that fill a need I have seen at customers.

    Highly Available EBS Upgrades
    A few years ago I worked with a customer that was a port authority.  They wanted to tie E-Business Suite into their operations to provide faster processing of cargo and passengers.  However, they only had a 2 hour downtime window to perform upgrades.  This was not a problem for core database and middleware technology, which could accommodate those upgrade timescales easily.  It was a problem for EBS, however, so I was intrigued to find Rapid E-Suite Inc offering an 11i to 12i upgrade service that claims to require no outage.  This could be a real boon to EBS customers like my port friends that need to upgrade without disruption to their business.

    Mobile on WebLogic
    I have come across a number of customers who want a comprehensive mobile solution, connected and disconnected operation and so forth.  ADF only addresses part of these requirements currently, so I was excited to discover mFrontiers Inc offering an apparently comprehensive solution that should integrate easily with Oracle SOA Suite to mobile-enable a SOA infrastructure.  The ability to operate without a network is important for many applications, particularly in industries that require their engineers to enter buildings to perform maintenance or repairs, because network access is not always available – many of my colleagues don’t have mobile access from their homes because they live in the middle of nowhere – and disconnected support is crucial in these situations.

    Sharepoint Connector for WebCenter Content
    Obviously Sharepoint is an evil, pernicious intrusion into a company’s IT estate, but it is widely deployed, many people like it, and they would also like to take advantage of Oracle products such as WebCenter Content.  So I was encouraged to see that Fishbowl Solutions have created a connector for Sharepoint that allows it to bring in content from WebCenter; it looks like a valuable way to maintain the Sharepoint interface end users are used to but extend the range of content by pulling stuff (technical term for content) from WebCenter.

    Load Balancing
    The Enterprise Deployment Guides are Oracle’s bible on building highly available FMW environments, and each of them requires a front end load balancer.  I have been asked to help configure F5 Load Balancers on a number of occasions over my time at Oracle, and each time I come back to it I find more useful features have been added to the BigIP line of load balancers that F5 sell; many of their documents are tailored to FMW.  I like F5: they provide (relatively) easy to use products that do what they say on the side of the box.  They may not have all the bells and whistles of some of their more expensive competitors, but they do the job and do it well!  Besides which, I like their logo!

    Other Stuff
    I saw lots of other interesting products and services, such as a lightweight monitoring tool for Coherence, Forms migration services, JCAPS migration services and lots of cool freebies to take home to the children!
    A Quiet Night. Wednesday night was the partner appreciation event and I had decided to go back to the hotel and have an early night.  I decided to attend the last session of the day – a Maven/Hudson/WebLogic tutorial.  I got the wrong hotel for the session, snuck in 20 minutes late at the back, and started working on the hands-on workshop.  One of my co-attendees raised his hand for help, and as the presenter came over to help he suddenly stopped and yelled, “Is that Antony?!”  It was my old friend Steve Button, who used to be based in Redwood Shores but is now a WebLogic guru PM in Australia.  It was good to catch up with him.  As he yelled out, a guy with really bad posture turned around to see who he was talking to; this turned out to be my friend Simon Haslan, Oracle ACE from the UK.  After the tutorial Simon and I retired to the coffee shop to catch up and share stories.  Two and a half hours later we decided it was time to retire – so much for an early night, but it was great to renew old friendships and find out what real customers are worrying about.

    Read the article

  • The standards that fail us and the intellectual bubble

    - by Jeff
    There has been a great deal of noise in the techie community about standards, and a sudden and unexplainable hate for Flash. This noise isn't coming from consumers... the countless soccer moms, teens and your weird uncle Bob; it's coming from the people who build (or at least claim to build) the stuff those consumers consume. If you could survey the position of consumers on the topic, they'd likely tell you that they just want stuff on the Web to work. The noise goes something like this: Web standards are the correct and right thing to use across the Intertubes, and anything not a part of those standards (Flash) is bad. Furthermore, the more recent noise is centered around the idea that HTML 5, along with Javascript, is the right thing to use. The arguments against Flash are, well, the truth is I haven't seen a good argument. I see anecdotal nonsense about high CPU usage and things I'd never think to check when I'm watching Piano Cat on YouTube, but these aren't arguments to me. Sure, I've seen it crash a browser a few times, but it's totally rare.

    But let's go back to standards. Yes, standards have played an important role in establishing the ubiquity of the Web. The protocols themselves, TCP/IP and HTTP, have been critical. HTML, which has served us well for a very long time, established an incredible foundation. Javascript did an OK job, and thanks to clever programmers writing great frameworks like jQuery, is becoming more and more useful. CSS is awful (there, I said it, I feel SO much better), and I'll never understand why it's so disconnected and different from anything else. It doesn't help that it's so widely misinterpreted by different browsers. Still, there's no question that standards are a good thing, and they've been good for the Web, consumers and publishers alike. HTML 4 has been with us for more than a decade. In Web years, that might as well be 80. HTML 5, contrary to popular belief, is not a standard, and likely won't be for many years to come. In fact, the Web hasn't really evolved at all in terms of its standards. The tools that generate the standard markup and script have, but at the end of the day, we're still living with standards that are more than ten years old. The "official" standards process has failed us.

    The Web evolved anyway, and did not wait for standards bodies to decide what to do next. It evolved in part because Macromedia, then Adobe, kept evolving Flash. In the earlier days, it mostly just did obnoxious splash pages, but then it started doing animation, and then rich apps as they added form input. Eventually it found its killer app: video. Now more than 95% of browsers have Flash installed. Consumers are better for it. But I'll do it one better... I'll go out on a limb and say that Flash is a standard. If it's that pervasive, I don't care what you tell me, it's a standard. Just because a company owns it doesn't mean that it's evil or not a standard. And hey, it pains me to say that as a developer, because I think the dev tools are the suck (more on that in a minute). But again, consumers don't care. They don't even pay for Flash. The bottom line is that if I put something Flash-based on the Internet, it's likely that my audience will see it. And what about the speed of standards owned by a company? Look no further than Silverlight. Silverlight 2 (which I consider the "real" start to the story) came out about a year and a half ago. Now version 4 is out, and it has come a very long way in its capabilities.
    If you believe Riastats.com, more than half of browsers have it now. It didn't have to wait for standards bodies and nerds drafting documents; it's out today. At this rate, Silverlight will be on version 6 or 7 by the time HTML 5 is a ratified standard. Back to the noise: one of the things that has continually disappointed me about this profession is the number of people who get stuck in an intellectual bubble, color it with dogmatic principles, and completely ignore the actual marketplace where this stuff all has to live. We aren't machines; binary thinking that forces us to choose between "open standards" and "proprietary lock-in" (the most loaded b.s. FUD term evar) isn't smart at all. The truth is that the <object> tag has allowed us to build incredible stuff on top of the old standards, and consumers have benefitted greatly. Consumer desire, capitalism, and yes, standards ratified by nerds who think about this stuff for years have all played a role in the broad adoption of the Interwebs. We could all do without the noise. At the end of the day, I'm going to build stuff for the Web that's good for my users, and I'm not going to base my decisions on a techie bubble religion. Imagine what the brilliant minds behind the noise could do for the Web if they joined me in that pursuit.

    Read the article

  • Fun With the Chrome JavaScript Console and the Pluralsight Website

    - by Steve Michelotti
    Originally posted on: http://geekswithblogs.net/michelotti/archive/2013/07/24/fun-with-the-chrome-javascript-console-and-the-pluralsight-website.aspx

    I’m currently working on my third course for Pluralsight. Everyone already knows that Scott Allen is a “dominating force” for Pluralsight, but I was curious how many courses other authors have published as well. The Pluralsight Authors page – http://pluralsight.com/training/Authors – shows all 146 authors, and you can click on any author’s page to see how many (and which) courses they have authored. The problem is: I don’t want to have to click into 146 pages to get a count for each author. With this in mind, I figured I could write a little JavaScript in the Chrome JavaScript console to do some “detective work.” My first step was to figure out how the HTML was structured on this page so I could do some screen-scraping. Right-click the first author – “Inspect Element”. I can see there is a primary <div> with a class of “main” which contains all the authors. Each author is in an <h3> with an <a> tag containing their name and a link to their page. This web page already has jQuery loaded, so I can use $ directly from the console. This allows me to just use jQuery to inspect items on the current page. Notice this is a multi-line command; in order to use multiple lines in the console you have to press SHIFT-ENTER to go to the next line. Now I can see I’m extracting data just fine. At this point I want to follow each URL. Then I want to screen-scrape this next page to see how many courses each author has done. Looking at the author detail page, I can see we have a table (with a CSS class of “course”) that contains rows for each course authored. This means I can get the number of courses pretty easily. Now I can put this all together. Back on the authors page, I want to follow each URL, extract the returned HTML, and grab the count. In the code below, I simply use the jQuery $.get() method to get the author detail page, and the “data” variable in the callback contains the HTML. A nice feature of jQuery is that I can simply put this HTML string inside of $() and use jQuery selectors directly on it in conjunction with the find() method. Now I’m getting somewhere. I have every Pluralsight author and how many courses each one has authored. But that’s not quite what I’m after – what I want to see are the authors that have the MOST courses in the library. What I’d like to do is to put all of the data in an array and then sort that array descending by number of courses. I can add an item to the array after each author detail page is returned, but the catch here is that I can’t perform the sort operation until ALL of the author detail pages have been processed. The jQuery $.get() method is naturally an async method, so I essentially have 146 async calls, and I don’t want to perform my sort action until ALL have completed (side note: don’t run this script too many times or the Pluralsight servers might think you’re an evil hacker attempting a DoS attack and deny you). My C# brain wants to use a WaitHandle.WaitAll() method here, but this is JavaScript. I was able to do this by using the jQuery Deferred() object. I create a new deferred object for each request and push it onto a deferred array. After each request is complete, I signal completion by calling the resolve() method. Finally, I use a $.when.apply() method to execute my descending sort operation once all requests are complete.
    Here is my complete console command:

    ```javascript
    var authorList = [],
        defList = [];
    $(".main h3 a").each(function() {
        var def = $.Deferred();
        defList.push(def);
        var authorName = $(this).text();
        var authorUrl = $(this).attr('href');
        $.get(authorUrl, function(data) {
            var courseCount = $(data).find("table.course tbody tr").length;
            authorList.push({ name: authorName, numberOfCourses: courseCount });
            def.resolve();
        });
    });
    $.when.apply($, defList).then(function() {
        console.log("*Everything* is complete");
        var sortedList = authorList.sort(function(obj1, obj2) {
            return obj2.numberOfCourses - obj1.numberOfCourses;
        });
        for (var i = 0; i < sortedList.length; i++) {
            console.log(sortedList[i]);
        }
    });
    ```

    And here are the results: WOW! John Sonmez has 44 courses!! And Matt Milner has 29! I guess Scott Allen isn’t the only “dominating force”. I would have assumed Scott Allen was #1, but he comes in as #3 in total course count (of course Scott has 11 courses in the Top 50, and 14 in the Top 100, which is incredible!). Given that I’m in the middle of producing only my third course, I better get to work!

    Read the article

  • Problems with opening CHM Help files from Network or Internet

    - by Rick Strahl
    As a publisher of a Help Creation tool called Html Help Builder, I’ve seen a lot of problems with help files that won’t properly display actual topic content and display an error message for topics instead. Here’s the scenario: You go ahead and happily build your fancy, schmanzy Help File for your application and deploy it to your customer. Or alternately you’ve created a help file and you let your customers download it off the Internet directly or in a zip file. The customer downloads the file, opens the zip file, and copies the help file contained in the zip file to disk. She then opens the help file and finds the following unfortunate result: the help file comes up with all topics in the tree on the left, but a “Navigation to the webpage was cancelled” or “Operation Aborted” error in the Help Viewer’s content window whenever you try to open a topic. The CHM file obviously opened, since the topic list is there, but the Help Viewer refuses to display the content. Looks like a broken help file, right? But it’s not – it’s merely a Windows security ‘feature’ that tries to be overly helpful in protecting you. The reason this happens is that files downloaded off the Internet – including ZIP files and CHM files contained in those zip files – are marked as coming from the Internet and so can potentially be malicious, so they do not get browsing rights on the local machine – they can’t access local Web content, which is exactly what help topics are. If you look at the URL of a help topic you see something like this: mk:@MSITStore:C:\wwapps\wwIPStuff\wwipstuff.chm::/indexpage.htm which points at a special Microsoft URL Moniker that in turn points at the CHM file and a relative path within that HTML Help file. Try pasting a URL like this into Internet Explorer and you’ll see the help topic pop up in your browser (along with a warning, most likely). Although the URL looks weird, this still equates to a call to the local computer zone, the same as if you had navigated to a local file in IE, which by default is not allowed. Unfortunately, unlike Internet Explorer, where you have the option of clicking a security toolbar, the CHM viewer simply refuses to load the page and you get an error page as shown above.

    How to Fix This – Unblock the Help File
    There’s a workaround that lets you explicitly ‘unblock’ a CHM help file. To do this: open Windows Explorer, find your CHM file, right click and select Properties, then click the Unblock button on the General tab. Clicking the Unblock button basically tells Windows that you approve this Help File and allows topics to be viewed (a scripted version of this step is sketched at the end of this post). Is this insecure? Not unless you’re running a really old version of Windows (XP pre-SP1). In recent versions of Windows, Internet Explorer pops up various security dialogs or fires script errors when potentially malicious operations are accessed (like loading ActiveX controls), so it’s relatively safe to run local content in the CHM viewer. Since most help files don’t contain script, or only load script that runs pure JavaScript without accessing Web resources, this works fine without issues.

    How to avoid this Problem
    As an application developer there’s a simple solution around this problem: always install your Help Files with an installer. The above security warnings pop up because Windows can’t validate the source of the CHM file. However, if the help file is installed as part of an installation, then the installation and all files associated with that installation, including the help file, are trusted.
    A fully installed Help File of an application works just fine because it is trusted by Windows. Summary: it’s annoying as all hell that this sort of obtrusive marking is necessary, but it’s admittedly a necessary evil because of Microsoft’s use of the insecure Internet Explorer engine that drives the CHM HTML Help topic viewer. Because help files are viewing local content and script is allowed to execute in CHM files, there’s potential for malicious code hiding in CHM files, and the above precautions are supposed to avoid any issues. © Rick Strahl, West Wind Technologies, 2005-2012
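    For readers who want to script the unblock step rather than click through Explorer: the mark Windows applies to downloaded files is stored in an NTFS alternate data stream named Zone.Identifier, and deleting that stream is what the Unblock button does. Here is a hedged C# sketch (the path is hypothetical; it P/Invokes the Win32 DeleteFile API because .NET Framework’s File.Delete rejects stream syntax in paths):

    ```csharp
    using System;
    using System.Runtime.InteropServices;

    internal static class ChmUnblocker
    {
        // The Win32 API can address alternate data streams directly,
        // e.g. "help.chm:Zone.Identifier"; managed File.Delete cannot.
        [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        private static extern bool DeleteFile(string fileName);

        private static void Main()
        {
            string chmPath = @"C:\wwapps\wwIPStuff\wwipstuff.chm"; // hypothetical path

            // Removing the Zone.Identifier stream is equivalent to clicking Unblock.
            if (DeleteFile(chmPath + ":Zone.Identifier"))
                Console.WriteLine("Unblocked " + chmPath);
            else
                Console.WriteLine("No zone mark found (or access denied).");
        }
    }
    ```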

    Read the article

  • SQLAuthority News – Great Time Spent at Great Indian Developers Summit 2014

    - by Pinal Dave
    The Great Indian Developer Summit (GIDS) is one of the most popular annual events held in Bangalore. This year GIDS was scheduled for April 22-25. I presented a total of four sessions at this event, and each session was very different from the others. Here are the details of the four sessions I presented there. Pluralsight Shades. This event was a great one, and I had fantastic fun presenting technology here. I was indeed very excited that, along with me, many of my friends were presenting at the event as well. I want to thank all of you for attending my sessions and making them standing room only every single time. I have already sent resources in my newsletter. You can sign up for the newsletter over here. Indexing is an Art. I was amazed at the crowd present in the sessions at GIDS. There was great interest in the subject of SQL Server and Performance Tuning. Audience at GIDS. I believe events like this provide a great platform to meet and share knowledge. Pinal at Pluralsight Booth. Here are the abstracts of the sessions I presented. They were recorded, so at some point in time they will be available, but if you want the content of all the courses immediately, I suggest you check out my video courses on the same subjects on Pluralsight.

    Indexes, the Unsung Hero (Relevant Pluralsight Course)
    Slow running queries are the most common problem that developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how indexes have been set up. The session will focus on ways of identifying problems that slow down SQL Server, and indexing tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Indexes are the most crucial objects of the database. They are the first stop for any DBA and developer when it comes to performance tuning. There is a good side as well as an evil side to indexes. To master the art of performance tuning one has to understand the fundamentals of indexes and the best practices associated with them. We will cover various aspects of indexing such as duplicate indexes, redundant indexes, and missing indexes, as well as best practices around indexes.

    SQL Server Performance Troubleshooting: Ancient Problems and Modern Solutions (Relevant Pluralsight Course)
    Many believe performance tuning and troubleshooting is an art which has been lost in time. However, the truth is that the art has evolved with time and there are more tools and techniques to overcome ancient troublesome scenarios. There are three major resources that, when bottlenecked, create performance problems: CPU, IO, and Memory. In this session we will focus on detecting high CPU scenarios and their resolutions. If time permits we will cover other performance related tips and tricks. At the end of this session, attendees will have a clear idea as well as action items regarding what to do when facing any of the above resource intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. To master the art of performance tuning one has to understand the fundamentals of performance, tuning, and the best practices associated with them. We will discuss performance tuning in this session with the help of demos.
    Pinal Dave at GIDS.

    MySQL Performance Tuning – Unexplored Territory (Relevant Pluralsight Course)
    Performance is one of the most essential aspects of any application. Everyone wants their server to perform optimally and at the best efficiency. However, not many people talk about MySQL and performance tuning, as it is an extremely unexplored territory. In this session, we will talk about how we can tune MySQL performance. We will also try to cover other performance related tips and tricks. At the end of this session, attendees will not only have a clear idea, but also carry home action items regarding what to do when facing any of the above resource intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. To master the art of performance tuning one has to understand the fundamentals of performance, tuning, and the best practices associated with them. You will also witness some impressive performance tuning demos in this session.

    Hidden Secrets and Gems of SQL Server We Bet You Never Knew (Relevant Pluralsight Course)
    SQL Trio Session! It really amazes us every time someone says SQL Server is an easy tool to handle and work with. Microsoft has done an amazing job of making working with a complex relational database a breeze for developers and administrators alike. Though it looks like child’s play to some, the reality is far from this notion. Though the basics and fundamentals are simple and uniform across databases, understanding the behavior and the nuts and bolts of SQL Server is something we need to master over a period of time. With a collective experience of more than 30 years amongst the speakers on databases, we will try to take a unique tour of various aspects of SQL Server and bring to you life lessons learnt from working with SQL Server. We will share some of the trade secrets of performance, configuration, new features, tuning, behaviors, T-SQL practices, common pitfalls, and productivity tips on tools and more. This is a highly demo filled session for practical use if you are a SQL Server developer or an administrator. The speakers will be able to stump you and give you answers on almost everything inside the relational database called SQL Server. I personally attended the sessions of Vinod Kumar, Balmukund Lakhani, Abhishek Kumar and my favorite Govind Kanshi.

    Summary
    If you missed this event, here are two action items: 1) Sign up for the Resource Newsletter. 2) Watch my video courses on Pluralsight. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL Tagged: GIDS

    Read the article

  • .NET Security Part 4

    - by Simon Cooper
    Finally, in this series, I am going to cover some of the security issues that can trip you up when using sandboxed appdomains. DISCLAIMER: I am not a security expert, and this is by no means an exhaustive list. If you actually are writing security-critical code, then get a proper security audit of your code by a professional. The examples below are just illustrations of the sort of things that can go wrong.

    1. AppDomainSetup.ApplicationBase
    The most obvious one is the issue covered in the MSDN documentation on creating a sandbox, in step 3 – the sandboxed appdomain has the same ApplicationBase as the controlling appdomain. So let’s explore what happens when they are the same and an exception is thrown. In the sandboxed assembly, Sandboxed.dll (IPlugin is an interface in a partially-trusted assembly, with a single MethodToDoThings on it):

    ```csharp
    public class UntrustedPlugin : MarshalByRefObject, IPlugin
    {
        // implements IPlugin.MethodToDoThings()
        public void MethodToDoThings()
        {
            throw new EvilException();
        }
    }

    [Serializable]
    internal class EvilException : Exception
    {
        public override string ToString()
        {
            // show we have read access to C:\Windows
            // by reading the first 5 directories
            Console.WriteLine("Pwned! Mwuahahah!");
            foreach (var d in Directory.EnumerateDirectories(@"C:\Windows").Take(5))
            {
                Console.WriteLine(d);
            }
            return base.ToString();
        }
    }
    ```

    And in the controlling assembly:

    ```csharp
    // what can possibly go wrong?
    AppDomainSetup appDomainSetup = new AppDomainSetup
    {
        ApplicationBase = AppDomain.CurrentDomain.SetupInformation.ApplicationBase
    };

    // only grant permissions to execute
    // and to read the application base, nothing else
    PermissionSet restrictedPerms = new PermissionSet(PermissionState.None);
    restrictedPerms.AddPermission(
        new SecurityPermission(SecurityPermissionFlag.Execution));
    restrictedPerms.AddPermission(
        new FileIOPermission(FileIOPermissionAccess.Read, appDomainSetup.ApplicationBase));
    restrictedPerms.AddPermission(
        new FileIOPermission(FileIOPermissionAccess.PathDiscovery, appDomainSetup.ApplicationBase));

    // create the sandbox
    AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, appDomainSetup, restrictedPerms);

    // execute UntrustedPlugin in the sandbox
    // don't crash the application if the sandbox throws an exception
    IPlugin o = (IPlugin)sandbox.CreateInstanceFromAndUnwrap("Sandboxed.dll", "UntrustedPlugin");
    try
    {
        o.MethodToDoThings();
    }
    catch (Exception e)
    {
        Console.WriteLine(e.ToString());
    }
    ```

    And the result? Oops. We’ve allowed a class that should be sandboxed to execute code with fully-trusted permissions! How did this happen? Well, the key is the exact meaning of the ApplicationBase property: the application base directory is where the assembly manager begins probing for assemblies. When EvilException is thrown, it propagates from the sandboxed appdomain into the controlling assembly’s appdomain (as it’s marked as Serializable). When the exception is deserialized, the CLR finds and loads the sandboxed dll into the fully-trusted appdomain. Since the controlling appdomain’s ApplicationBase directory contains the sandboxed assembly, the CLR finds and loads the assembly into the full-trust appdomain, and the evil code is executed. So the problem isn’t exactly that the sandboxed appdomain’s ApplicationBase is the same as the controlling appdomain’s; it’s that the sandboxed dll was in such a place that the controlling appdomain could find it as part of the standard assembly resolution mechanism.
    The sandbox then forced the assembly to load in the controlling appdomain by throwing a serializable exception that propagated outside the sandbox. The easiest fix for this is to keep the sandbox ApplicationBase well away from the ApplicationBase of the controlling appdomain, and not to allow the sandbox permissions to access the controlling appdomain’s ApplicationBase directory. If you do this, then the sandboxed assembly can’t be accidentally loaded into the fully-trusted appdomain, and the code can’t be executed. If the plugin does try to induce the controlling appdomain to load an assembly it shouldn’t, a SerializationException will be thrown when it tries to load the assembly to deserialize the exception, and no damage will be done.

    2. Loading the sandboxed dll into the application appdomain
    As an extension of the previous point, you shouldn’t directly reference types or methods in the sandboxed dll from your application code. That loads the assembly into the fully-trusted appdomain, and from there code in the assembly could be executed. Instead, pull out the methods you want the sandboxed dll to have into an interface or class in a partially-trusted assembly you control, and execute methods via that instead (similar to the example above with the IPlugin interface). If you need to have a look at the assembly before executing it in the sandbox, either examine the assembly using reflection from within the sandbox, or load the assembly into the reflection-only context in the application’s appdomain (sketched below). The code in assemblies in the reflection-only context can’t be executed, it can only be reflected upon, thus protecting your appdomain from malicious code.

    3. Incorrectly asserting permissions
    You should only assert permissions when you are absolutely sure they’re safe. For example, this method allows a caller read-access to any file they call this method with, including your documents, any network shares, the C:\Windows directory, etc:

    ```csharp
    [SecuritySafeCritical]
    public static string GetFileText(string filePath)
    {
        new FileIOPermission(FileIOPermissionAccess.Read, filePath).Assert();
        return File.ReadAllText(filePath);
    }
    ```

    Be careful when asserting permissions, and ensure you’re not providing a loophole sandboxed dlls can use to gain access to things they shouldn’t be able to.

    Conclusion
    Hopefully, that’s given you an idea of some of the ways it’s possible to get past the .NET security system. As I said before, this post is not exhaustive, and you certainly shouldn’t base any security-critical applications on the contents of this blog post. What this series should help with is understanding the possibilities of the security system, and what all the security attributes and classes mean and what they are used for, if you were to use the security system in the future.
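    To illustrate the reflection-only context mentioned in point 2 above, here is a small sketch (the plugin path is hypothetical); assemblies loaded this way can be inspected but never executed:

    ```csharp
    using System;
    using System.Reflection;

    internal static class PluginInspector
    {
        private static void Main()
        {
            // Load for inspection only; the CLR refuses to execute code from it.
            Assembly asm = Assembly.ReflectionOnlyLoadFrom(@"C:\Plugins\Sandboxed.dll");

            // Note: for assemblies with dependencies you may also need to handle
            // the AppDomain.CurrentDomain.ReflectionOnlyAssemblyResolve event.
            foreach (Type type in asm.GetTypes())
            {
                Console.WriteLine(type.FullName); // reflect on it, never invoke it
            }
        }
    }
    ```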

    Read the article

  • Project of Projects with team Foundation Server 2010

    - by Martin Hinshelwood
    It is pretty much accepted that you should use Areas instead of having many small Team Projects when you are using Team Foundation Server 2010. I have implemented this scenario many times and this is the current iteration of layout and considerations. If, like me, you work with many customers, you will find that you get into a groove for how to set these things up to make them as easily understandable for everyone, while giving the best functionality. The trick is in making it as intuitive as possible for both you and the developers that need to work with it. There are five main places where you need to have the Product or Project name in prominence over any other value: Area, Iteration, Source Code, Work Item Queries, Build. Once you decide how you are doing this in each of these places you need to keep to it religiously. Even if you have one source code file to keep, make sure it is in the right place. This makes your developers and others working with the format familiar with where everything should go, as well as building up muscle memory. This prevents the neat system degenerating into a nasty mess.

    Areas
    Areas are traditionally used to separate out parts of your product / project so that you can see how much effort has gone into each. Figure: The top level areas are for reporting and work item separation. There are massive advantages to using this method. You can: move work from one project to another; rename a project / product. It is far more likely that a project or product gets renamed than a department. Tip: If you have many projects, over 100, you should consider categorising them here, but make sure that the actual project name always sits at the same level so you know which is which. Figure: Always keep things that are the same at the same level. Note: You may use these categories only at the Area/Iteration level to make it easier to select on drop down lists. You may not want to use them everywhere. On the other hand, for consistency it would be better to.

    Iterations
    Iterations are usually tied to some sort of time-based consideration. Here I am splitting into iterations with periodic releases. Figure: Each product needs to be able to have its own cadence. The ability to have each project run at its own pace and to enable them to have their own release schedule is often of paramount importance, and you don’t want to fix your 100+ projects to all be released on the same date.

    Source Code
    Having a good structure for your source, even if you are not branching or having multiple products under the same structure, is always a good idea. Figure: Separate out your products’ source. You need to think about both your branches as well as the structure of your source. All your code should be under “Source”, and everything you need to build your solution, including Build Scripts and 3rd party tools, should be under your “Main” (branch) folder. This should then be branched by “Quality”, “Release” or both to get the most out of your branching structure. The important thing is to make sure you branch (or are able to branch) everything you need to build, test and deploy your application to an environment. That environment may be development, test or even production, but I can’t stress enough the importance of having everything you need. Note: You usually will not be able to install custom software on your build server. Store any *.dll’s or *.exe’s that you need under the “Tools\Tool1” folder.
    Note: Consult the Branching Guidance for Team Foundation Server 2010 for more on branching. Figure: Adding a category may be a necessary evil. Even if you have to have a couple of categories called “Default”, it is better than not knowing the difference between a folder, Product and Branch.

    Work Item Queries
    Queries are used to load lists of Work Items out of TFS so you can see what work you have. This means that you want to also separate queries out by Product / Project to make it easier to find them. Figure: Again you have the same first level structure. Since Work Item Tracking also has folders, we do the same thing. We put all the queries under a folder named for the Product / Project and change each query to have “AreaPath=[TeamProject]\[ProductX]” in the query instead of the standard “Project=@Project”. Tip: Don’t have a folder with new queries for each iteration. Instead have a single “Current” folder that has queries that point to the current iteration. Just change the queries as you move from one iteration to another. Tip: You can ctrl+drag the “Product1” folder to create your “Product2” folder.

    Builds
    You may have many builds, both for individual products and for different qualities. This can be further complicated by having some builds that action “Gated Check-In” and others that are specifically for “Release”, “Test” or another purpose. Figure: There are no folders, yet, for the builds, so you need a good naming convention. It’s a pity that there are no folders under builds; some way to categorise would be nice. In lieu of that, at the moment you can use a functional naming convention that at least allows you to find what you want.

    Conclusion
    It is really easy both to achieve and to stick to this format if you take the time to do it. Unless you have 1000+ builds or 100+ Products you are unlikely to run into any issues. Even then there are things you can do to mitigate the issues, and I have described some of them above. Let me know if you can think of any other things to make this easier.

    Read the article

  • How Can I Safely Destroy Sensitive Data CDs/DVDs?

    - by Jason Fitzpatrick
    You have a pile of DVDs with sensitive information on them and you need to safely and effectively dispose of them so no data recovery is possible. What’s the safest and most efficient way to get the job done? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question: SuperUser reader HaLaBi wants to know how he can safely destroy CDs and DVDs with personal data on them: I have old CDs/DVDs which have some backups; these backups have some work and personal files. I always had problems when I needed to physically destroy them to make sure no one will reuse them. Breaking them is dangerous, pieces could fly fast and may cause harm. Scratching them badly is what I always do, but it takes a long time and I managed to read some of the data in the scratched CDs/DVDs. What’s the way to physically destroy a CD/DVD safely? How should he approach the problem? The Answer: SuperUser contributor Journeyman Geek offers a practical solution coupled with a slightly mad-scientist solution: The proper way is to get yourself a shredder that also handles CDs – look online for CD shredders. This is the right option if you end up doing this routinely. I don’t do this very often – for small scale destruction I favour a pair of tin snips – they have enough force to cut through a CD, yet are blunt enough to cause small cracks along the shear line. Kitchen shears with one serrated side work well too. You want to damage the data layer along with shearing along the plastic, and these work magnificently. Do it in a bag, cause this generates sparkly bits. There’s also the fun, and probably dangerous, way – find yourself an old microwave, and microwave them. I would suggest doing this in a well ventilated area of course, and not using your mother’s good microwave. There’s a lot of videos of this on YouTube – such as this (who’s done this in a kitchen… and using his mom’s microwave). This results in a very much destroyed CD in every respect. If I was an evil hacker mastermind, this is what I’d do. The other options are better for the rest of us. Another contributor, Keltari, notes that the only safe (and DoD approved) way to dispose of data is total destruction: The answer by Journeyman Geek is good enough for almost everything. But oddly, that common phrase “Good enough for government work” does not apply – depending on which part of the government. It is technically possible to recover data from shredded/broken/etc CDs and DVDs. If you have a microscope handy, put the disc in it and you can see the pits. The disc can be reassembled and the data can be reconstructed — minus the data that was physically destroyed. So why not just pulverize the disc into dust? Or burn it to a crisp? While technically that would completely eliminate the data, it leaves no record of the disc having existed. And in some places, like DoD and other secure facilities, the data needs to be destroyed, but the disc needs to exist. If there is a security audit, the disc can be pulled to show it has been destroyed. So how can a disc exist, yet be destroyed? Well, the most common method is grinding the disc down to destroy the data, yet keep the label surface of the disc intact. Basically, it’s no different than using sandpaper on the writable side, till the data is gone. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Bugs in Excel's ActiveX combo boxes?

    - by k.robinson
    I have noticed that I get all sorts of annoying errors when: I have ActiveX comboboxes on a worksheet (not an Excel form); the comboboxes have event code linked to them (e.g., onchange events); I use their listfillrange or linkedcell properties (clearing these properties seems to alleviate a lot of problems); and (not sure if this is connected) there is data validation on the targeted linkedcell. I program a fairly complex Excel application that does a ton of event handling and uses a lot of controls. Over the months, I have been trying to deal with a variety of bugs involving those combo boxes. I can’t recall all the details of each instance now, but these bugs tend to involve pointing the listfillrange and linkedcell properties at named ranges, and often have to do with the combo box events triggering at inappropriate times (such as when application.enableevents = false). These problems seemed to grow bigger in Excel 2007, so that I had to give up on these combo boxes entirely (I now use combo boxes contained in user forms, rather than directly on the sheets). Has anyone else seen similar problems? If so, was there a graceful solution? I have looked around with Google and so far haven’t spotted anyone with similar issues. Some of the symptoms I end up seeing are: Excel crashing when I start up (this involves combobox_onchange, a listfillrange pointing at a named range on a different sheet, and workbook_open interactions; note, I also had some data validation on the linked cells in case a user edited them directly). Excel rendering bugs (usually when the combo box changes, some cells from another sheet get randomly drawn over the top of the current sheet); sometimes it involves the screen flashing entirely to another sheet for a moment. Excel losing its mind (or rather, the call stack) (related to the first bullet point); sometimes when a function modifies a property of the comboboxes, the combobox onchange event fires, but it never returns control to the function that caused the change in the first place. The combobox_onchange events are triggered even when application.enableevents = false. Events firing when they shouldn’t (I posted another question on Stack Overflow related to this). At this point, I am fairly convinced that ActiveX comboboxes are evil incarnate and not worth the trouble. I have switched to including these comboboxes inside a userform module instead. I would rather inconvenience users with popup forms than random visual artifacts and crashing (with data loss).

    Read the article

  • dojo.require() prevents Firefox from rendering the page

    - by Eduard Wirch
    I'm experiencing strange behavior with Firefox and Dojo. I have an HTML page with these lines in the <head> section: ... <script type="text/javascript" src="dojo.js" djconfig="parseOnLoad: true, locale: 'de'"></script> <script type="text/javascript"> dojo.require("dojo.number"); </script> ... Sometimes the page loads normally. But sometimes it won't: Firefox fetches the whole HTML page but does not render it. I see only a gray window.

    After some experiments I figured out that the rendering problem has something to do with the load time of the HTML. Firefox starts evaluating the HTML page while loading it. If the page takes too long to load, the above JavaScript will be executed BEFORE the HTML finishes loading. If this happens, I get the gray window. Asking Firefox to show me the source code of the page displays the correct, complete HTML code. BUT: if I save the page to disk (File - Save Page As...) the HTML code is truncated and the above part looks like this: ... <script type="text/javascript" src="dojo.js" djconfig="parseOnLoad: true, locale: 'de'"></script> <script type="text/javascript"> dojo.require("dojo.number"); </script></head><body></body></html> This explains why I get to see a gray area. But why does this code appear there? I assume the require() method of Dojo does something "evil", but I can't figure out what. There is no document.write("</head><body></body></html>"); in the Dojo code. I checked for it.

    The problem is fixed if I place the dojo.require("dojo.number"); statement in the window load event: <script type="text/javascript"> window.onload = function() { dojo.require("dojo.number"); }; </script> But I'm curious why this happens. Is there a JavaScript function which forces Firefox to stop evaluating the page? Does Dojo do something "bad"? Can anyone explain this behavior to me? EDIT: Dojo 1.3.1, no JS errors or warnings.

    Read the article

  • BeautifulSoup recursive attribute

    - by Marcos Placona
    Hi, I'm trying to parse an XML file with BeautifulSoup, but I hit a brick wall when trying to use the "recursive" attribute with findAll(). I have a pretty odd XML format shown below: <?xml version="1.0"?> <catalog> <book id="bk101"> <author>Gambardella, Matthew</author> <title>XML Developer's Guide</title> <genre>Computer</genre> <price>44.95</price> <publish_date>2000-10-01</publish_date> <description>An in-depth look at creating applications with XML.</description> <catalog>true</catalog> </book> <book id="bk102"> <author>Ralls, Kim</author> <title>Midnight Rain</title> <genre>Fantasy</genre> <price>5.95</price> <publish_date>2000-12-16</publish_date> <description>A former architect battles corporate zombies, an evil sorceress, and her own childhood to become queen of the world.</description> <catalog>false</catalog> </book> </catalog>

    As you can see, the catalog tag repeats inside the book tag, which causes an error when I try to do something like: from BeautifulSoup import BeautifulStoneSoup as BSS catalog = "catalog.xml" def open_rss(): f = open(catalog, 'r') return f.read() def rss_parser(): rss_contents = open_rss() soup = BSS(rss_contents) items = soup.findAll('catalog', recursive=False) for item in items: print item.title.string rss_parser()

    As you will see, on my soup.findAll I've added recursive=False, which in theory would make it not recurse through the item found, but skip to the next one. This doesn't seem to work, as I always get the following error: File "catalog.py", line 17, in rss_parser print item.title.string AttributeError: 'NoneType' object has no attribute 'string' I'm sure I'm doing something stupid here, and would appreciate it if someone could give me some help on how to solve this problem. Changing the HTML structure is not an option, and this code needs to perform well as it will potentially parse a large XML file. Thanks in advance, Marcos

    Read the article

  • Controlling VirtualBox internet access?

    - by HandyGandy
    I am finally going through the process of moving my XP into a vbox (host Linux). The thing is that I am migrating a virtually clean install. So, aside from the occasional antivirus scan, I want to make sure that my XP is not silently sending malware data out (keystroke logs, spam, etc.) after having picked up some virus. To that end I want to limit XP to contacting my LAN and a few internet sites (mainly sites that require proprietary Windows-only software to access, AV sites, and Windows Update). I want XP to access only preapproved addresses. If it tries to contact a non-approved address, I want the attempt logged and access restricted until I allow it. I don't want to have to decide on the spot whether to allow access to a site; I'd rather review attempted accesses at my leisure.

    To keep things clear, let me give an example: I start my vbox/XP (which I call MYXP) running on my Linux box (called MYLINUX, connecting to the net through a Linksys WRT54G), and it connects via Samba to my LAN (since my LAN seems to be possessed of every evil thing, its address is 192.168.666). At the moment my configuration is set so that I allow MYXP to access 192.168.666 and www.MYANTIVIRUS_UPDATES.com and www.MS_UPDATES.com. Then on the VM I start a program which tries to make a connection to www.playmygame.com. www.playmygame.com is on my preapproved list, so the connection goes through. Later I check attempted accesses and discover that it also tried to connect to www.mygame_high_scores.com. I figure this is OK, so I add www.mygame_high_scores.com to my approved list. Later, I again check addresses and discover that my VM/XP tried to access www.mygame_steals_your_identity.com. I do some checking and discover the address is registered to someone in Kiev, Nigeria. Since this doesn't sound kosher to me, I replace the MYXP VM with one that was backed up before I installed mygame, and I remove www.playmygame.com and www.mygame_high_scores.com from my access list for MYXP.

    It should accomplish this with little overhead. Ideally, when I am not running the VM, it should not have any overhead at all. Suggestions?

    Read the article

  • POD global object initialization

    - by paercebal
    I got bitten by a bug today. The following source can be copy/pasted (and then compiled) into a main.cpp file: #include <iostream> // The point of SomeGlobalObject is for its // constructor to be launched before the main // ... struct SomeGlobalObject { SomeGlobalObject() ; } ; // ... // Which explains the global object SomeGlobalObject oSomeGlobalObject ; // A POD... I was hoping it would be constructed at // compile time when using an argument list struct MyPod { short m_short ; const char * const m_string ; } ; // declaration/Initialization of a MyPod array MyPod myArrayOfPod[] = { { 1, "Hello" }, { 2, "World" }, { 3, " !" } } ; // declaration/Initialization of an array of array of void * void * myArrayOfVoid[][2] = { { (void *)1, "Hello" }, { (void *)2, "World" }, { (void *)3, " !" } } ; // constructor of the global object... Launched BEFORE main SomeGlobalObject::SomeGlobalObject() { std::cout << "myArrayOfPod[0].m_short : " << myArrayOfPod[0].m_short << std::endl ; std::cout << "myArrayOfVoid[0][0] : " << myArrayOfVoid[0][0] << std::endl ; } // main... What else ? int main(int argc, char* argv[]) { return 0 ; }

    MyPod being a POD, I believed there would be no constructors, only initialization at compile time. Thus, the global object SomeGlobalObject would have no problem using the global array of PODs upon its construction. The problem is that in real life nothing is so simple. On Visual C++ 2008 (I did not test on other compilers), upon execution myArrayOfPod is not initialized, even if myArrayOfVoid is initialized.

    So my question is: are C++ compilers not supposed to initialize global PODs (including POD structures) at compilation time? Note that I know global variables are evil, and I know that one can't be sure of the order of creation of global variables declared in different compilation units. The problem here is really the POD C-like initialization, which seems to call a constructor (the default, compiler-generated one?). And to make everyone happy: this is on debug. On release, the global array of PODs is correctly initialized.

    Read the article

  • objective-c default init method for class?

    - by Alex
    Hello, I have two differing methods for initializing my Objective-C class: one is the default, and one takes a configuration parameter. Now, I'm pretty green when it comes to Objective-C, but I've implemented these methods and I'm wondering if there's a better (more correct / in good style) way to handle initialization than the way I have done it. Meaning, did I write these initialization functions in accordance with standards and good style? It just doesn't feel right to check for the existence of selfPtr and then return based on that. Below are my class header and implementation files. Also, if you spot anything else that is wrong or evil, please let me know. I am a C++/JavaScript developer who is learning Objective-C as a hobby and would appreciate any tips that you could offer.

    #import <Cocoa/Cocoa.h> // class for raising events and parsing returned directives @interface awesome : NSObject { // silence is golden. Actually properties are golden. Hence this emptiness. } // properties @property (retain) SBJsonParser* parser; @property (retain) NSString* eventDomain; @property (retain) NSString* appid; // constructors -(id) init; -(id) initWithAppId:(id) input; // destructor -(void) dealloc; @end

    #import "awesome.h" #import "JSON.h" @implementation awesome - (id) init { if (self = [super init]) { // if init is called directly, just pass nil to the AppId constructor variant id selfPtr = [self initWithAppId:nil]; } if (selfPtr) { return selfPtr; } else { return self; } } - (id) initWithAppId:(id) input { if (self = [super init]) { if (input == nil) { input = [[NSString alloc] initWithString:@"a369x123"]; } [self setAppid:input]; [self setEventDomain:[[NSString alloc] initWithString:@"desktop"]]; } return self; } // property synthesis @synthesize parser; @synthesize appid; @synthesize eventDomain; // destructor - (void) dealloc { self.parser = nil; self.appid = nil; self.eventDomain = nil; [super dealloc]; } @end Thanks!

    Read the article

  • Queue ExternalInterface calls to Flash Object in UpdatePanel - Needs Improvement?

    - by Laramie
    A Flash (actually Flex) object is created on an ASP.NET page within an UpdatePanel, using a modified version of the embedCallAC_FL_RunContent.js script so it can be written in dynamically. It is re-created with this script on each partial postback to that panel. There are also other UpdatePanels on the page. With some postbacks (partial and full), ExternalInterface calls such as $get('FlashObj').ExternalInterfaceFunc('arg1', 0, true); are prepared server-side and added to the page using ScriptManager.RegisterStartupScript. They're embedded in a function and stuffed into Sys.Application's load event, for example Sys.Application.add_load(funcContainingExternalInterfaceCalls).

    The problem is that because the Flash object's state may change with each partial postback, the Flash (Flex) object and/or ExternalInterface may not be ready, or may not even exist yet in the DOM, when the JavaScript-to-Flash ExternalInterface call is made. This results in an "Object doesn't support this property or method" exception. I have a working strategy: make the ExternalInterface calls immediately if Flash is ready, or else queue them until Flash announces its readiness.

    //Set when the Flash object is initialized and can accept ExternalInterface calls var flashReady = false; //Called by Flash when the object is fully initialized function setFlashReady() { flashReady = true; //Make any queued ExternalInterface calls, then dequeue while (extIntQueue.length > 0) (extIntQueue.shift())(); } var extIntQueue = []; function callExternalInterface(flashObjName, funcName, args) { //reference to the wrapped ExternalInterface call var wrapped = extWrap(flashObjName, funcName, args); //only proceed with the ExternalInterface call if the global flashReady variable has been set if (flashReady) { wrapped(); } else { //queue the function so that when setFlashReady() is next called, the function is called and the arguments are passed extIntQueue.push(wrapped); } } //bundle the ExternalInterface call and hold the variables in a closure function extWrap(flashObjName, funcName, args) { //put vars in closure return function() { var funcCall = '$get("' + flashObjName + '").' + funcName; eval(funcCall).apply(this, args); } }

    I set flashReady back to false whenever I update the UpdatePanel that contains the Flash (Flex) object: ScriptManager.RegisterClientScriptBlock(parentContainer, parentContainer.GetType(), "flashReady", "flashReady = false;", true);

    I'm pleased that I got it to work, but it feels like a hack. I am still on the learning curve with respect to concepts like closures and why "eval()" is apparently evil, so I'm wondering if I'm violating some best practice or if this code could be improved, and if so, how? Thanks.

    Read the article

  • BindException/Too many files open while using HttpClient under load

    - by Langali
    I have got 1000 dedicated Java threads where each thread polls a corresponding URL every second: public class Poller { public static Node poll(Node node) { GetMethod method = null; try { HttpClient client = new HttpClient(new SimpleHttpConnectionManager(true)); ...... } catch (IOException ex) { ex.printStackTrace(); } finally { method.releaseConnection(); } } } The threads are run every second: for (int i=0; i <1000; i++) { MyThread thread = threads.get(i); // threads is a static field if(thread.isAlive()) { // If the previous thread is still running, let it run. } else { thread.start(); } }

    The problem is that if I run the job every second, I get random exceptions like these: java.net.BindException: Address already in use INFO httpclient.HttpMethodDirector: I/O exception (java.net.BindException) caught when processing request: Address already in use INFO httpclient.HttpMethodDirector: Retrying request But if I run the job every 2 seconds or more, everything runs fine. I even tried shutting down the instance of SimpleHttpConnectionManager() using shutDown(), with no effect. If I do netstat, I see thousands of TCP connections in TIME_WAIT state, which means they have been closed and are clearing up.

    So, to limit the number of connections, I tried using a single instance of HttpClient, like this: public class MyHttpClientFactory { private static MyHttpClientFactory instance = new MyHttpClientFactory(); private MultiThreadedHttpConnectionManager connectionManager; private HttpClient client; private MyHttpClientFactory() { init(); } public static MyHttpClientFactory getInstance() { return instance; } public void init() { connectionManager = new MultiThreadedHttpConnectionManager(); HttpConnectionManagerParams managerParams = new HttpConnectionManagerParams(); managerParams.setMaxTotalConnections(1000); connectionManager.setParams(managerParams); client = new HttpClient(connectionManager); } public HttpClient getHttpClient() { if (client != null) { return client; } else { init(); return client; } } }

    However, after running for exactly 2 hours it starts throwing 'too many open files' and eventually cannot do anything at all: ERROR java.net.SocketException: Too many open files INFO httpclient.HttpMethodDirector: I/O exception (java.net.SocketException) caught when processing request: Too many open files INFO httpclient.HttpMethodDirector: Retrying request I should be able to increase the number of connections allowed and make it work, but I would just be prolonging the evil. Any idea what the best practice for using HttpClient in a situation like the above is? Btw, I am still on HttpClient 3.1.
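
    For reference, here is a minimal sketch (assuming HttpClient 3.1) of the shared-connection-manager pattern the factory above is reaching for: one client shared by all threads, an explicit per-host limit, and a poll method that always consumes the response body and releases the connection so sockets can be reused instead of piling up in TIME_WAIT. The per-host limit of 10 is an illustrative number, not a recommendation.

        import org.apache.commons.httpclient.HttpClient;
        import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
        import org.apache.commons.httpclient.methods.GetMethod;
        import org.apache.commons.httpclient.params.HttpConnectionManagerParams;

        public class SharedPoller {
            private static final MultiThreadedHttpConnectionManager MANAGER =
                    new MultiThreadedHttpConnectionManager();
            private static final HttpClient CLIENT;

            static {
                HttpConnectionManagerParams params = MANAGER.getParams();
                params.setMaxTotalConnections(1000);
                params.setDefaultMaxConnectionsPerHost(10); // illustrative limit, tune per host
                CLIENT = new HttpClient(MANAGER);
            }

            public static void poll(String url) {
                GetMethod method = new GetMethod(url);
                try {
                    CLIENT.executeMethod(method);
                    // consume the body so the connection can go back to the pool
                    method.getResponseBodyAsString();
                } catch (java.io.IOException ex) {
                    ex.printStackTrace();
                } finally {
                    method.releaseConnection(); // always return the connection
                }
            }
        }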

    Read the article

  • XML validation in Java - why does this fail?

    - by jd
    Hi, this is my first time dealing with XML, so please be patient. The code below is probably evil in a million ways (I'd be very happy to hear about all of them), but the main problem is of course that it doesn't work :-)

    public class Test { private static final String JSDL_SCHEMA_URL = "http://schemas.ggf.org/jsdl/2005/11/jsdl"; private static final String JSDL_POSIX_APPLICATION_SCHEMA_URL = "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"; public static void main(String[] args) { System.out.println(Test.createJSDLDescription("/bin/echo", "hello world")); } private static String createJSDLDescription(String execName, String args) { Document jsdlJobDefinitionDocument = getJSDLJobDefinitionDocument(); String xmlString = null; // create the elements Element jobDescription = jsdlJobDefinitionDocument.createElement("JobDescription"); Element application = jsdlJobDefinitionDocument.createElement("Application"); Element posixApplication = jsdlJobDefinitionDocument.createElementNS(JSDL_POSIX_APPLICATION_SCHEMA_URL, "POSIXApplication"); Element executable = jsdlJobDefinitionDocument.createElement("Executable"); executable.setTextContent(execName); Element argument = jsdlJobDefinitionDocument.createElement("Argument"); argument.setTextContent(args); //join them into a tree posixApplication.appendChild(executable); posixApplication.appendChild(argument); application.appendChild(posixApplication); jobDescription.appendChild(application); jsdlJobDefinitionDocument.getDocumentElement().appendChild(jobDescription); DOMSource source = new DOMSource(jsdlJobDefinitionDocument); validateXML(source); try { Transformer transformer = TransformerFactory.newInstance().newTransformer(); transformer.setOutputProperty(OutputKeys.INDENT, "yes"); StreamResult result = new StreamResult(new StringWriter()); transformer.transform(source, result); xmlString = result.getWriter().toString(); } catch (Exception e) { e.printStackTrace(); } return xmlString; } private static Document getJSDLJobDefinitionDocument() { DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); DocumentBuilder builder = null; try { builder = factory.newDocumentBuilder(); } catch (Exception e) { e.printStackTrace(); } DOMImplementation domImpl = builder.getDOMImplementation(); Document theDocument = domImpl.createDocument(JSDL_SCHEMA_URL, "JobDefinition", null); return theDocument; } private static void validateXML(DOMSource source) { try { URL schemaFile = new URL(JSDL_SCHEMA_URL); SchemaFactory schemaFactory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI); Schema schema = schemaFactory.newSchema(schemaFile); Validator validator = schema.newValidator(); DOMResult result = new DOMResult(); validator.validate(source, result); System.out.println("is valid"); } catch (Exception e) { e.printStackTrace(); } } }

    It spits out a somewhat odd message: org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'JobDescription'. One of '{"http://schemas.ggf.org/jsdl/2005/11/jsdl":JobDescription}' is expected. Where am I going wrong here? Thanks a lot.
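
    For what it's worth, that error message is the classic symptom of elements created without a namespace: the validator finds a JobDescription in no namespace, while the schema expects one in the JSDL namespace (that is what the '{"...jsdl":JobDescription}' in the message means). A plausible fix, sketched under that assumption, is to create every JSDL element with createElementNS, mirroring what the code already does for POSIXApplication:

        // Sketch of the likely fix: put the JSDL elements in the JSDL namespace,
        // so their qualified names match what the schema declares.
        Element jobDescription =
                jsdlJobDefinitionDocument.createElementNS(JSDL_SCHEMA_URL, "JobDescription");
        Element application =
                jsdlJobDefinitionDocument.createElementNS(JSDL_SCHEMA_URL, "Application");
        Element executable =
                jsdlJobDefinitionDocument.createElementNS(JSDL_SCHEMA_URL, "Executable");
        Element argument =
                jsdlJobDefinitionDocument.createElementNS(JSDL_SCHEMA_URL, "Argument");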

    Read the article

  • A shortest path problem with superheroes and intergalactic journeys

    - by Dman
    You are a super-hero in the year 2222 and you are faced with a great challenge: starting from your home planet Ilop, you must try to reach Acinhet, or else your planet will be destroyed by evil little green monsters. To do this you are given a map of the universe: there are N planets and M inter-planetary connections (bidirectional) that bind these planets. Each connection requires a certain time and a certain amount of fuel in order for you to cover it from one planet to another. The total time spent going from one planet to another is obtained by multiplying the times taken to cover each connection between all the planets you go through. There are some "key planets" that allow you to refuel if you arrive on them. A "key planet" is a planet with the property that if it disappeared, the road between at least two planets would be lost. (In the example posted below with the input/output files, such a "key planet" is 2, because without it the road to 7 would be lost.) When you start your mission you are given the possibility of choosing between K ships, each with its own maximum fuel capacity. The goal is to find the path with the SHORTEST total time, and also to choose the ship with the minimum fuel capacity that can cover that shortest path (this means that if more than one ship can cover the shortest path, you choose the one with the minimum fuel capacity). Because the minimum time can be a rather large number (over long long int), you are asked to provide only the last 6 digits of the number.

    For a better understanding of the task, here is an example of input/output files: INPUT: mission.in 7 8 6 1 4 6 5 9 8 7 10 1 2 7 8 1 4 14 9 1 5 3 1 2 3 1 2 2 7 7 1 3 4 2 2 4 6 4 1 5 6 3 7 On the first line (in order): N M K. On the second line: the numbers of the starting planet and the finishing planet. On the third line: K numbers that represent the capacities of the ships you can choose from. Then you have M lines, all with the same structure: Xi Yi Ti Fi, which means that there is a connection between Xi and Yi and you can cover the distance from Xi to Yi in time Ti with a fuel consumption of Fi. OUTPUT: mission.out 000014 8 1 2 3 4 On the first line: the minimum time and the fuel consumption. On the second line: the path.

    Restrictions: 2 ≤ N ≤ 1 000, 1 ≤ M ≤ 30 000, 1 ≤ K ≤ 10 000. Any suggestions or ideas of how this problem might be solved would be most welcome.
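
    One plausible line of attack, sketched below in Java under explicit assumptions: since the total time is a product of edge times, Dijkstra's algorithm can be run on the sum of log(Ti), because logarithms turn products into sums while preserving order; the path is then recovered through parent links, and the product of times is taken modulo 1 000 000 along that path for the six-digit output. This sketch deliberately ignores the fuel capacities, the choice among the K ships, and refuelling at key planets; a full solution would add a feasibility check per candidate ship capacity (e.g., whether the chosen path can be covered between refuelling points).

        import java.util.*;

        class MissionSketch {
            // adj.get(u) holds edges as int[]{v, time}; returns the parent[] of a
            // minimum-product-of-times shortest-path tree rooted at src.
            static int[] dijkstra(List<List<int[]>> adj, int src) {
                int n = adj.size();
                double[] dist = new double[n];
                Arrays.fill(dist, Double.POSITIVE_INFINITY);
                int[] parent = new int[n];
                Arrays.fill(parent, -1);
                dist[src] = 0.0; // log(1): the empty product of times
                PriorityQueue<double[]> pq =
                        new PriorityQueue<>(Comparator.comparingDouble((double[] a) -> a[0]));
                pq.add(new double[]{0.0, src});
                while (!pq.isEmpty()) {
                    double[] top = pq.poll();
                    int u = (int) top[1];
                    if (top[0] > dist[u]) continue; // stale queue entry
                    for (int[] e : adj.get(u)) {
                        double cand = dist[u] + Math.log(e[1]); // multiplying times == adding logs
                        if (cand < dist[e[0]]) {
                            dist[e[0]] = cand;
                            parent[e[0]] = u;
                            pq.add(new double[]{cand, e[0]});
                        }
                    }
                }
                return parent;
            }

            // Last six digits of the product of edge times along the recovered path;
            // time[u][v] is the edge time (a matrix is fine for N <= 1000).
            static long lastSixDigits(int[] parent, long[][] time, int dst) {
                long product = 1;
                for (int v = dst; parent[v] != -1; v = parent[v]) {
                    product = product * time[parent[v]][v] % 1_000_000;
                }
                return product;
            }
        }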

    Read the article

  • How safe is my safe rethrow?

    - by gustafc
    (Late edit: This question will hopefully be obsolete when Java 7 comes, because of the "final rethrow" feature which seems like it will be added.) Quite often, I find myself in situations looking like this: do some initialization try { do some work } catch any exception { undo initialization rethrow exception } In C# you can do it like this: InitializeStuff(); try { DoSomeWork(); } catch { UndoInitialize(); throw; } For Java, there's no good substitute, and since the proposal for improved exception handling was cut from Java 7, it looks like it'll take at best several years until we get something like it. Thus, I decided to roll my own: (Edit: Half a year later, final rethrow is back, or so it seems.)

    public final class Rethrow { private Rethrow() { throw new AssertionError("uninstantiable"); } /** Rethrows t if it is an unchecked exception. */ public static void unchecked(Throwable t) { if (t instanceof Error) throw (Error) t; if (t instanceof RuntimeException) throw (RuntimeException) t; } /** Rethrows t if it is an unchecked exception or an instance of E. */ public static <E extends Exception> void instanceOrUnchecked( Class<E> exceptionClass, Throwable t) throws E, Error, RuntimeException { Rethrow.unchecked(t); if (exceptionClass.isInstance(t)) throw exceptionClass.cast(t); } }

    Typical usage: public void doStuff() throws SomeException { initializeStuff(); try { doSomeWork(); } catch (Throwable t) { undoInitialize(); Rethrow.instanceOrUnchecked(SomeException.class, t); // We shouldn't get past the above line as only unchecked or // SomeException exceptions are thrown in the try block, but // we don't want to risk swallowing an error, so: throw new SomeException("Unexpected exception", t); } } private void doSomeWork() throws SomeException { ... }

    It's a bit wordy, catching Throwable is usually frowned upon, I'm not really happy about using reflection just to rethrow an exception, and I always feel a bit uneasy writing "this will not happen" comments, but in practice it works well (or seems to, at least). What I wonder is: Are there any flaws in my rethrow helper methods? Any corner cases I've missed? (I know that the Throwable may have been caused by something so severe that my undoInitialize will fail, but that's OK.) Has someone already invented this? I looked at Commons Lang's ExceptionUtils, but that does other things.

    Edit: finally is not the droid I'm looking for; I'm only interested in doing stuff when an exception is thrown. Yes, I know catching Throwable is a big no-no, but I think it's the lesser evil here compared to having three catch clauses (for Error, RuntimeException and SomeException, respectively) with identical code. Note that I'm not trying to suppress any errors - the idea is that any exceptions thrown in the try block will continue to bubble up through the call stack as soon as I've rewound a few things.

    Read the article

  • .NET and C# Exceptions. What is it reasonable to catch?

    - by djna
    Disclaimer: I'm from a Java background. I don't do much C#. There's a great deal of transfer between the two worlds, but of course there are differences, and one is in the way exceptions tend to be thought about. I recently answered a C# question suggesting that under some circumstances it's reasonable to do this: try { some work } catch (Exception e) { commonExceptionHandler(); } (The reasons why are immaterial.) I got a response that I don't quite understand: "until .NET 4.0, it's very bad to catch Exception. It means you catch various low-level fatal errors and so disguise bugs. It also means that in the event of some kind of corruption that triggers such an exception, any open finally blocks on the stack will be executed, so even if the callExceptionReporter function tries to log and quit, it may not even get to that point (the finally blocks may throw again, or cause more corruption, or delete something important from the disk or database)."

    Maybe I'm more confused than I realise, but I don't agree with some of that. Would other folks please comment? I understand that there are many low-level exceptions we don't want to swallow. My commonExceptionHandler() function could reasonably rethrow those. This seems consistent with this answer to a related question, which does say: "Depending on your context it can be acceptable to use catch(...), providing the exception is re-thrown." So I conclude that using catch (Exception) is not always evil; silently swallowing certain exceptions is.

    Next, the phrase "until .NET 4.0, it's very bad to catch Exception": what changes in .NET 4? Is this a reference to AggregateException, which may give us some new things to do with exceptions we catch, but which I don't think changes the fundamental "don't swallow" rule?

    The next phrase really bothers me. Can this be right? "It also means that in the event of some kind of corruption that triggers such an exception, any open finally blocks on the stack will be executed (the finally blocks may throw again, or cause more corruption, or delete something important from the disk or database)." My understanding is that if some low-level code had lowLevelMethod() { try { lowestLevelMethod(); } finally { some really important stuff } } and in my code I write try { lowLevelMethod(); } catch (Exception e) { exception handling and maybe rethrowing } then whether or not I catch Exception has no effect whatsoever on the execution of the finally block. By the time we leave lowLevelMethod(), the finally has already run. If the finally is going to do any of the bad things, such as corrupt my disk, then it will do so; my catching the Exception makes no difference. If the exception reaches my catch block, I need to do the right thing, but I can't be the cause of mis-executing finally blocks.
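
    Since the poster comes from Java, a small Java analogue may make the timing argument concrete (the assumption here is that the ordering semantics match the C# case under discussion, which holds for ordinary exception propagation): the finally block in the low-level method runs while the exception propagates, before the caller's catch clause ever sees it, whether or not the caller catches Exception.

        public class FinallyOrder {
            static void lowestLevelMethod() {
                throw new RuntimeException("boom");
            }

            static void lowLevelMethod() {
                try {
                    lowestLevelMethod();
                } finally {
                    // Runs during propagation, before any caller's catch clause.
                    System.out.println("finally in lowLevelMethod");
                }
            }

            public static void main(String[] args) {
                try {
                    lowLevelMethod();
                } catch (Exception e) {
                    // Always printed second, no matter what we catch here.
                    System.out.println("caller's catch");
                }
            }
        }

    Running this prints the finally line first and the caller's catch line second, which is exactly the ordering the poster describes.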

    Read the article

  • Commitment to Zend Framework - any arguments against?

    - by Pekka
    I am refurbishing a big CMS that I have been working on for quite a number of years now. The product itself is great, but some components, the database and translation classes for example, need urgent replacing: partly self-made as far back as 2002, grown into a bit of a chaos over time, and they might have trouble surviving a security audit. So, I've been looking closely at a number of frameworks (or, more exactly, component libraries, as I do not intend to change the basic structure of the CMS) and ended up liking Zend Framework the best. It offers a solid MVC model but doesn't force you into it, and it offers a lot of professional components that have obviously received a lot of attention. (Did you know there are multiple plurals in Russian, and you can't translate them using a simple ($number == 0) or ($number > 1) switch? I didn't, but Zend_Translate can handle it. Just to illustrate the level of thoroughness the library seems to have been built with.)

    I am now literally at the point of no return, starting to replace key components of the system with the Zend-made ones. I'm not really having second thoughts - and I am surely not looking to incite a flame war - but before going onward, I would like to step back for a moment and ask whether there is anything speaking against tying a big system closely to Zend Framework. What I like about Zend:

    - As far as I can see, very high quality code
    - Extremely well documented, at least regarding introductions to how things work (I haven't had to use the detailed API documentation yet)
    - Backed by a company that has an interest in seeing the framework prosper
    - Well received in the community, with a considerable user base
    - Employs coding standards I like
    - Comes with a full set of unit tests
    - Feels to me like the right choice to make - or at least one of the right choices - in terms of modern, professional PHP development

    I have been thinking about encapsulating and abstracting ZF's functionality in my own classes to be able to switch frameworks more easily, but have come to the conclusion that this would not be a good idea, because:

    - it would be an unnecessary level of abstraction
    - it could cost performance
    - the big advantage of using a framework - the existence of a developer base that is familiar with its components - would partly be cancelled out

    Therefore, the commitment to ZF would be a deep one. Thus my question: is there anything substantial speaking against committing to the Zend Framework? Do you have insider knowledge of plans of Zend Inc. to go evil in 2011 and make it a closed-source library? Is Zend Inc. run by vampires? Are there conceptual flaws in the code base you start to notice when you've transitioned all your projects to it? Is the appearance of quality code an illusion? Does the code look good, but run terribly slowly on anything below my quad-core workstation?

    Read the article

  • Do you use an exception class in your Perl programs? Why or why not?

    - by daotoad
    I've got a bunch of questions about how people use exceptions in Perl. I've included some background notes on exceptions; skip these if you want, but please take a moment to read the questions and respond to them. Thanks.

    Background on Perl Exceptions

    Perl has a very basic built-in exception system that provides a springboard for more sophisticated usage. For example, die "I ate a bug.\n"; throws an exception with a string assigned to $@. You can also throw an object instead of a string: die BadBug->new('I ate a bug.'); You can even install a signal handler to catch the SIGDIE pseudo-signal. Here's a handler that rethrows exceptions as objects if they aren't already: $SIG{__DIE__} = sub { my $e = shift; $e = ExceptionObject->new( $e ) unless blessed $e; die $e; }; This pattern is used in a number of CPAN modules. But perlvar says: "Due to an implementation glitch, the $SIG{__DIE__} hook is called even inside an eval(). Do not use this to rewrite a pending exception in $@, or as a bizarre substitute for overriding CORE::GLOBAL::die(). This strange action at a distance may be fixed in a future release so that $SIG{__DIE__} is only called if your program is about to exit, as was the original intent. Any other use is deprecated." So now I wonder if objectifying exceptions in sigdie is evil.

    The Questions

    - Do you use exception objects? If so, which one, and why? If not, why not?
    - If you don't use exception objects, what would entice you to use them?
    - If you do use exception objects, what do you hate about them, and what could be better?
    - Is objectifying exceptions in the DIE handler a bad idea?
    - Where should I objectify my exceptions? In my eval{} wrapper? In a sigdie handler?
    - Are there any papers, articles or other resources on exceptions in general and in Perl that you find useful or enlightening?

    Read the article
