Search Results

Search found 11888 results on 476 pages for 'hero vs zero'.


  • Progressive Enhancement vs. Single Page Apps

    - by SeanPlusPlus
    I just got back from a conference in Boston called An Event Apart. A really popular theme amongst the speakers was the idea of progressive enhancement - a site's content should go in the HTML, and JavaScript should only be used to enhance behavior. The arguments that the speakers gave for progressive enhancement were very compelling. Not only is it a solid pattern for supporting older browsers and devices on low-bandwidth networks, but HTML fails much more gracefully than JavaScript (i.e. markup that is not supported is simply ignored, whereas if the browser throws an exception while executing your script, you are hosed). Jeremy Keith gave a particularly insightful talk about this. But what about single-page web apps built with frameworks like Backbone and Angular? The whole design behind these frameworks seems to push the developer toward moving content out of the HTML and into something like a JSON API. I cannot seem to reconcile these two design patterns: progressive enhancement vs. single-page web apps. Are there instances when one is better than the other? Or are they not even antagonistic technologies, and am I missing something here in my mental model?

    Read the article

  • xVelocity engines compared: VertiPaq vs ColumnStore #ssas #vertipaq #xvelocity #sql #tabular

    - by Marco Russo (SQLBI)
    During the last months, Alberto and I worked on several projects using Analysis Services Tabular and we had to face real-world issues, such as complex queries, large data volumes, frequent data updates and so on. Sometimes we faced the challenge of comparing Tabular performance with SQL Server. It seemed nonsensical, because even if the same core xVelocity technology is implemented in both products (SQL Server 2012 uses ColumnStore indexes, whereas Analysis Services 2012 uses VertiPaq), we initially assumed that the in-memory engine used by Analysis Services, being better optimized, would always beat SQL Server. However, we discovered several important things: Processing time might be different, and having data on SQL Server could make ColumnStore way faster for processing. Partitioning in SQL Server might be much more effective for query performance than in Analysis Services. A single query can scale easily across more processors on SQL Server, whereas in Analysis Services the formula engine is single-threaded and could be a bottleneck for certain queries. In the case of a large workload with many concurrent users, the storage engine cache in Analysis Services could be a big advantage over SQL Server, especially for scalability. As you can see, these considerations are not always obvious and you might be tempted to make other assumptions based on this information. Well, don't do that. Before anything else, read the whitepaper VertiPaq vs ColumnStore Comparison written by Alberto Ferrari. Then, measure your workload. Finally, draw some conclusions. But don't make too many assumptions. You might be wrong, as we were at the beginning of this journey.

    Read the article

  • SQL SERVER – Difference between COUNT(DISTINCT) vs COUNT(ALL)

    - by pinaldave
    This blog post is written in response to the T-SQL Tuesday hosted by Jes Schultz Borland. Earlier today, I was presenting a 45-minute session at the Community College about "The Beginning SQL Server Database". One of the students asked me the following question: what is the difference between COUNT(DISTINCT) and COUNT(ALL)? I found this question from the student very interesting. He seems to have read the documentation (Books Online) and was then asking me this question. I always carry a laptop with SQL Server installed. I quickly opened it and ran the following script. After looking at the result, I think it was clear to everybody. Here is the script: SELECT COUNT([Title]) Value FROM [AdventureWorks].[Person].[Contact] GO SELECT COUNT(ALL [Title]) ALLValue FROM [AdventureWorks].[Person].[Contact] GO SELECT COUNT(DISTINCT [Title]) DistinctValue FROM [AdventureWorks].[Person].[Contact] GO The above script gives the following results. You can clearly see from the result set that COUNT(ALL ColumnName) is the same as COUNT(ColumnName). The reality is that ALL is actually the default option and does not need to be specified. The ALL keyword includes all the non-NULL values. I know this is very simple and maybe it does not change how we work; however, looking at it from every angle, I really enjoyed the question. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology
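    For readers who think in code rather than result sets, here is a minimal sketch of the same counting semantics in Python (my illustration, not part of the original post; the titles list is a hypothetical stand-in for the [Title] column, with None playing the role of NULL):

        # Hypothetical data standing in for the [Title] column; None plays the role of NULL.
        titles = ["Mr.", "Ms.", None, "Mr.", "Dr.", None]

        count_all = sum(1 for t in titles if t is not None)         # COUNT([Title]) == COUNT(ALL [Title])
        count_distinct = len({t for t in titles if t is not None})  # COUNT(DISTINCT [Title])

        print(count_all)       # 4 -> every non-NULL value is counted
        print(count_distinct)  # 3 -> each distinct non-NULL value is counted once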

    Read the article

  • SQL SERVER – ORDER BY ColumnName vs ORDER BY ColumnNumber

    - by pinaldave
    I strongly favor ORDER BY ColumnName. I read a blog post where the blogger compared the performance of the two SELECT statements and came to the conclusion that there is no harm in using ColumnNumber. Let us first examine the point that there is no performance difference. Run the following two scripts together: USE AdventureWorks GO -- ColumnName (Recommended) SELECT * FROM HumanResources.Department ORDER BY GroupName, Name GO -- ColumnNumber (Strongly Not Recommended) SELECT * FROM HumanResources.Department ORDER BY 3,2 GO If you look at the results and the execution plans, you will see that both queries take the same amount of time. However, that was not the point of this blog post. It is not good enough to stop here. We need to understand the advantages and disadvantages of both methods. Case 1: When not using * and the columns are re-ordered: USE AdventureWorks GO -- ColumnName (Recommended) SELECT GroupName, Name, ModifiedDate, DepartmentID FROM HumanResources.Department ORDER BY GroupName, Name GO -- ColumnNumber (Strongly Not Recommended) SELECT GroupName, Name, ModifiedDate, DepartmentID FROM HumanResources.Department ORDER BY 3,2 GO Case 2: When someone changes the schema of the table, affecting column order. I will let you recreate the example for this one. If the schema on your development server differs from the production server and you use ColumnNumber, you will get different results on the production server. Summary: When you develop the query it may not be an issue, but as time passes and new columns are added to the SELECT statement or the original table is re-ordered, a query that uses ColumnNumber may start giving you unexpected results and an incorrect ORDER BY. Note that the choice between ORDER BY ColumnName and ORDER BY ColumnNumber should not be made based on performance but on usability and scalability. It is always recommended to use a proper ORDER BY clause with ColumnName to avoid any confusion. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Style bits vs. Separate bool's

    - by peterchen
    My main platform (WinAPI) still heavily uses bits for control styles etc. (example). When introducing custom controls, I keep wondering whether to follow that style or rather use individual bools. Let's pit them against each other:

        enum EMyCtrlStyles { mcsUseFileIcon = 1, mcsTruncateFileName = 2, mcsUseShellContextMenu = 4, };
        void SetStyle(DWORD mcsStyle);
        void ModifyStyle(DWORD mcsRemove, DWORD mcsAdd);
        DWORD GetStyle() const;
        ...
        ctrl.SetStyle(mcsUseFileIcon | mcsUseShellContextMenu);

    vs.

        CMyCtrl & SetUseFileIcon(bool enable = true);
        bool GetUseFileIcon() const;
        CMyCtrl & SetTruncateFileName(bool enable = true);
        bool GetTruncateFileName() const;
        CMyCtrl & SetUseShellContextMenu(bool enable = true);
        bool GetUseShellContextMenu() const;
        ctrl.SetUseFileIcon().SetUseShellContextMenu();

    As I see it, pro style bits: consistent with the platform; less library code (without gaining complexity); fewer places to modify when adding a new style; less caller code (without losing notable readability); easier to use in some scenarios (e.g. remembering / transferring settings); the binary API remains stable if new style bits are introduced. Now, the first and the last are minor in most cases. Pro individual booleans: IntelliSense and refactoring tools reduce the "less typing" effort; single-purpose entities; more literate code (as in "flows more like a sentence"); no change of paradigm for non-bool properties. These sound more modern, but they are also "soft" advantages. I must admit the "platform consistency" is much more enticing than I could justify, and the reduction in code without losing much quality is a nice bonus. 1. What do you prefer? Subjectively, for writing the library, or for writing client code? 2. Any (semi-)objective statements, studies, etc.?

    Read the article

  • 2 year degree plus experience vs 4 year degree

    - by CenterOrbit
    Alright, I have searched around a bit on this site and found two somewhat similar questions: Computer Science Programming Certificate vs. Computer Science Degree? Is it possible/likely to be paid fairly without a college degree? But these do not provide an answer specifically to what I am seeking. I have my 2-year A.A.S. degree in computer programming, along with a networking certificate from a technical college. I have also been working at a small educational game development company for 3 years now in various positions, steadily moving up, and am now a lead programmer on a few projects. Some of the more senior programmers I work with claim that no matter how much experience I develop, it still will not mean as much as a 4-year degree. Their argument is that most employers will look over my resume because of the common '4 yr' minimum requirement. I have also heard people state (not as many, though) that experience is everything and that an employer would rather have someone who has worked in the field than a rookie fresh out of college. I have heard both sides of this argument, but am looking for a general consensus, or more arguments from both sides, from the people who have been there or are there.

    Read the article

  • Presenting at VS Live! Orlando in December

    - by Steve Michelotti
    I’ll be presenting at VS Live!, December 10-14 in Orlando, FL, where I’ll be presenting on Azure Web Sites. This is the session abstract: Azure Web Sites brings a whole new level of power and simplicity to cloud computing. This demo-heavy session will show numerous features that allow you to deploy your site in a matter of seconds. Whether you are building a completely custom app or deploying from one of the numerous templates provided (such as WordPress), you’ll be up and running in no time. Want to use Node.js or PHP and deploy from Git? No problem! Azure Web Sites gives you the power of elastic scaling while still providing streamlined development and an effortless deployment experience. This presentation will also cover features including monitoring, custom domains, working with SQL databases, and more! SPECIAL OFFER: As a speaker, I can extend $500 savings on the 5-day package. Register here: http://bit.ly/VOSPK19Reg and use code VOSPK19. The great part about Visual Studio Live!: four events in one! This year, the event will be co-located with SQL Server Live!, SharePoint Live!, and Cloud & Virtualization Live!. You can customize your conference agenda and attend ANY sessions from all four events. Register now: http://bit.ly/VOSPK19Reg

    Read the article

  • OpenGL CPU vs. GPU

    - by Nitrex88
    So I've always been under the impression that doing work on the GPU is always faster than on the CPU. Because of this, in OpenGL, I usually try to do intensive tasks in shaders so they get the speed boost from the GPU. However, now I'm starting to realize that some things simply work better on the CPU and actually perform worse on the GPU (particularly when a geometry shader is involved). For example, in a recent project I did involving procedurally generated terrain, I tried passing a grid of single triangles into a geometry shader, and tessellated each of these triangles into quads with 400 vertices whose height was determined by a noise function. This worked fine, and looked great, but easily maxed out the GPU with only 25 base triangles and caused a very slow framerate. I then discovered that tessellating on the CPU instead, and setting the height (using the noise function) in the vertex shader, was actually faster! This prompted me to question the benefits of using the GPU as much as possible... So, I was wondering if someone could describe the general pros and cons of using the GPU vs. the CPU for intensive graphics tasks. I know this mainly comes down to what you're trying to achieve, so if necessary, use the above scenario to discuss why the "CPU + vertex shader" was actually faster than doing everything in the geometry shader on the GPU. It's possible my hardware (newest MacBook Pro) isn't optimized well for the geometry shader (thus causing the slow framerate). Also, I read that the vertex shader is very good with parallelism, and would love a quick explanation of how this may have played a role in speeding up my procedural terrain. Any info/advice about CPU/GPU/shaders would be awesome!

    Read the article

  • Automatic TRIM vs. manual TRIM

    - by Eike Cochu
    I am currently trying to find out how to trim with my new TP and was wondering about the difference between manual and online trimming. Here is my setup: ThinkPad T430s with a Samsung 830 SSD, 128GB, and Xubuntu 12.10. Here are some outputs to check if trim will work on my system (got these from here: http://wiki.ubuntuusers.de/SSD/TRIM):

        root@eike-tp:~# sudo hdparm -I /dev/sda | grep -i TRIM
        * Data Set Management TRIM supported (limit 8 blocks)

    First, I tried the online trimming: How to enable TRIM? My fstab with discard inserted:

        UUID=d6c49c17-a4f1-466c-9f7e-896c20db3bba / ext4 discard,noatime,errors=remount-ro 0 1
        # swap was on /dev/sda5 during installation
        UUID=a0322f5f-c6c1-4896-863f-668f0638d8cf none swap sw 0 0
        tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0

    I tried to test whether it works (but I don't get any zeroes when I try it with /dev/sda), and found out that this method is only possible with SSD type 2, and I seem to have type 3. So I don't know if it works or not. The Ubuntu wiki (first link) recommends manual trimming, so I set up a daily cron job instead of discard:

        #!/bin/sh
        LOG=/var/log/batched_discard.log
        echo "*** $(date -R) ***" >> $LOG
        fstrim -v / >> $LOG

    The wiki article suggests weekly or daily. Now to my questions: How often does the automated trim run? How often is recommended? Online vs. manual trimming? Thank you for your help

    Read the article

  • Fixed-Function vs Shaders: Which for beginner?

    - by Rob Hays
    I'm currently going to college for computer science. Although I do plan on utilizing an existing engine at some point to create a small game, my aim right now is toward learning the fundamentals: namely, 3D programming. I've already done some research regarding the choice between DirectX and OpenGL, and the general sentiment that came out of that was that whether you choose OpenGL or DirectX as your training-wheels platform, a lot of the knowledge is transferable to the other platform. Therefore, since OpenGL is supported by more systems (probably a silly reason to choose what to learn), I decided that I'm going to learn OpenGL first. After I made this decision to learn OpenGL, I did some more research and found out about a dichotomy that I was somehow unaware of all this time: fixed-function OpenGL vs. modern programmable shader-based OpenGL. At first, I thought it was an obvious choice that I should choose to learn shader-based OpenGL since that's what's most commonly used in the industry today. However, I then stumbled upon the very popular Learning Modern 3D Graphics Programming by Jason L. McKesson, located here: http://www.arcsynthesis.org/gltut/ I read through the introductory bits, and in the "About This Book" section, the author states: "First, much of what is learned with this approach must be inevitably abandoned when the user encounters a graphics problem that must be solved with programmability. Programmability wipes out almost all of the fixed function pipeline, so the knowledge does not easily transfer." Yet at the same time he also makes the case that fixed functionality provides an easier, more immediate learning curve for beginners by stating: "It is generally considered easiest to teach neophyte graphics programmers using the fixed function pipeline." Naturally, you can see why I might be conflicted about which paradigm to learn: do I spend a lot of time learning (and then later unlearning) the ways of fixed functionality, or do I choose to start out with shaders? My primary concern is that modern programmable shaders somehow require the programmer to already understand the fixed-function pipeline, but I doubt that's the case. TL;DR == As an aspiring game graphics programmer, is it in my best interest to learn 3D programming through fixed functionality or modern shader-based programming?

    Read the article

  • TCO Comparison: Oracle Exadata vs IBM P-Series

    - by Javier Puerta
    Cost Comparison for Business Decision-makers: Oracle Exadata Database Machine vs. IBM Power Systems - How to Weigh a Purchase Decision (October 2012). Download the full report here. In this research-based white paper conducted at the request of Oracle, The FactPoint Group compares the cost of ownership of the Oracle Exadata engineered system to a traditional build-your-own (BYO) solution, in this case an IBM Power 770 (P770) with SAN storage. The IBM P770 was chosen given it is IBM's current most popular model, based on FactPoint primary and secondary research and IBM claims, and because at least one of the interviewed customers had specifically migrated from a P770 to Exadata, affording us a more specific data point for comparison. This research found that Oracle Exadata: can be deployed more quickly and easily, requiring 59% fewer man-hours than a traditional IBM Power Systems solution; delivers dramatically higher performance, typically up to 12X improvement, as described by customers, over their prior solution; requires 40% fewer systems administrator hours to maintain and operate annually, including quicker support calls because of less finger-pointing and faster service with a single vendor; will become even easier to operate over time as users become more proficient and organize around the benefits of integrated infrastructure; and supplies a highly available, highly scalable and robust solution that results in reserve capacity that makes Exadata easier for IT to operate, because IT administrators can manage proactively, not reactively. Overall, Exadata operations and maintenance keep IT administrators from "living on the edge." And it's pre-engineered for long-term growth. Finally, compared to IBM Power Systems hardware, Exadata is a bargain from a total cost of ownership perspective: over three years, the IBM hardware running Oracle Database cost 31% more in TCO than Exadata.

    Read the article

  • Chrome Mobile Monthly: Responsive vs Separate Sites

    Chrome Mobile Monthly: Responsive vs Separate Sites. Join us on Wednesday, October 31st at 9am PT for our Monthly Mobile Web Hangout! This month +Brad Frost will be joining us to talk about responsive design versus separate mobile sites. And in keeping with the season, it's a special Presidential Smackdown Edition. The US presidential race is in full swing, and the candidates are intensely debating the country's hot-button issues. The web design world is entrenched in our own debate about how to address the mobile web: should we create a separate mobile site or create a responsive experience instead? It just so happens that the two US presidential candidates have chosen different mobile web strategies for their official websites. In the red corner is Republican candidate Mitt Romney's dedicated mobile site, while in the blue corner is incumbent president Barack Obama's responsive website. Which will prevail? Sit back, crack open a cold one, and watch the battle unfold as Brad dissects the candidates' sites to uncover best practices and common mobile web pitfalls.

    Read the article

  • Grading an algorithm: Readability vs. Compactness

    - by amiregelz
    Consider the following question in a test / interview: Implement the strcpy() function in C: void strcpy(char *destination, char *source); The strcpy function copies the C string pointed to by source into the array pointed to by destination, including the terminating null character. Assume that the size of the array pointed to by destination is long enough to contain the same C string as source, and does not overlap in memory with source. Say you were the tester: how would you grade the following answers to this question?

        1) void strcpy(char *destination, char *source)
           {
               while (*source != '\0')
               {
                   *destination = *source;
                   source++;
                   destination++;
               }
               *destination = *source;
           }

        2) void strcpy(char *destination, char *source)
           {
               while (*(destination++) = *(source++))
                   ;
           }

    The first implementation is straightforward - it is readable and programmer-friendly. The second implementation is shorter (one line of code) but less programmer-friendly; it's not so easy to understand how this code works, and if you're not familiar with the operator precedence involved, that's a problem. I'm wondering if the second answer would show more complexity and more advanced thinking in the tester's eyes, even though both algorithms behave the same, and although code readability is considered to be more important than code compactness. It seems to me that since making an algorithm this compact is more difficult, it shows a higher level of thinking as an answer in a test. However, it is also possible that a tester would consider the second answer not good because it's not readable. I would also like to mention that this is not specific to this example, but general to code readability vs. compactness when implementing an algorithm, specifically in tests / interviews.
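    The same tradeoff shows up in any language. As a hypothetical illustration (mine, not the poster's), here are a readable and a compact version of the same small task in Python:

        # Hypothetical example of readable vs. compact style (not from the original question).

        def even_squares_readable(numbers):
            """Step-by-step version: easy to follow, more lines."""
            result = []
            for n in numbers:
                if n % 2 == 0:
                    result.append(n * n)
            return result

        def even_squares_compact(numbers):
            """One-liner: shorter, but the reader must unpack it mentally."""
            return [n * n for n in numbers if n % 2 == 0]

        assert even_squares_readable([1, 2, 3, 4]) == even_squares_compact([1, 2, 3, 4]) == [4, 16]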

    Read the article

  • A Console Application or Windows Application in VS 2010 for Sharepoint 2010 : A common Error

    - by Gino Abraham
    I have seen many SharePoint newbies cracking their heads trying to create a Console/Windows application in VS 2010 and make it talk to a SharePoint 2010 server. I had the same problem when I started with SharePoint in the beginning. It is important for you to acknowledge that SharePoint 2010 is based on .NET Framework version 3.5 and not version 4.0. In VS 2010, when you create a Console/Windows application, make sure you select .NET Framework 3.5 in the New Project dialog window. If you missed it while creating the new project, go to the Application tab of the project properties and verify that .NET Framework 3.5 is selected as the Target Framework. Now that you have selected the correct framework, will it work? Nope - if the application is configured as x86, it will not work. SharePoint is a 64-bit application, and when you create a Windows application to talk to SharePoint it should also be 64-bit. Go to Configuration Manager and select x64. If x64 is not available, select <New…> and, in the New Solution Platform dialog box, select x64 as the new platform, copying settings from x86 and checking the Create new project platforms check box. This is not applicable if you are making a console application that talks to SharePoint with the Client Object Model.

    Read the article

  • Docker vs ESXi for Startup Projects - Deploying Code for Dev Testing

    - by JasonG
    Why hello there, little programmer dude! I have a question for you and all of your experience and knowledge. I have an ESXi whitebox that I built, which is an 8 dude that sits in the corner. I made a mistake recently: I took the key that had ESXi on it, formatted it, and used it for something else. No big deal, because the last project I worked on had stalled out. I'm about to pick up another project and now I need to spin up a whole bunch of stuff for CI, QA + DB, a ticket tracker, wikis, etc. I've been hearing a lot about Docker recently, and as this is just a consumer-grade machine, I'm wondering if it may make more sense for me to use Docker on CoreOS and then put everything there - Bamboo or Hudson, JIRA, Confluence, Postgres for the tools to use, then a QA environment. I can't really seem to find any documents that directly compare traditional VM infrastructure with Docker solutions, and I'm wondering if it is fair to compare. Is there any reason why CoreOS with containers would be a strictly worse solution? Or do you have any insight into why I may want to stick with ESXi? I've looked on multiple occasions and can't find a good reason not to. I'm not going to run a production environment on the server, so I don't need HA when updating security patches or the OS, for example, where ESXi would allow me to restart one VM at a time. I can just shut the thing down and bring it back up if I need a reboot, no problem. So what's up with this container stuff? Is it a fair replacement for ESXi? I'm guessing the Atlassian products would run much better and my RAM would go a lot farther using Docker. Probably the CPU would run much cooler too, and my expensive HDD space would be better utilized.

    Read the article

  • Software vs Network Engineer (Salary, Difficulty, Learning, Happiness)

    - by B Z
    What are your thoughts on being a Software Engineer vs. a Network Engineer? I've been in the software field for almost 10 years now, and although I still have a great deal of fun (and challenges), I am starting to think it could be better on the "other" side. Not to put down network engineers (I know there are many great ones out there), but it seems that, in general, their job is easier, the learning curve from average to good is not as steep, the job is less stressful, and the pay is better on average. I think as a software developer I could make the switch to networking and still enjoy working with computers and feel productive. I spend an enormous amount of time learning about software, practices, new technologies, new patterns, etc. I think I could spend a much smaller amount of time learning about networking and be just as "good". What are your thoughts? EDIT: This is not about making easy money. Networking and software are closely related. I love computers and programming, but if I can work with both, make more money, have less stress in my life and spend more time with my family, then I am willing to consider a change, and hence I am looking for advice that does or doesn't support this view.

    Read the article

  • Updated Virtual Machine for VS/TFS 2010

    - by Enrique Lima
    If you had downloaded the previous version of the virtual machines, then you are likely aware they are set to expire soon (12/15/2010). Brian Keller announced yesterday (blog post here) the availability of a VM refresh (new expiration set for 6/1/2011). What is part of the refresh? Here is the excerpt from Brian's post:
    "The version of this virtual machine which was refreshed on December 9, 2010, includes the following additions:
    · Visual Studio 2010 Feature Pack 2
    · Team Foundation Server 2010 Power Tools (September 2010 Release)
    · Visual Studio 2010 Productivity Power Tools (these are disabled in VS so that the screenshots of the hands-on-labs still match; you can quickly enable the Productivity Power Tools via Tools -> Extension Manager from within Visual Studio)
    · Test Scribe for Microsoft Test Manager
    · Visual Studio Scrum 1.0 Process Template
    · All Windows Updates through December 8, 2010
    · Lab Management GDR (KB983578)
    · Visual Studio 2010 Feature Pack 2 pre-requisite hotfix (KB2403277)
    · Microsoft Test Manager hotfix (KB2387011)
    · Minor fit-and-finish fixes based on customer feedback
    · A new expiration date of June 1, 2011"
    The links to download the virtual machines are:
    Hyper-V: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=e0198b64-4acb-4709-b07f-359fb4d523bc&displaylang=en
    Windows Virtual PC (Win 7): http://www.microsoft.com/downloads/en/details.aspx?FamilyID=509c3ba1-4efc-42b5-b6d8-0232b2cbb26e&displaylang=en

    Read the article

  • @staticmethod vs module-level function

    - by darkfeline
    This is not about @staticmethod and @classmethod! I know how staticmethod works. What I want to know is the proper use cases for @staticmethod vs. a module-level function. I've googled this question, and it seems there's some general agreement that module-level functions are preferred over static methods because they are more Pythonic. Static methods have the advantage of being bound to their class, which may make sense if only that class uses them. However, in Python functionality is usually organized by module, not class, so usually making it a module-level function makes sense too. Static methods can also be overridden by subclasses, which is an advantage or a disadvantage depending on how you look at it. That said, static methods are usually "functionally pure", so overriding them may not be smart, though it may be convenient sometimes (this may be one of those "convenient, but NEVER DO IT" kinds of things only experience can teach you). Are there any general rules of thumb for using either a staticmethod or a module-level function? What concrete advantages or disadvantages do they have (e.g. future extension, external extension, readability)? If possible, also provide a case example.
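    Since the post asks for a case example, here is a minimal hypothetical sketch (mine, not the poster's) contrasting the two options for the same helper:

        # Hypothetical sketch contrasting a module-level function with a @staticmethod.

        # Option 1: module-level function -- organized by module, importable on its own.
        def is_valid_username(name):
            return name.isalnum() and 3 <= len(name) <= 20

        class User:
            def __init__(self, name):
                self.name = name

            # Option 2: @staticmethod -- lives in the class namespace and can be
            # overridden by subclasses, but does not touch instance state.
            @staticmethod
            def is_valid_username(name):
                return name.isalnum() and 3 <= len(name) <= 20

        print(is_valid_username("bob"))       # module-level: True
        print(User.is_valid_username("bob"))  # static method: True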

    Read the article

  • Editing Project files, Resource Editors in VS 2010

    - by rajbk
    Editing Project Files: Visual Studio 2010 gives you the ability to easily edit the project file associated with your project (.csproj or .vbproj). You might do this to change settings related to how the project is compiled, since proj files are MSBuild files. One would normally close Visual Studio and edit the proj file using a text editor. The better way is to first unload the project in Visual Studio by right-clicking on the project in the Solution Explorer and selecting "Unload Project". The project gets unloaded and is marked "unavailable". The project file can now be edited by right-clicking on the unloaded project. After editing the file, the project can be reloaded. Resource editors in VS 2010: Visual Studio also comes with a number of resource editors (see the list here). For example, you could open a file using the Binary Editor like so: go to File > Open > File..., select a file, and choose the "Open With..." option in the bottom right. We are given the option to choose an editor. Note that clicking on "Add..." in the dialog above allows you to include your favorite editor. Choosing the "Binary Editor" above allows us to edit the file in hex format. In addition, we can also search for hex bytes or ASCII strings using the Find command. The "Open With..." option is also available from within the Solution Explorer, as shown below. Enjoy! Mr. Incredible: No matter how many times you save the world, it always manages to get back in jeopardy again. Sometimes I just want it to stay saved! You know, for a little bit? I feel like the maid; I just cleaned up this mess! Can we keep it clean for... for ten minutes!

    Read the article

  • Multi-Threaded Application vs. Single Threaded Application

    Why would we use a multi-threaded application vs. a single-threaded application? First we must define multithreading. Multithreading is a feature of an operating system that allows programs to run subcomponents, or threads, in parallel. Typically, most applications only need one thread because they do not perform time-consuming tasks. The use of multiple threads allows an application to distribute long-running tasks so that they can be executed in parallel. This gives the user the appearance that the application is working faster, because while one thread is waiting on an IO operation the remaining tasks can make use of the available CPU. This allows worker threads to execute in tandem so that tasks can be completed sooner. Multithreading benefits: improved responsiveness - users usually report improved responsiveness compared to single-threaded applications; faster applications - multiple threads can lead to improved application performance; prioritization - threads can be assigned a priority, which allows higher-priority tasks to take precedence over lower-priority tasks. Single-threading benefits: programming and debugging - these activities are easier compared to multithreaded applications due to the reduced complexity; less overhead - threads add overhead to an application. When developing multi-threaded applications, the following must be considered. Deadlocks occur when two threads each hold a monitor that the other one requires. In essence, each task is blocking the other and both tasks are waiting for the other monitor to be released, which forces the application to hang or deadlock. Resource allocation is used to prevent deadlocks: the system determines whether approving a resource request would leave the system in an unsafe state (which could result in a deadlock) and only approves requests that lead to safe states. Thread synchronization is used when multiple threads use the same instance of an object. The threads accessing the object can be locked and then synchronized so that each task interacts with the shared object one at a time.
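    As a minimal sketch of these ideas (my own illustration, not from the article), the Python snippet below runs four IO-bound tasks on separate threads so their waits overlap, and uses a lock so that only one thread at a time updates the shared counter - the thread synchronization described above:

        # Hypothetical illustration: overlapping IO-bound work on threads,
        # with a lock synchronizing access to shared state.
        import threading
        import time

        completed = 0
        completed_lock = threading.Lock()

        def download(task_id):
            global completed
            time.sleep(0.5)            # stands in for a slow IO operation
            with completed_lock:       # only one thread touches the shared counter at a time
                completed += 1

        threads = [threading.Thread(target=download, args=(i,)) for i in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        print(completed)  # 4 -- the four waits overlapped instead of running back to back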

    Read the article

  • “Play Now” via website vs. download & install

    - by Inside
    I've spent some time looking over the various threads here on GDSE and also on the regular Stack Overflow site, and while I saw a lot of posts and threads regarding various engines that could be used in game development, I haven't seen much discussion regarding the platforms they can be used on. In particular, I'm talking about browser games vs. desktop games. I want to develop a simple 3D networked multiplayer game - roughly at the graphics level of Paper Mario and with roughly the same level of interaction as a hack & slash action/adventure game - and I'm having a hard time deciding what platform I want to target with it. I have some experience with using C++/Ogre3D and Python/Panda3D (and also some synchronized/networked programming), but I'm wondering if it's worth it to spend the extra time to learn another language and another engine/toolkit just so that the game can be played in a browser window (I'm looking at jMonkeyEngine right now). Is it worth it to go with engines that are less mature, have less documentation, fewer features, and smaller communities* just so that a (possibly?) larger audience can be reached? Does it make sense to even go with a web environment for the kind of game that I want to make? Does anyone have any experience with decisions like this? (* With the exception of Flash-based engines, it seems like most of the other approaches have downsides when compared to what is available for desktop-based environments. I'd go with Flash, but I'm worried that Flash's 3D capabilities aren't mature enough right now to do what I want easily. There's also Unity3D, but I'm not sure how I feel about that at all. It seems highly polished, but requires a plugin to be downloaded for the game to be played - at that rate I might as well have players download my game.) For simple & short games the Newgrounds approach (go to the site, click "play now", instant gratification) seems to work well. What about for more complex games? Is there a point where the complexity of a game is enough for people to say "OK, I'm going to download and play that"?

    Read the article

  • const vs. readonly for a singleton

    - by GlenH7
    First off, I understand there are folks who oppose the use of singletons. I think it's an appropriate use in this case, as it's constant state information, but I'm open to differing opinions / solutions. (See The singleton pattern and When should the singleton pattern not be used?) Second, for a broader audience: C++/CLI has a keyword similar to readonly with initonly, so this isn't strictly a C# question. (Literal field versus constant variable in C++/CLI) Sidenote: A discussion of some of the nuances of using const or readonly. My question: I have a singleton that anchors together some different data structures. Part of what I expose through that singleton are some lists and other objects, which represent the necessary keys or columns in order to connect the linked data structures. I doubt that anyone would try to change these objects through a different module, but I want to explicitly protect them from that risk. So I'm currently using a "readonly" modifier on those objects*. I'm using readonly instead of const with the lists, as I read that using const will embed those items in the referencing assemblies and will therefore trigger a rebuild of those referencing assemblies if / when the list(s) is/are modified. This seems like tighter coupling than I would want between the modules, but I wonder if I'm obsessing over a moot point. (This is question #2 below.) The alternative I see to using "readonly" is to make the variables private and then wrap them with a public get. I'm struggling to see the advantage of this approach, as it seems like wrapper code that doesn't provide much additional benefit. (This is question #1 below.) It's highly unlikely that we'll change the contents or format of the lists - they're a compilation of things to avoid using magic strings all over the place. Unfortunately, not all the code has been converted over to using this singleton's presentation of those strings. Likewise, I don't know that we'd change the containers / classes for the lists. So while I normally argue for the encapsulation advantages a get wrapper provides, I'm just not feeling it in this case. A representative sample of my singleton:

        public sealed class mySingl
        {
            private static volatile mySingl sngl;
            private static object lockObject = new Object();

            public readonly Dictionary<string, string> myDict = new Dictionary<string, string>()
            {
                {"I", "index"},
                {"D", "display"},
            };

            public enum parms { ABC = 10, DEF = 20, FGH = 30 };

            public readonly List<parms> specParms = new List<parms>() { parms.ABC, parms.FGH };

            public static mySingl Instance
            {
                get
                {
                    if(sngl == null)
                    {
                        lock(lockObject)
                        {
                            if(sngl == null)
                                sngl = new mySingl();
                        }
                    }
                    return sngl;
                }
            }

            private mySingl()
            {
                doSomething();
            }
        }

    Questions: Am I taking the most reasonable approach in this case? Should I be worrying about const vs. readonly? Is there a better way of providing this information?

    Read the article

  • JavaOne: Parleys.com, Spring Vs. Java EE and HTML5 tooling

    - by delabassee
    Parleys.com, a 2012 Duke's Choice Award winner, is an E-Learning platform that host content from different sources (conferences, JUGs meetings, etc.). There is a lot of technical content available for online but also offline consumption, including many sessions on Java EE. Parleys has just released, for free, all the Devoxx 2011 sessions (video and slides sync'ed!). From a technical point of view, Parleys.com is interesting as they have switched from Spring to Java EE 6 to avoid being locked in a proprietary framework. During the GlassFish Community BoF, Stephan Janssen (Parleys.com and Devoxx founder) also presented how GlassFish is used to support 2000 concurrent Parleys users over a cluster of 2 GlassFish instances. Talking about Java EE and/or Spring, Harshad Oak has posted an update on the 'Spring Vs. Java EE' panel discussion that took place on Tuesday. As Arun said standards such as Java EE does not necessarily refrain innovation: "JBoss Forge & Arquillian from RedHat are great examples of innovation in the JavaEE community. Standardization is important but innovation does continue even within that framework." Simplicity, productivity along with HTML5 are the driving themes of Java EE 7. In terms of simplicity and productivity, the developer experience can also be improved by the tooling. Every NetBeans release comes with a large set of improvements, the just released NetBeans 7.3 beta is no exception. The goal of ‘NB 7.3’s Project Easel’ is to improve HTML5 development, something that will be handy for Java EE 7 developers. Project Easel can, for example, communicate directly to Chrome's WebKit engine, this feature was shown during Sunday's Technical Keynote at the end of the Java EE section. In this beta release, Chrome and the embedded JavaFX browser are the only supported browsers but the NetBeans team plan to add support, over time, for other WebKit based browsers. NetBans 7.3 beta NetBeans 7.3 screenscasts Today (i.e. Wednesday 3rd) is also the final exhibition day, so make sure to visit the Java EE and the GlassFish pods on the Java DEMOgrounds (Hilton Grand Ballroom, 9:30 am - 5:00 pm). Finally, here are some Java EE and GlassFish related activities worth attending today if you are at JavaOne : Wednesday October 3rd Time Title Location 8:30-9:30am What's New in Servlet 3.1: An Overview Parc 55 Mission 8:30-9:30am Bean Validation 1.1: What's New Under the Hood Parc 55Cyril Magnin II/III 10:00-11:00am JSR 353: Java API for JSON Processing Parc 55 Mission 10:00-12:00pm Tutorial : Integrating Your Service into the GlassFish PaaS Platform Parc 55 Devisidero 11:30-12:30pm What's New in JSF: A Complete Tour of JSF 2.2 Parc 55Cyril Magnin I 11:30-12:30pm Best of Both Worlds: Java Persistence with NoSQL and SQL Parc 55 Mission 1:00-2:00pm Sharding Middleware to Achieve Elasticity and High Availability in the Cloud Parc 55Market Street 1:00-2:00pm Pimp My RESTful Java Applications Parc 55Cyril Magnin I 3:00-4:00pm Migrating Spring to Java EE Parc 55Cyril Magnin II/III 4:30-5:30pm JavaEE.Next(): Java EE 7, 8, and Beyond Parc 55Cyril Magnin II/III 4:30-5:30pm HTML5 WebSocket and Java Parc 55Cyril Magnin I 4:30-5:30pm Easy Middleware for Your Embedded Device Nikko Ballroom II/III

    Read the article
