Search Results

Search found 27011 results on 1081 pages for 'buy vs build'.

Page 50/1081 | < Previous Page | 46 47 48 49 50 51 52 53 54 55 56 57  | Next Page >

  • Unable to build Python modules in Mandriva 2010

    - by SteveJ
    I am trying to build a Python module (pyfits) but I get the following error: # python setup.py install /home/steve/src/pyfits-2.2.2/stsci_distutils_hack.py:239: DeprecationWarning: os.popen3 is deprecated. Use the subprocess module. (sin, sout, serr) = os.popen3(cmd) running install error: invalid Python installation: unable to open /usr/lib64/python2.6/config/Makefile (No such file or directory) I get the same error when I try and build other modules so my guess is I am missing a Python development library. I am running Mandriva 2010.0, any suggestions?

    Read the article

  • Visual Studio Project build Error [closed]

    - by Mina Sobhy
    I installed Visual Studio 2010 on Windows 7 SP1, but the Debug build fails with the following error: 1>------ Build started: Project: x, Configuration: Debug Win32 ------ 1>MSVCRTD.lib(crtexe.obj) : error LNK2019: unresolved external symbol main referenced in function __tmainCRTStartup 1>C:\Users\mina\Documents\Visual Studio 2010\Projects\x\Debug\x.exe : fatal error LNK1120: 1 unresolved externals ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

    Read the article

  • Bamboo to Build Specific SVN Revision

    - by Anton Gogolev
    Hi! Imagine there's a project in Bamboo with two build plans: Staging Deployment (SD) and Production Deployment (PD). Building SD checks out the latest sources, builds them, and deploys a web site to a staging server. Currently, PD does exactly the same thing, namely deploying the latest version of a web site to a production server. Clearly, this is not very good: I want to be able to deploy the exact same version of the web site that was previously deployed on the staging server, not the latest one. To illustrate: suppose we're at r101 in the SVN repo. Clicking "Build SD" will deploy a web site version, say 2.1.0.101, to the staging server. Now we commit a breaking change and end up at r102. Now I want to deploy to the production server. If I hit "Build PD", Bamboo will happily check out r102 and build it, resulting in version 2.1.0.102 being deployed to the production server. What I want it to do, however, is to build and deploy the version which was previously built in the SD plan (that is, 2.1.0.101). Of course, I could make the SD plan tag the latest successful build as tags/builds/latest, but I would rather have Bamboo itself handle that.

    Read the article

  • In what (small) ways can I modify Octave's compile options to enhance it without breaking it?

    - by irrational John
    If the title to this question seems a bit vague, I am sorry. But I wasn't sure how to distill what I am attempting to do into a single sentence. A few weeks back I learned that I could build and install recent releases of Octave on an Ubuntu 12.04 system by following the steps below. Install the tools needed to compile, link, and run octave. For Ubuntu the commands below have worked for me. sudo apt-get build-dep octave3.2 sudo apt-get install build-essential gnuplot gtk2-engines-pixbuf sudo apt-get install libfontconfig-dev bison Next, download the source code for an Octave release from the Gnu Project Archives for Octave and unpack the archive into a folder on your system. Use the commands below to build, check, and install octave. ./configure make make check sudo make install Unfortunately it turns out that the above builds an Octave that contains all the debugging symbol tables. The object files alone are huge taking up around 1.7 GB. The current Octave documentation suggests To compile without debugging symbols try the command make CFLAGS=-O CXXFLAGS=-O LDFLAGS= instead of just make. However, when I tried this it did not work. The -g option was still used for the compiles. For the heck of it I instead tried ./configure CFLAGS=-O CXXFLAGS=-O and this did work. (Instead of ~1.7GB the result of the build now takes up around 253MB). My questions are Is this actually the correct (recommended?) method to use to compile Octave without debugging symbols (i.e. without -g)? How would I compile Octave so it uses x86_64 rather than x86? Note: I am not asking how to compile Octave to use the (experimental) 64-bit integers for array dimensions. I just want to allow the compiler to use the extra registers and word sizes available when an app runs in 64-bit mode. Is a (more) complete list available for the directives used with the Octave Makefile? I have only seen make, make check, and make install documented. But apparently make distclean is also allowed. (It removes the compilation results so you can do a complete rebuild of everything.) I'm wondering what else might be available. FWIW, I have tried using ./configure CFLAGS="-O3 -mtune=core2 -m64" CXXFLAGS="-O3 -mtune=core2 -m64" and, surprisingly, it not only appeared to build, but also ran and passed the make check tests. The ./configure script even gave me the (deceptively?) reassuring message "Octave is now configured for x86_64-unknown-linux-gnu". But of course that's not the same thing as saying it actually "works". Is there a recommended way to enable Octave to run as an x86_64 app? I have also tried looking inside the Octave Makefile to see if I could decipher what command line directives it accepts. I got nowhere. I have not a single clue as to how that Makefile does whatever it is that it does.

    Read the article

  • How can I build the Boost.Python example on Ubuntu 9.10?

    - by Gatlin
    I am using Ubuntu 9.10 beta, whose repositories contain boost 1.38. I would like to build the hello-world example. I followed the instructions here (http://www.boost.org/doc/libs/1%5F40%5F0/libs/python/doc/tutorial/doc/html/python/hello.html), found the example project, and issued the "bjam" command. I have installed bjam and boost-build. I get the following output: Jamroot:18: in modules.load rule python-extension unknown in module Jamfile</usr/share/doc/libboost1.38-doc/examples/libs/python/example>. /usr/share/boost-build/build/project.jam:312: in load-jamfile /usr/share/boost-build/build/project.jam:68: in load /usr/share/boost-build/build/project.jam:170: in project.find /usr/share/boost-build/build-system.jam:248: in load /usr/share/boost-build/kernel/modules.jam:261: in import /usr/share/boost-build/kernel/bootstrap.jam:132: in boost-build /usr/share/doc/libboost1.38-doc/examples/libs/python/example/boost-build.jam:7: in module scope I do not know enough about Boost (this is an exploratory exercise for myself) to understand why the python-extension macro in the included Jamroot is not valid. I am running this example from the install directory, so I have not altered the Jamroot's use-project setting. As a side question, if I were to just willy-nilly start a project in an arbitrary directory, how would I write my jamroot?

    Read the article

  • Customizing the TFS 2008 build sequence to avoid compilation and deploy SSRS

    - by Andrew
    I'm trying to create a CI process for SQL Server Reporting Services. I am fairly new to TFS but quite experienced with MSBuild. In the past I've used a combination of MSBuild and TeamCity, so the whole build process is more or less custom. Here lies the start of my problems: the solution I am deploying only contains Report Server projects (.rds), so no compilation is required. I thought that I would override the first default task that TFS runs (EndToEndIteration) to override the default TFS build sequence and inject my own. The first snag that I have come across is that the build always fails; how can I set the status of the build to success? Currently the EndToEndIteration task is very light and only has a message. Is this the best method to create a custom build process in TFS where compilation is not required? Or should I use the default sequence and override one of the hook tasks mentioned in http://msdn.microsoft.com/en-us/library/aa337604%28VS.80%29.aspx (i.e. AfterCompile)? The core steps that I'd like to achieve are: bundle the RDL and data source files; connect to the host server to register/deploy the reports; re-apply any subscriptions that previously existed; and run tests to verify the deployment succeeded and is returning results as expected. I have found another article on Reporting Services deployment: http://stackoverflow.com/questions/88710/reporting-services-deployment But it doesn't mention the best practice for customizing the standard build process. Any help would be appreciated.

    Read the article

  • What scenarios are possible where the VS C# compiler would not compile a reference of a reference?

    - by SuperKing
    Hello, I'm probably asking this question wrong (and that may be why Google isn't helping), but here goes: In Visual Studio I am compiling a C# project (let's call it Project A, the startup project) which has a reference to Project B. Project B has a reference to Project C, so when A gets built, the DLL for B gets placed in the bin directory of A, as does the DLL for C (because B requires C, and A requires B). However, I have apparently made some change recently so that the DLL for Project C does not go into the bin directory of Project A when rebuilding the solution. I have no idea what I've done to make this happen. I have not modified the setup of the solution itself, and I have only added additional references to the project files. Code-wise, I have commented out most of the actual code in Project B that references classes in Project C, but did not remove the reference from the project itself (I don't think this matters). I was told that perhaps the C# compiler was optimizing somehow so that it was not building Project C, but really I'm out of ideas. I would think someone has run into something similar before. Any thoughts? Thanks!
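
    The excerpt does not include the poster's projects, but a contrived C# sketch of the A -> B -> C dependency chain may help illustrate the behavior being described. All names below are hypothetical, and the explanation in the comments is an assumption consistent with the excerpt, not something it confirms: C.dll only gets copied into A's bin folder while B's compiled output actually uses a type from C.

        // Hypothetical illustration (not the poster's code): three projects, A -> B -> C.

        // Project C (builds C.dll)
        namespace ProjectC
        {
            public class Widget
            {
                public string Name = "from C";
            }
        }

        // Project B (builds B.dll, references Project C)
        namespace ProjectB
        {
            public class Consumer
            {
                // While this member exists, B's compiled assembly references C.dll,
                // so building A (which references B) also pulls C.dll into A's bin.
                public ProjectC.Widget Widget = new ProjectC.Widget();
                // If this usage is commented out and nothing else in B touches C,
                // the build can conclude that C.dll is not needed in A's output.
            }
        }

        // Project A (startup project, references Project B)
        namespace ProjectA
        {
            internal class Program
            {
                private static void Main()
                {
                    System.Console.WriteLine(new ProjectB.Consumer().Widget.Name);
                }
            }
        }

    The sketch only illustrates that the transitive copy depends on B actually using C, which matches the change the poster describes making in Project B.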

    Read the article

  • VS 2010 SP1 installation error: Generic Trust Failure

    - by guybarrette
    I tried to install VS SP1 from the ISO (not the Web Installer) on a machine and ended up with an unsuccessful install with the following error: Generic Trust Failure. The log file said: Possible transient lock. WinVerifyTrust failed with error: 2148204800 [3/9/2011, 10:6:29]Possible transient lock. WinVerifyTrust failed with error: 2148204800 [3/9/2011, 10:6:30]C:\Dev\VSSP1\VS2010SP1\VC10sp1-KB983509-x86.msp - Signature verification for file VC10sp1-KB983509-x86.msp (C:\Dev\VSSP1\VS2010SP1\VC10sp1-KB983509-x86.msp) failed with error 0x800b0100 (No signature was present in the subject.) [3/9/2011, 10:6:30] C:\Dev\VSSP1\VS2010SP1\VC10sp1-KB983509-x86.msp Signature could not be verified for VC10sp1-KB983509-x86.msp [3/9/2011, 10:6:30]No FileHash provided. Cannot perform FileHash verification for VC10sp1-KB983509-x86.msp [3/9/2011, 10:6:30]File VC10sp1-KB983509-x86.msp (C:\Dev\VSSP1\VS2010SP1\VC10sp1-KB983509-x86.msp), failed authentication. (Error = -2146762496). It is recommended that you delete this file and retry setup again. Since I didn’t want to download the 1.5GB ISO a second time, I tried the Web installer and this time it worked like a charm. Whether the problem was a corrupt download or a file missing a signature, I can't say.

    Read the article

  • Deferred vs Immediate execution in Linq

    - by Jalpesh P. Vadgama
    In this post, we are going to learn about Deferred vs. Immediate execution in LINQ. There are interesting variations in how LINQ operators execute, and in this post we are going to look at both deferred execution and immediate execution. What is Deferred Execution? With deferred execution, the query is executed and evaluated at the time the query variable is used. Let’s take an example to understand Deferred Execution better. Example: the following code demonstrates it. using System; using System.Collections.Generic; using System.Linq; namespace ConsoleApplication1 { class Program { static void Main(string[] args) { var customers = new List<Customer>( new[] { new Customer{FirstName = "Jalpesh",LastName = "Vadgama"}, new Customer{FirstName = "Vishal",LastName = "Vadgama"}, new Customer{FirstName = "Tushar",LastName = "Maru"} } ); var newCustomers = customers.Where(c => c.LastName == "Vadgama"); customers.Add(new Customer {FirstName = "Teerth", LastName = "Vadgama"}); foreach (var c in newCustomers) { Console.WriteLine(c.FirstName); } } public class Customer { public string FirstName { get; set; } public string LastName { get; set; } } } } More on my personal blog @www.dotnetjalps.com
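
    The excerpt above only shows the deferred case (the customer added after the Where call still appears in the output). For contrast, here is a minimal self-contained sketch, not taken from the article, of immediate execution: calling ToList() forces the query to run at that point, so a customer added afterwards is not included.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Program
        {
            class Customer
            {
                public string FirstName { get; set; }
                public string LastName { get; set; }
            }

            static void Main()
            {
                var customers = new List<Customer>
                {
                    new Customer { FirstName = "Jalpesh", LastName = "Vadgama" },
                    new Customer { FirstName = "Vishal",  LastName = "Vadgama" },
                    new Customer { FirstName = "Tushar",  LastName = "Maru" }
                };

                // ToList() triggers immediate execution: the query is evaluated here, once.
                var newCustomers = customers.Where(c => c.LastName == "Vadgama").ToList();

                // This customer is added after the query has already been evaluated...
                customers.Add(new Customer { FirstName = "Teerth", LastName = "Vadgama" });

                // ...so only Jalpesh and Vishal are printed; with deferred execution
                // (no ToList), Teerth would show up as well.
                foreach (var c in newCustomers)
                {
                    Console.WriteLine(c.FirstName);
                }
            }
        }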

    Read the article

  • Progressive Enhancement vs. Single Page Apps

    - by SeanPlusPlus
    I just got back from a conference in Boston called An Event Apart. A really popular theme amongst the speakers was the idea of progressive enhancement - a site's content should go in the HTML, and JavaScript should only be used to enhance behavior. The arguments that the speakers gave for progressive enhancement were very compelling. Not only is it a solid pattern for supporting older browsers, and devices on a network with low bandwidth, but HTML fails much more gracefully than JavaScript (i.e. markup that is not supported is just ignored, while if a browser throws an exception while executing your script - you are hosed). Jeremy Keith gave a particularly insightful talk about this. But what about single page web apps like Backbone and Angular? The whole design behind these frameworks seems to push the developer toward moving content out of the HTML, and into something like a JSON API. I can not seem to gel these two design patterns: progressive enhancement vs. single page web apps. Are there instances when one is better than the other? Or are they not even antagonistic technologies, and I am missing something here with my mental model?

    Read the article

  • xVelocity engines compared: VertiPaq vs ColumnStore #ssas #vertipaq #xvelocity #sql #tabular

    - by Marco Russo (SQLBI)
    Over the last few months Alberto and I worked on several projects using Analysis Services Tabular, and we had to face real-world issues such as complex queries, large data volumes, frequent data updates and so on. Sometimes we faced the challenge of comparing Tabular performance with SQL Server. It seemed like nonsense, because even if the same core xVelocity technology is implemented in both products (SQL Server 2012 uses ColumnStore indexes, whereas Analysis Services 2012 uses VertiPaq), we initially assumed that the better-optimized in-memory engine used by Analysis Services would always be faster than SQL Server. However, we discovered several important things: Processing time might be different, and having data on SQL Server could make ColumnStore way faster for processing. Partitioning in SQL Server might be much more effective for query performance than Analysis Services. A single query can scale easily on more processors on SQL Server, whereas in Analysis Services the formula engine is single-threaded and could be a bottleneck for certain queries. In case of a large workload with many concurrent users, the storage engine cache in Analysis Services could be a big advantage over SQL Server, especially for scalability. As you can see, these considerations are not always obvious and you might be tempted to make other assumptions based on this information. Well, don’t do that. Before anything else, read the whitepaper VertiPaq vs ColumnStore Comparison written by Alberto Ferrari. Then, measure your workload. Finally, draw some conclusions. But don’t make too many assumptions. You might be wrong, as we were at the beginning of this journey.

    Read the article

  • SQL SERVER – Difference between COUNT(DISTINCT) vs COUNT(ALL)

    - by pinaldave
    This blog post is written in response to the T-SQL Tuesday hosted by Jes Schultz Borland. Earlier today, I was presenting a 45-minute session at the Community College about “The Beginning SQL Server Database”. One of the students asked me the following question: What is the difference between COUNT(DISTINCT) and COUNT(ALL)? I found the student's question very interesting. He seems to have read the documentation (Books Online) and was then asking me this question. I always carry a laptop with SQL Server installed. I quickly opened it and ran the following script. After looking at the result, I think it was clear to everybody. Here is the script: SELECT COUNT([Title]) Value FROM [AdventureWorks].[Person].[Contact] GO SELECT COUNT(ALL [Title]) ALLValue FROM [AdventureWorks].[Person].[Contact] GO SELECT COUNT(DISTINCT [Title]) DistinctValue FROM [AdventureWorks].[Person].[Contact] GO The above script will give me the following results. You can clearly notice from the result set that COUNT(ALL ColumnName) is the same as COUNT(ColumnName). The reality is that “ALL” is actually the default option and need not be specified. The ALL keyword includes all the non-NULL values. I know this is very simple and maybe it does not change how we work; however, looking at it from every angle, I really enjoyed the question. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • SQL SERVER – ORDER BY ColumnName vs ORDER BY ColumnNumber

    - by pinaldave
    I strongly favor ORDER BY ColumnName. I read a blog post where the blogger compared the performance of the two SELECT statements and came to the conclusion that there is no harm in using ColumnNumber. Let us first examine the point that there is no performance difference. Run the following two scripts together: USE AdventureWorks GO -- ColumnName (Recommended) SELECT * FROM HumanResources.Department ORDER BY GroupName, Name GO -- ColumnNumber (Strongly Not Recommended) SELECT * FROM HumanResources.Department ORDER BY 3,2 GO If you look at the results and the execution plans, you will see that both queries take the same amount of time. However, this was not the point of this blog post. It is not good enough to stop here. We need to understand the advantages and disadvantages of both methods. Case 1: When Not Using * and Columns are Re-ordered USE AdventureWorks GO -- ColumnName (Recommended) SELECT GroupName, Name, ModifiedDate, DepartmentID FROM HumanResources.Department ORDER BY GroupName, Name GO -- ColumnNumber (Strongly Not Recommended) SELECT GroupName, Name, ModifiedDate, DepartmentID FROM HumanResources.Department ORDER BY 3,2 GO Case 2: When someone changes the schema of the table, affecting column order. I will let you recreate that example yourself. If the schema on your development server is different from the production server and you use ColumnNumber, you will get different results on the production server. Summary: When you develop the query it may not be an issue, but as time passes and new columns are added to the SELECT statement or the original table is re-ordered, if you have used ColumnNumber it is possible that your query will start giving you unexpected results and an incorrect ORDER BY. One should note that the choice between ORDER BY ColumnName and ORDER BY ColumnNumber should not be made based on performance but on usability and scalability. It is always recommended to use a proper ORDER BY clause with ColumnName to avoid any confusion. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Style bits vs. Separate bool's

    - by peterchen
    My main platform (WinAPI) still heavily uses bits for control styles etc. (example). When introducing custom controls, I'm constantly wondering whether to follow that style or rather use individual bools. Let's pit them against each other: enum EMyCtrlStyles { mcsUseFileIcon = 1, mcsTruncateFileName = 2, mcsUseShellContextMenu = 4, }; void SetStyle(DWORD mcsStyle); void ModifyStyle(DWORD mcsRemove, DWORD mcsAdd); DWORD GetStyle() const; ... ctrl.SetStyle(mcsUseFileIcon | mcsUseShellContextMenu); vs. CMyCtrl & SetUseFileIcon(bool enable = true); bool GetUseFileIcon() const; CMyCtrl & SetTruncateFileName(bool enable = true); bool GetTruncateFileName() const; CMyCtrl & SetUseShellContextMenu(bool enable = true); bool GetUseShellContextMenu() const; ctrl.SetUseFileIcon().SetUseShellContextMenu(); As I see it, Pro Style Bits: consistent with the platform; less library code (without gaining complexity); fewer places to modify when adding a new style; less caller code (without losing notable readability); easier to use in some scenarios (e.g. remembering / transferring settings); binary API remains stable if new style bits are introduced. Now, the first and the last are minor in most cases. Pro Individual Booleans: IntelliSense and refactoring tools reduce the "less typing" effort; Single Purpose Entities; more literate code (as in "flows more like a sentence"); no change of paradigm for non-bool properties. These sound more modern, but also "soft" advantages. I must admit the "platform consistency" is much more enticing than I can justify, and less code without losing much quality is a nice bonus. 1. What do you prefer? Subjectively, for writing the library, or for writing client code? 2. Any (semi-) objective statements, studies, etc.?

    Read the article

  • 2 year degree plus experience vs 4 year degree

    - by CenterOrbit
    Alright, I have searched around a bit on this site and found two somewhat similar questions: Computer Science Programming Certificate vs. Computer Science Degree? Is it possible/likely to be paid fairly without a college degree? But these do not provide an answer specifically to what I am seeking. I have my 2 year A.A.S. Degree in computer programming, along with a networking certificate from a technical college. I also have been working at a small educational game development company for 3 years now in various positions, but steadily moving up and now as a lead programmer on a few projects. Some of the higher programmers I work with claim that no matter how much experience I develop it still will not mean as much as someone with a 4 year degree. Their argument is that most employers will look over my resume because of the common '4 yr' minimum requirement. I have also heard people state (not as many though) that experience is everything and that an employer would rather have someone that has worked in the field instead of a rookie fresh out of college. I have heard both sides of this argument, but am looking for a general consensus, or more arguments from both sides from the people who have been there, or are there.

    Read the article

  • Presenting at VS Live! Orlando in December

    - by Steve Michelotti
    I’ll be presenting at VS Live! December 10-14 in Orlando, FL. I’ll be presenting Azure Web Sites. This is the session abstract: Azure Web Sites brings a whole new level of power and simplicity to cloud computing. This demo-heavy session will show numerous features that allow you to deploy your site in a matter of seconds. Whether you are building a completely custom app or deploying from one of the numerous templates provided (such as WordPress), you’ll be up and running in no time. Want to use Node.js or PHP and deploy from Git? No problem! Azure Web Sites gives you the power of elastic scaling while still providing streamlined development and an effortless deployment experience. This presentation will also cover features including monitoring, custom domains, working with SQL databases or more!   SPECIAL OFFER: As a speaker, I can extend $500 savings on the 5-day package. Register here: http://bit.ly/VOSPK19Reg  and use code VOSPK19. The great part about Visual Studio Live!: four events in one! This year, the event will be co-located with SQL Server Live!, SharePoint Live!, and Cloud & Virtualization Live!. You can customize your conference agenda and attend ANY sessions from all four events. Register now: http://bit.ly/VOSPK19Reg

    Read the article

  • OpenGL CPU vs. GPU

    - by Nitrex88
    So I've always been under the impression that doing work on the GPU is always faster than on the CPU. Because of this, in OpenGL, I usually try to do intensive tasks in shaders so they get the speed boost from the GPU. However, now I'm starting to realize that some things simply work better on the CPU and actually perform worse on the GPU (particularly when a geometry shader is involved). For example, in a recent project I did involving procedurally generated terrain, I tried passing a grid of single triangles into a geometry shader, and tessellated each of these triangles into quads with 400 vertices whose height was determined by a noise function. This worked fine, and looked great, but easily maxed out the GPU with only 25 base triangles and caused a very slow framerate. I then discovered that tessellating on the CPU instead, and setting the height (using the noise function) in the vertex shader, was actually faster! This prompted me to question the benefits of using the GPU as much as possible... So, I was wondering if someone could describe the general pros and cons of using the GPU vs CPU for intensive graphics tasks. I know this mainly comes down to what you're trying to achieve, so if necessary, use the above scenario to discuss why the "CPU + vertex shader" was actually faster than doing everything in the geometry shader on the GPU. It's possible my hardware (newest MacBook Pro) isn't well optimized for the geometry shader (thus causing the slow framerate). Also, I read that the vertex shader is very good with parallelism, and would love a quick explanation of how this may have played a role in speeding up my procedural terrain. Any info/advice about CPU/GPU/shaders would be awesome!

    Read the article

  • Automatic TRIM vs. manual TRIM

    - by Eike Cochu
    I am currently trying to find out how to trim with my new TP and was wondering about the difference between manual and online trimming. Here is my setup: a ThinkPad T430s with a Samsung 830 SSD (128 GB) and Xubuntu 12.10. Here are some outputs to check whether TRIM will work on my system (got these from here: http://wiki.ubuntuusers.de/SSD/TRIM) root@eike-tp:~# sudo hdparm -I /dev/sda | grep -i TRIM * Data Set Management TRIM supported (limit 8 blocks) First, I tried the online trimming: How to enable TRIM? My fstab with discard inserted: UUID=d6c49c17-a4f1-466c-9f7e-896c20db3bba / ext4 discard,noatime,errors=remount-ro 0 1 # swap was on /dev/sda5 during installation UUID=a0322f5f-c6c1-4896-863f-668f0638d8cf none swap sw 0 0 tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0 I tried to test if it works (but I don't get any zeroes when I try it with /dev/sda), but found out that this method is only possible with SSD type 2, and I seem to have type 3. So I don't know if it works or not. The Ubuntu wiki (first link) recommends manual trimming, so I set up a daily cronjob instead of discard: #!/bin/sh LOG=/var/log/batched_discard.log echo "*** $(date -R) ***" >> $LOG fstrim -v / >> $LOG The wiki article suggests weekly or daily. Now to my questions: How often does the automated TRIM run? How often is recommended? Online vs. manual trimming? Thank you for your help

    Read the article

  • Fixed-Function vs Shaders: Which for beginner?

    - by Rob Hays
    I'm currently going to college for computer science. Although I do plan on utilizing an existing engine at some point to create a small game, my aim right now is towards learning the fundamentals: namely, 3D programming. I've already done some research regarding the choice between DirectX and OpenGL, and the general sentiment that came out of that was that whether you choose OpenGL or DirectX as your training-wheels platform, a lot of the knowledge is transferrable to the other platform. Therefore, since OpenGL is supported by more systems (probably a silly reason to choose what to learn), I decided that I'm going to learn OpenGL first. After I made this decision to learn OpenGL, I did some more research and found out about a dichotomy that I was somewhere unaware of all this time: fixed-function OpenGL vs. modern programmable shader-based OpenGL. At first, I thought it was an obvious choice that I should choose to learn shader-based OpenGL since that's what's most commonly used in the industry today. However, I then stumbled upon the very popular Learning Modern 3D Graphics Programming by Jason L. McKesson, located here: http://www.arcsynthesis.org/gltut/ I read through the introductory bits, and in the "About This Book" section, the author states: "First, much of what is learned with this approach must be inevitably abandoned when the user encounters a graphics problem that must be solved with programmability. Programmability wipes out almost all of the fixed function pipeline, so the knowledge does not easily transfer." yet at the same time also makes the case that fixed-functionality provides an easier, more immediate learning curve for beginners by stating: "It is generally considered easiest to teach neophyte graphics programmers using the fixed function pipeline." Naturally, you can see why I might be conflicted about which paradigm to learn: Do I spend a lot of time learning (and then later unlearning) the ways of fixed-functionality, or do I choose to start out with shaders? My primary concern is that modern programmable shaders somehow require the programmer to already understand the fixed-function pipeline, but I doubt that's the case. TL;DR == As an aspiring game graphics programmer, is it in my best interest to learn 3D programming through fixed-functionality or modern shader-based programming?

    Read the article

  • A Consol Application or Windows Application in VS 2010 for Sharepoint 2010 : A common Error

    - by Gino Abraham
    I have seen many SharePoint newbies cracking their heads trying to create a Console/Windows application in VS 2010 and make it talk to a SharePoint 2010 server. I had the same problem when I started with SharePoint in the beginning. It is important to acknowledge that SharePoint 2010 is based on .NET Framework version 3.5, not version 4.0. In VS 2010, when you create a Console/Windows application, make sure you select .NET Framework 3.5 in the New Project dialog window. If you missed this while creating the new project, go to the Application tab of the project properties and verify that .NET Framework 3.5 is selected as the Target Framework. Now that you have selected the correct framework, will it work? Nope: if the application is configured as x86, it will not work. SharePoint is a 64-bit application, and when you create a Windows application to talk to SharePoint, it should also be 64-bit. Go to Configuration Manager and select x64. If x64 is not available, select <New…> and, in the New Solution Platform dialog box, select x64 as the new platform, copying settings from x86 and checking the Create new project platforms check box. This is not applicable if you are making a console application that talks to SharePoint with the Client Object Model.
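
    Once the framework and platform are set, a minimal console application can confirm the setup. The following is only a sketch under the assumptions above: the project targets .NET Framework 3.5, is built as x64, runs on the SharePoint 2010 server itself, and references Microsoft.SharePoint.dll; the site URL is a placeholder.

        using System;
        using Microsoft.SharePoint; // server object model, available on the SharePoint server

        class Program
        {
            static void Main()
            {
                // Placeholder URL: point this at one of your own site collections.
                using (SPSite site = new SPSite("http://sharepoint-server/sites/test"))
                using (SPWeb web = site.OpenWeb())
                {
                    Console.WriteLine("Connected to web: " + web.Title);
                }
            }
        }

    If the project is accidentally left as x86 or targeting .NET 4.0, this is typically where the failure shows up, which is the point the excerpt is making.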

    Read the article

  • TCO Comparison: Oracle Exadata vs IBM P-Series

    - by Javier Puerta
    Cost Comparison for Business Decision-makers: Oracle Exadata Database Machine vs. IBM Power Systems. How to Weigh a Purchase Decision. October 2012. Download full report here. In this research-based white paper conducted at the request of Oracle, The FactPoint Group compares the cost of ownership of the Oracle Exadata engineered system to a traditional build-your-own (BYO) solution, in this case an IBM Power 770 (P770) with SAN storage. The IBM P770 was chosen given it is IBM’s current most popular model, based on FactPoint primary and secondary research and IBM claims, and because at least one of the interviewed customers had specifically migrated from a P770 to Exadata, affording us a more specific data point for comparison. This research found that Oracle Exadata: Can be deployed more quickly and easily, requiring 59% fewer man-hours than a traditional IBM Power Systems solution. Delivers dramatically higher performance, typically up to a 12X improvement as described by customers over their prior solution. Requires 40% fewer systems administrator hours to maintain and operate annually, including quicker support calls because of less finger-pointing and faster service with a single vendor. Will become even easier to operate over time as users become more proficient and organize around the benefits of integrated infrastructure. Supplies a highly available, highly scalable and robust solution that results in reserve capacity that makes Exadata easier for IT to operate, because IT administrators can manage proactively, not reactively. Overall, Exadata operations and maintenance keep IT administrators from “living on the edge.” And it’s pre-engineered for long-term growth. Finally, compared to IBM Power Systems hardware, Exadata is a bargain from a total cost of ownership perspective: over three years, the IBM hardware running Oracle Database cost 31% more in TCO than Exadata.

    Read the article

  • Chrome Mobile Monthly: Responsive vs Separate Sites

    Chrome Mobile Monthly: Responsive vs Separate Sites. Join us on Wednesday, October 31st at 9am PT for our Monthly Mobile Web Hangout! This month +Brad Frost will be joining us to talk about responsive design versus separate mobile sites. And in keeping with the season, it's a special Presidential Smackdown Edition. The US presidential race is in full swing, and the candidates are intensely debating the country's hot-button issues. The web design world is entrenched in our own debate about how to address the mobile web: should we create a separate mobile site or create a responsive experience instead? It just so happens that the two US presidential candidates have chosen different mobile web strategies for their official websites. In the red corner is Republican candidate Mitt Romney's dedicated mobile site, while in the blue corner is incumbent president Barack Obama's responsive website. Which will prevail? Sit back, crack open a cold one, and watch the battle unfold as Brad dissects the candidates' sites to uncover best practices and common mobile web pitfalls.

    Read the article

  • Grading an algorithm: Readability vs. Compactness

    - by amiregelz
    Consider the following question in a test/interview: Implement the strcpy() function in C: void strcpy(char *destination, char *source); The strcpy function copies the C string pointed to by source into the array pointed to by destination, including the terminating null character. Assume that the size of the array pointed to by destination is long enough to contain the same C string as source, and does not overlap in memory with source. Say you were the tester, how would you grade the following answers to this question? 1) void strcpy(char *destination, char *source) { while (*source != '\0') { *destination = *source; source++; destination++; } *destination = *source; } 2) void strcpy(char *destination, char *source) { while (*(destination++) = *(source++)) ; } The first implementation is straightforward: it is readable and programmer-friendly. The second implementation is shorter (one line of code) but less programmer-friendly; it's not so easy to understand the way this code works, and if you're not familiar with the operator precedence involved then it's a problem. I'm wondering if the second answer would show more complexity and more advanced thinking, in the tester's eyes, even though both algorithms behave the same, and although code readability is considered to be more important than code compactness. It seems to me that since making an algorithm this compact is more difficult to implement, it will show a higher level of thinking as an answer in a test. However, it is also possible that a tester would consider the second answer not good because it's not readable. I would also like to mention that this is not specific to this example, but general for code readability vs. compactness when implementing an algorithm, specifically in tests/interviews.

    Read the article

< Previous Page | 46 47 48 49 50 51 52 53 54 55 56 57  | Next Page >