Search Results

Search found 1803 results on 73 pages for 'boost dataflow'.

Page 57/73

  • OpenGL CPU vs. GPU

    - by Nitrex88
     So I've always been under the impression that doing work on the GPU is always faster than on the CPU. Because of this, in OpenGL, I usually try to do intensive tasks in shaders so they get the speed boost from the GPU. However, now I'm starting to realize that some things simply work better on the CPU and actually perform worse on the GPU (particularly when a geometry shader is involved). For example, in a recent project I did involving procedurally generated terrain, I tried passing a grid of single triangles into a geometry shader, and tessellated each of these triangles into quads with 400 vertices whose height was determined by a noise function. This worked fine, and looked great, but easily maxed out the GPU with only 25 base triangles and caused a very slow framerate. I then discovered that tessellating on the CPU instead, and setting the height (using a noise function) in the vertex shader, was actually faster! This prompted me to question the benefits of using the GPU as much as possible... So, I was wondering if someone could describe the general pros and cons of using the GPU vs CPU for intensive graphics tasks. I know this mainly comes down to what you're trying to achieve, so if necessary, use the above scenario to discuss why the "CPU + vertex shader" was actually faster than doing everything in the geometry shader on the GPU. It's possible my hardware (newest MacBook Pro) isn't optimized well for the geometry shader (thus causing the slow framerate). Also, I read that the vertex shader is very good with parallelism, and would love a quick explanation of how this may have played a role in speeding up my procedural terrain. Any info/advice about CPU/GPU/shaders would be awesome!
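
     In case a concrete sketch helps, here is a minimal, hypothetical C++ illustration of the "CPU tessellation + vertex shader noise" approach described above (the function names and the 20x20 patch size are invented for the example, and the noise displacement itself is only referenced in comments):

        #include <iostream>
        #include <vector>

        // A rough sketch (not the poster's code): tessellate one terrain patch into a
        // dense grid of vertices on the CPU.  The vertex shader is then left with only
        // the per-vertex height displacement, e.g. y = noise(x, z) * amplitude, which
        // the GPU can run in parallel across all of its vertex units.
        struct Vertex { float x, y, z; };

        std::vector<Vertex> tessellatePatch(float originX, float originZ,
                                            float size, int divisions)
        {
            std::vector<Vertex> grid;
            grid.reserve((divisions + 1) * (divisions + 1));
            const float step = size / divisions;
            for (int row = 0; row <= divisions; ++row)
                for (int col = 0; col <= divisions; ++col)
                    // y stays 0 here; the vertex shader samples the noise function
                    // and displaces it when the patch is drawn.
                    grid.push_back({originX + col * step, 0.0f, originZ + row * step});
            return grid;  // uploaded once to a VBO; the index buffer is omitted
        }

        int main() {
            std::vector<Vertex> patch = tessellatePatch(0.0f, 0.0f, 100.0f, 20);
            std::cout << patch.size() << " vertices generated on the CPU\n";
        }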

    Read the article

  • Customer Concepts TV

    - by Richard Lefebvre
     Eliminate the Guesswork in Your Customer's Sales Organization Selling is the lifeblood of every business. In the past, companies would increase headcount to boost sales. In today’s business environment, companies need to re-evaluate the way in which they sell. Sales and marketing organisations must optimise performance, increase team productivity and focus on the best opportunities. Oracle Fusion CRM has been specifically designed with tools to help sales and marketing teams improve efficiency and drive revenue. Territory modeling and management, quota and commission management, collaborative features, real-time customer information and mobile device integration are just some of the features incorporated. Join us on Customer Concepts TV as we aim to help you find the right strategy for your prospects and customers. Whether they already have a CRM solution in place or are looking for the next level of CRM implementation, this online TV show will give you very practical advice that can help you make the most out of your CRM implementation. Register now to reserve your spot for this exclusive, live-stream event. Customer Concepts TV comes to you on April 24. Watch the Customer Concepts TV trailer here

    Read the article

  • Doing powerups in a component-based system

    - by deft_code
     I'm just starting to really get my head around component-based design. I don't know what the "right" way to do this is. Here's the scenario. The player can equip a shield. The shield is drawn as a bubble around the player, it has a separate collision shape, and it reduces the damage the player receives from area effects. How is such a shield architected in a component-based game? Where I get confused is that the shield obviously has three components associated with it: damage reduction/filtering, a sprite, and a collider. To make it worse, different shield variations could have even more behaviors, all of which could be components: boosting the player's maximum health, health regen, projectile deflection, etc. Am I overthinking this? Should the shield just be a super component? I really think this is the wrong answer, so if you think this is the way to go please explain. Should the shield be its own entity that tracks the location of the player? That might make it hard to implement the damage filtering, and it also kinda blurs the lines between attached components and entities. Should the shield be a component that houses other components? I've never seen or heard of anything like this, but maybe it's common and I'm just not deep enough yet. Should the shield just be a set of components that get added to the player? Possibly with an extra component to manage the others, e.g. so they can all be removed as a group (accidentally leaving behind the damage reduction component, now that would be fun). Or is there something else that's obvious to someone with more component experience?
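
     For what it's worth, here is a hedged sketch of that last option, with all names invented rather than taken from any real engine: the shield is an ordinary set of components, plus one small manager component so the whole bundle is attached and detached as a group:

        #include <algorithm>
        #include <memory>
        #include <string>
        #include <vector>

        // Hypothetical sketch: the shield is a bundle of ordinary components added to
        // the player, plus one manager component that attaches and detaches the whole
        // bundle so nothing (like the damage filter) gets left behind.
        struct Component    { virtual ~Component() = default; };
        struct DamageFilter : Component { float reduction = 0.5f; };
        struct ShieldSprite : Component { std::string image = "shield_bubble.png"; };
        struct Collider     : Component { float radius = 2.0f; };

        struct Entity {
            std::vector<std::shared_ptr<Component>> components;
            void add(std::shared_ptr<Component> c) { components.push_back(std::move(c)); }
            void remove(const std::shared_ptr<Component>& c) {
                components.erase(std::remove(components.begin(), components.end(), c),
                                 components.end());
            }
        };

        struct ShieldEffect : Component {
            std::vector<std::shared_ptr<Component>> parts;
            void attach(Entity& player) { for (auto& p : parts) player.add(p); }
            void detach(Entity& player) { for (auto& p : parts) player.remove(p); }
        };

        int main() {
            Entity player;
            auto shield = std::make_shared<ShieldEffect>();
            shield->parts = { std::make_shared<DamageFilter>(),
                              std::make_shared<ShieldSprite>(),
                              std::make_shared<Collider>() };
            player.add(shield);
            shield->attach(player);   // equip: every part goes on together
            shield->detach(player);   // unequip: every part comes off together
            player.remove(shield);
        }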

    Read the article

  • Today's Links (6/17/2011)

    - by Bob Rhubart
    Call for Nominations: Oracle Eco-Enterprise Innovation Awards Is your organization using Oracle products to reduce your environmental footprint while reducing costs? If so, submit your nomination for Oracle's Eco-Enterprise Innovation award. These awards will be presented to select customers and their partners who are using any of Oracle's products to not only take an environmental lead, but also to reduce their costs and improve their business efficiencies by using green business practices. Beyond The Data Grid: Coherence, Normalization, Joins, and Linear Scalability | Ben Stopford Ben Stopford presents ODC, a highly distributed in-memory normalized NoSQL datastore designed for scalability, based on normalized data, Snowflake Schema, and Connected Replication pattern. Upgrading ALSB services to OSB | John Chin-a-Woeng John Chin-a-Woeng walks you through the upgrade from Aqualogic Service Bus (ALSB 3.0) to Oracle Service Bus (OSB 10.3). SOA & Middleware: Pinning tasks to a user in BPM 11g | Niall Commiskey Commiskey illustrates a scenario. JDeveloper 11gR2: New option Test WebService in WSDL editor | Lucas Jellema The "Test WebService" button in the WSDL Editor in JDeveloper 11gr2 is "just a little feature addition," says Oracle ACE Director Lucas Jellema. "But it can be quite useful all the same." Enterprise Business Intelligence 11g Seminar with Mark Rittman Oracle ACE Director Mark Rittman conducts a two-day course for Oracle University, in Dublin, IE, July 4-5, 2011. Data Integration Webcast Series Join Oracle experts for a series covering our data integration solutions. You’ll get invaluable information to help boost your data infrastructure so that you can accelerate your business.

    Read the article

  • How do you cope with change in open source frameworks that you use for your projects?

    - by Amy
    It may be a personal quirk of mine, but I like keeping code in living projects up to date - including the libraries/frameworks that they use. Part of it is that I believe a web app is more secure if it is fully patched and up to date. Part of it is just a touch of obsessive compulsiveness on my part. Over the past seven months, we have done a major rewrite of our software. We dropped the Xaraya framework, which was slow and essentially dead as a product, and converted to Cake PHP. (We chose Cake because it gave us the chance to do a very rapid rewrite of our software, and enough of a performance boost over Xaraya to make it worth our while.) We implemented unit testing with SimpleTest, and followed all the file and database naming conventions, etc. Cake is now being updated to 2.0. And, there doesn't seem to be a viable migration path for an upgrade. The naming conventions for files have radically changed, and they dropped SimpleTest in favor of PHPUnit. This is pretty much going to force us to stay on the 1.3 branch because, unless there is some sort of conversion tool, it's not going to be possible to update Cake and then gradually improve our legacy code to reap the benefits of the new Cake framework. So, as usual, we are going to end up with an old framework in our Subversion repository and just patch it ourselves as needed. And this is what gets me every time. So many open source products don't make it easy enough to keep projects based on them up to date. When the devs start playing with a new shiny toy, a few critical patches will be done to older branches, but most of their focus is going to be on the new code base. How do you deal with radical changes in the open source projects that you use? And, if you are developing an open source product, do you keep upgrade paths in mind when you develop new versions?

    Read the article

  • The Science Behind Salty Airline Food

    - by Jason Fitzpatrick
     In this collection, artist Signe Emma combines a scientific overview of the role salt plays in airline food with electron microscope scans of salt crystals arranged to look like the views from an airplane–a rather clever and visually stunning way to deliver the message. Attached to the collection is this explanation of why airlines load their snacks and meals with salt: White noise consists of a random collection of sounds at different frequencies and scientists have demonstrated that it is capable of diminishing the taste of salt. At low-pressure conditions, higher taste and odour thresholds of flavourings are generally observed. At 30,000 feet the cabin humidity drops by 15%, and the lowered air pressure forces bodily fluids upwards. With less humidity, people have less moisture in their throat, which slows the transport of odours to the brain's smell and taste receptors. That means that if a meal is to taste the same up in the air as it does on the ground, it needs about 30% more salt. To combat the double assault on our sense of taste, the airlines boost the salt content to compensate. For more neat microscope scans presented as high-altitude photographs, hit up the link below.

    Read the article

  • Performance of concurrent software on multicore processors

    - by Giorgio
    Recently I have often read that, since the trend is to build processors with multiple cores, it will be increasingly important to have programming languages that support concurrent programming in order to better exploit the parallelism offered by these processors. In this respect, certain programming paradigms or models are considered well-suited for writing robust concurrent software: Functional programming languages, e.g. Haskell, Scala, etc. The actor model: Erlang, but also available for Scala / Java (Akka), C++ (Theron, Casablanca, ...), and other programming languages. My questions: What is the state of the art regarding the development of concurrent applications (e.g. using multi-threading) using the above languages / models? Is this area still being explored or are there well-established practices already? Will it be more complex to program applications with a higher level of concurrency, or is it just a matter of learning new paradigms and practices? How does the performance of highly concurrent software compare to the performance of more traditional software when executed on multiple core processors? For example, has anyone implemented a desktop application using C++ / Theron, or Java / Akka? Was there a boost in performance on a multiple core processor due to higher parallelism?
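
     As a point of reference for the multi-threading part of the question, here is a small, hedged C++ sketch (the workload is a placeholder, not a real application) of how even plain std::async spreads a data-parallel job across however many cores the machine reports; actor libraries such as Akka or Theron layer message passing on top of primitives like these:

        #include <algorithm>
        #include <cstddef>
        #include <functional>
        #include <future>
        #include <iostream>
        #include <thread>
        #include <vector>

        // Illustrative only: a data-parallel job split across the machine's hardware
        // threads with std::async.  The workload (summing squares) is a placeholder.
        long long sumSquares(const std::vector<int>& data, std::size_t begin, std::size_t end) {
            long long total = 0;
            for (std::size_t i = begin; i < end; ++i)
                total += static_cast<long long>(data[i]) * data[i];
            return total;
        }

        int main() {
            std::vector<int> data(10000000, 3);
            const unsigned workers = std::max(1u, std::thread::hardware_concurrency());

            std::vector<std::future<long long>> futures;
            const std::size_t chunk = data.size() / workers;
            for (unsigned w = 0; w < workers; ++w) {
                std::size_t begin = w * chunk;
                std::size_t end   = (w + 1 == workers) ? data.size() : begin + chunk;
                futures.push_back(std::async(std::launch::async,
                                             sumSquares, std::cref(data), begin, end));
            }

            long long total = 0;
            for (auto& f : futures) total += f.get();   // more cores, more parallel chunks
            std::cout << "sum of squares = " << total << '\n';
        }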

    Read the article

  • How to use Pixel Bender (pbj) in ActionScript3 on large Vectors to make fast calculations?

    - by Arthur Wulf White
     Remember my old question: 2d game view camera zoom, rotation & offset using 'Filter' / 'Shader' processing? I figured I could use a Pixel Bender shader to do the computation for any large group of elements in a game to save on processing time. At least it's a theory worth checking. I also read this question: Pass large array to pixel shader, which I'm guessing is about accomplishing the same thing in a different language. I read this tutorial: http://unitzeroone.com/blog/2009/03/18/flash-10-massive-amounts-of-3d-particles-with-alchemy-source-included/ I am attempting to do some tests. Here is some of the code:

        private const SIZE : int = Math.pow(10, 5);
        private var testVectorNum : Vector.<Number>;

        private function testShader():void {
            shader.data.ab.value = [1.0, 8.0];
            shader.data.src.input = testVectorNum;
            shader.data.src.width = SIZE/400;
            shader.data.src.height = 100;
            shaderJob = new ShaderJob(shader, testVectorNum, SIZE / 4, 1);
            var time : int = getTimer(), i : int = 0;
            shaderJob.start(true);
            trace("TEST1 : ", getTimer() - time);
        }

     The problem is that I keep getting an error saying: [Fault] exception, information=Error: Error #1000: The system is out of memory. Update: I managed to partially work around the problem by converting the vector into BitmapData (using this technique I still get a speed boost of 3x using Pixel Bender):

        private function testShader():void {
            shader.data.ab.value = [1.0, 8.0];
            var time : int = getTimer(), i : int = 0;
            testBitmapData.setVector(testBitmapData.rect, testVectorInt);
            shader.data.src.input = testBitmapData;
            shaderJob = new ShaderJob(shader, testBitmapData);
            shaderJob.start(true);
            testVectorInt = testBitmapData.getVector(testBitmapData.rect);
            trace("TEST1 : ", getTimer() - time);
        }

    Read the article

  • Performing client-side OAuth authorized Twitter API calls versus server side, how much of a difference is there in terms of performance?

    - by Terence Ponce
     I'm working on a Twitter application in Ruby on Rails. One of the biggest arguments that I have with other people on the project is the method of calling the Twitter API. Before, everything was done on the server: OAuth login, updating the user's Twitter data, and retrieving tweets. Retrieving tweets was the heaviest thing to do since we don't store the tweets in our database, so viewing the tweets means that we have to call the API every time. One of the people on the project suggested that we call the tweets through Javascript instead to lessen the load on the server. We used GET search, which, correct me if I'm wrong, will be removed when v1.0 becomes completely deprecated, but that really isn't a concern now. When the Twitter API has migrated completely to v1.1 (again, correct me if I'm wrong), every call to the API must be authenticated, so we have to authenticate our Javascript requests to the API. As said here: We don't support or recommend performing OAuth directly through Javascript -- it's insecure and puts your application at risk. The only acceptable way to perform it is if you kept all keys and secrets server-side, computed the OAuth signatures and parameters server side, then issued the request client-side from the server-generated OAuth values. If we do exactly what Twitter suggests, the only difference between this and doing everything server-side is that our server won't have to contact the Twitter API anymore every time the user wants to view tweets. Here's how I would picture what's happening every time the user makes a request. If we do it through Javascript, it would be harder on my part because I would have to create the signatures manually for every request, but I will gladly do it if the boost in performance is worth all the trouble. Doing it through Ruby on Rails would be very easy since the Twitter gem does most of the grunt work already, so I'm really encouraging the other people on the project to agree with me. Is the difference in performance trivial or is it significant enough to switch to Javascript?

    Read the article

  • Ranking drop after using reverse proxy for blog subdirectory and robots.txt for old blog subdomain

    - by user40387
     We have a 3Dcart store and a WordPress blog hosted on a separate server. Originally, we had a CNAME set up to point the blog to http://blog.example.com/. However, in our attempt to boost link-based and traffic-based authority on the main site, we've opted to do a reverse proxy to http://www.example.com/blog/. It’s been about two months since we finished the reverse proxy migration. It appears that everything is technically working as intended, including some robots and sitemap changes; the new URLs are even generating some traffic, as indicated on Google Analytics. While Google has been indexing the new URL locations, they’re ranking very poorly, even for non-competitive, long-tail keywords. Meanwhile, the old subdomain URLs are still ranking mostly as well as they used to (even though they aren’t showing meta titles and descriptions due to being blocked by robots.txt). Our working theory is that Google has an old index of the subdomain URLs, and is considering the new URLs to be duplicate content, since it’s being told not to crawl the subdomain and therefore can’t see the rel canonicals we have in place. To resolve this, we’ve updated the subdomain’s robots.txt to no longer block crawling and indexing. Theoretically, seeing the canonical tag on the subdomain pages will resolve any perceived duplicate content issues. In the meantime, we were wondering if anyone would have any other ideas. We are very concerned that we’ll be losing valuable traffic, as we’re entering our on-season at the moment.

    Read the article

  • A (slight) Change of Focus

    - by StuartBrierley
    When I started this blog in September 2009 I was working as a BizTalk developer for a financial institution based in the South West of England.  At the time I was developing using BizTalk Server 2004 and intended to use my blog to collate and share any useful information and experiences that I had using this version of BizTalk (and occasionally other technologies) in an effort to bring together as many useful details as I could in one place. Since then my circumstances have changed and I am no longer working in the financial industry using BizTalk 2004.  Instead I have recently started a new post in the logistics industry, in the North of England, as "IT Integration Manager".  The company I now work for has identified a need to boost their middleware/integration platform and have chosen BizTalk Server 2009 as their platform of choice; this is where I come in. To start with my role is to provide the expertise with BizTalk that they currently lack, design and direct the initial BizTalk 2009 implementation and act as lead developer on all pending BizTalk projects.  Following this it is my hope that we will be able to build on the initial BizTalk "proof of concept" and eventually implement a fully robust enterprise level BizTalk 2009 environment. As such, this blog is going to see a shift in focus from BizTalk 2004 to BizTalk 2009 and at least initially is likely to include posts on the design and installation of our BizTalk environment - assuming of course that I have the time to write them! The last post I made was the start of a chapter by chapter look at the book SOA Patterns with BizTalk Server 2009.  Due to my change of job I am currently "paused" half way through this book, and my lack of posts on the subject are directly as a result of the job move and the pending relocation of my family.  I am hoping to write about my overall opinion of this book sometime soon; so far it certainly looks like it will be a positive one. Thanks for reading; I'm off to manage some integration.

    Read the article

  • What kind of language will replace C++ as C++ replaced C? [closed]

    - by jokoon
     I think I'm not totally wrong in thinking that C++0x (or C++1x) is still C++, just better, with functionality coming from Boost. I can't stop thinking that computer science, even with all that has been done so far, has to evolve again. I don't really like D since it just tries to be some sort of "what C++ should have been", and Go seems to be too sophisticated when I dig a little into it, especially after watching presentation videos like this one: http://www.youtube.com/watch?v=rKnDgT73v8s The first thing that comes to my mind is a new kind of syntax to directly handle specific datatypes and containers such as maps, vectors, queues... What kind of things are researchers thinking about? What are the real features that could make C++ better, or that a new C-like language could invent? Does Go offer such things? Would there be a new kind of syntax that would "unbloat" C++ while keeping its advantages? Could C++ have some of the interesting features of languages such as C# and ObjC? EDIT: Please consider that I'm talking about a systems language, not a VM/CLI/bytecode thing.
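
     To make the container point concrete, here is a hedged C++11 (i.e. C++0x) sketch, with made-up data, of the kind of syntax tightening that arrived without inventing a new language:

        #include <iostream>
        #include <map>
        #include <string>
        #include <vector>

        // A small illustration of the C++0x/C++11 direction the question refers to:
        // no new language, just terser syntax for the everyday containers (maps,
        // vectors, queues) that the poster would like handled more directly.
        int main() {
            std::map<std::string, std::vector<int>> scores{
                {"alice", {10, 20, 30}},
                {"bob",   {5, 15}},
            };

            for (const auto& entry : scores) {          // range-based for, auto, no iterators
                int total = 0;
                for (int s : entry.second) total += s;
                std::cout << entry.first << ": " << total << '\n';
            }
        }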

    Read the article

  • Transition from 2D to 3D Game development [closed]

    - by jakebird451
     I have been working in the 2D world for a long time, from manual blitting in Windows to SDL to Python (pygame, pyopengl) and a bunch in between. Needless to say, I have been programming for a while. So a while ago I started to program in OpenGL via C++ on my Mac. My work then got a little intricate after a while (3D models with skeleton structure and terrain development). After a long time of tinkering, I stopped, since it took heavy work just to gain a low-level understanding of how OpenGL works. Still interested in graphics and game development, I went on a search for a stable game engine with some features to grow on. Licence requirement: anything other than GPL (LGPL will do). OS requirement: Mac & Windows. Shader: GLSL or Cg (GLSL preferred due to experience). Models: any model structure with rigging (bone) support & animation. I am looking at http://www.ogre3d.org/ currently and am starting to meddle around with some examples. However, I am a little reluctant to spend a lot of time on it only to hit another dead end. So instead of falling down a spiraling black pit, I am posting my question to you guys to lead me in the right direction based on my requirements. How was your experience with the engine you recommend? Is it well documented? Does it have well-documented examples? Any library requirements (Boost, libpng, etc.)?

    Read the article

  • Armchair CEO: Windows

    - by Scott Kuhl
     Originally posted on: http://geekswithblogs.net/scottkuhl/archive/2013/10/12/armchair-ceo-windows.aspx Welcome to part 3 of my Armchair CEO series where I prove just why I’m not running Microsoft. In this insightful edition I’ll tell you how to make Windows, the golden flagship of Microsoft, a better product. Android Apps Windows Phone is not the only app store that needs a boost. But unlike Windows Phone, there is a very easy way to get a lot more apps on your Windows PC: BlueStacks. Right now BlueStacks has 3 things going against it: its UI integration is a desktop app hack, it does not work on RT, and no one knows about it. All three could be fixed if Microsoft bought the company or pulled off the same thing. The store can be designed to give preference to Windows Store apps but it closes a lot of holes quickly. The Desktop Experience Windows should switch between desktop mode and tablet mode automatically. Laptops without touch and desktops should work a lot more like Windows 7. The PC should boot to desktop and Metro apps should run in windows, like MetroMix. A tablet should boot to the Start Screen by default and pretty much work the same way it does now in 8.1. Touch laptops should give the user an in-your-face option on first boot to pick the experience. And finally, the experience can be changed automatically if the PC is docked or has external monitors hooked up. Death of the Desktop This might seem completely opposite to the last feature, but it's not. I should have no need to ever see the desktop from Start Screen mode. Every setting needs to be available, an amazing port of the file explorer is needed, and Office Metro must be released. Desktop apps should also be able to run in full screen mode like other Metro apps.

    Read the article

  • How to stay creative when going through tough emotional times (divorce, family death, etc)? [closed]

    - by gaearon
     Hi everyone. I believe this is not a duplicate of the motivation question because I especially want to emphasize the emotional breakdown. You may conquer lack of motivation by working harder and getting through the dip; however, this was not the case when I was separating from my girlfriend. I actually liked the project, it was (and it still is!) my first programming job at an amazing workplace and I wasn't being pressured in any way, but I found myself absolutely unable to code, blankly staring at the screen, my thoughts disorganized, a feeling of emptiness in my chest. I could perform some straightforward coding, but anything that involved creative thinking, designing abstractions, solving new problems and, worst of all, fixing bugs in legacy code completely wiped out my brain, to the point that I started avoiding work, which I had never done before. Coffee only made it worse. Eventually I got over that, and I remember the happy day I solved a problem elegantly and thought—hell, first time in a month! Thankfully the project wasn't top priority and I had the time to catch up. I wonder now, was there any other way to boost my productivity back then? I bet people would say I should've taken a break—and I think I really should have—but what if I needed the money? Didn't want to lose my job? Are there any ways to trick your brain into being creative despite emotional losses? From your experience, would it be worth talking to my boss or colleagues?

    Read the article

  • What would be a good topic for research on the "edge of multiple processors / computers programming"?

    - by Kabumbus
     This is a subjective discussion, so we can express our dreams and hopes here. A "topic" should be like a task whose end result is a software product. A "topic" must be mainly about "Software engineering", "Algorithm and data structure concepts" and perhaps "Design patterns". I mean, let us try to look at what is not already there. What could be developed in a few months and give a breakthrough / start a new leap / show something not realized before in the science of programming multiple computers? What I see is already there: LAN / wire and other infrastructural programs for connecting at the device level; MPI / BitTorrent / Jabber protocols, APIs and servers for messaging on top; Boost and its analogs on every OS, in most languages, for multithreading; and lots of CUDA-like frameworks for fast calculation on GPUs. What I personally do not see out there is a cross-platform framework for multi-process interaction, meaning one that would allow easy creation of multiple processes running in parallel inside one host app on one machine, at a level no harder than thread creation (so no separate server apps - just one lib doing it all). Is there any such lib, and what can you propose as a research topic?
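
     For comparison with what already exists at the library level, here is a hedged Boost.Interprocess sketch (the segment and object names are arbitrary): sharing data between processes on one machine is covered, but creating and supervising the processes themselves is still OS-specific, which is exactly the gap described above:

        #include <boost/interprocess/managed_shared_memory.hpp>
        #include <boost/interprocess/shared_memory_object.hpp>
        #include <iostream>

        // Hedged sketch: a Boost.Interprocess shared-memory segment that two processes
        // on one machine can use to exchange data.  Spawning the second process is left
        // out because that part is still OS-specific (fork/exec, CreateProcess, ...).
        namespace bip = boost::interprocess;

        int main() {
            bip::shared_memory_object::remove("demo_segment");            // start clean
            bip::managed_shared_memory segment(bip::create_only, "demo_segment", 65536);

            int* shared_counter = segment.construct<int>("counter")(0);
            *shared_counter = 42;

            // A second process on the same machine would do:
            //   bip::managed_shared_memory seg(bip::open_only, "demo_segment");
            //   int* counter = seg.find<int>("counter").first;
            std::cout << "counter in shared memory: " << *shared_counter << '\n';

            bip::shared_memory_object::remove("demo_segment");
        }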

    Read the article

  • What are some good tips for a developer trying to design a scalable MySQL database?

    - by CFL_Jeff
    As the question states, I am a developer, not a DBA. I have experience with designing good ER schemas and am fairly knowledgeable about normalization and good schema design. I have also worked with data warehouses that use dimensional modeling with fact tables and dim tables. However, all of the database-driven applications I've developed at previous jobs have been internal applications on the company's intranet, never receiving "real-world traffic". Furthermore, at previous jobs, I have always had a DBA or someone who knew much more than me about these things. At this new job I just started, I've been asked to develop a public-facing application with a MySQL backend and the data stored by this application is expected to grow very rapidly. Oh, and we don't have a DBA. Well, I guess I am the DBA. ;) As far as designing a database to be scalable, I don't even know where to start. Does anyone have any good tips or know of any good educational materials for a developer who has been sort of shoved into a DBA/database designer role and has been tasked with designing a scalable database to support an application like this? Have any other developers been through this sort of thing? What did you do to quickly become good at this role? I've found some good slides on the subject here but it's hard to glean details from slides. Wish I could've attended that guy's talk. I also found a good blog entry called 5 Ways to Boost MySQL Scalability which had some good information, though some of it was over my head. tl;dr I just want to make sure the database doesn't have to be completely redesigned when it scales up, and I'm looking for tips to get it right the first time. The answer I'm looking for is a "list of things every developer should know about making a scalable MySQL database so your application doesn't perform like crap when the data gets huge".

    Read the article

  • I'm a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS?

    - by user2567
     I'm trying to understand the benefits of distributed version control systems (DVCS). I found Subversion Re-education and this article by Martin Fowler very useful. Mercurial and other DVCSs promote a new way of working on code, with changesets and local commits. It prevents merge hell and other collaboration issues. We are not affected by this, as I practice continuous integration and working alone in a private branch is not an option, unless we are experimenting. We use a branch for every major version, in which we fix bugs merged from the trunk. Mercurial allows you to have lieutenants. I understand this can be useful for very large projects like Linux, but I don't see the value in small and highly collaborative teams (5 to 7 people). Mercurial is faster, takes less disk space, and a full local copy allows faster log & diff operations. I'm not concerned by this either, as I didn't notice speed or space problems with SVN, even with the very large projects I'm working on. I'm seeking your personal experiences and/or opinions from former SVN geeks, especially regarding the changesets concept and the overall performance boost you measured. UPDATE (12th Jan): I'm now convinced that it's worth a try. UPDATE (12th Jun): I kissed Mercurial and I liked it. The taste of his cherry local commits. I kissed Mercurial just to try it. I hope my SVN Server don't mind it. It felt so wrong. It felt so right. Don't mean I'm in love tonight. FINAL UPDATE (29th Jul): I had the privilege to review Eric Sink's next book called Version Control by Example. He finished convincing me. I'll go for Mercurial.

    Read the article

  • Will we be penalized for having multiple external links to the same site?

    - by merk
    There seem to be conflicting answers on this question. The most relevant ones seem to be at least a year or two old, so I thought it would be worth re-asking this question. My gut says it's ok, because there are plenty of sites out there that do this already. Every major retailer site usually has links to the manufacturer of whatever item they are selling. go to www.newegg.com and they have hundreds of links to the same site since they sell multiple items from the same brand. Our site allows people to list a specific genre of items for sale (not porn - i'm just keeping it generic since I'm not trying to advertise) and on each item listing page, we have a link back to their website if they want. Our SEO guy is saying this is really bad and google is going to treat us as a link farm. My gut says when we have to start limiting user useful features to our site to boost our ranking, then something is wrong. Or start jumping through hoops by trying to hide text using javascript etc Some clients are only selling 1 to a handful of items, while a couple of our bigger clients have hundreds of items listed so will have hundreds of pages that link back to their site. I should also mention, there will be a handful of pages with the bigger clients where it may appear they have duplicate pages, because they will be selling 2 or 3 of the same item, and the only difference in the content of the page might just be a stock #. The majority of the pages though will have unique content. So - will we be penalized in some way for having anywhere from a handful to a few hundred pages that all point to the same link? If we are penalized, what's the suggested way to handle this? We still want to give users the option to go to the clients site, and we would still like to give a link back to the clients site to help their own SE rankings.

    Read the article

  • How to encourage domain experts familiar only with C into a C++ open source project [closed]

    - by paperjam
     Possible Duplicate: How to persuade C fanatics to work on my C++ open source project? I am launching an open-source project into a space where a lot of development is done Linux-kernel-style, i.e. C-language with a low-level mindset. My project is broad and complex and uses aspects of the C++ language and libraries, including the Boost library, to best effect for simple, slightly syntactically sweetened, elegant and well-structured, high-level code. We are using C++ templates too, to avoid duplication of code and for static polymorphism in code specialisation for performance. Many of the experts in this field are well used to pure C-language projects. How can I persuade them to contribute to my idiomatic C++-based project? I have no objection to C-language subcomponents or the use of a C-like subset for parts of the project, so that might be part of the answer. This is a rewritten and retagged rehash of my previous question that was closed. Apologies to those who read and answered for it not being constructive. I hope this new question is viewed as constructive. Please note that this is not a language advocacy question and please keep answers in that spirit.
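
     Since C-language subcomponents are explicitly welcome, one concrete (and purely hypothetical) way to frame that for C contributors is an extern "C" boundary; a single-file sketch:

        #include <cstdio>
        #include <string>

        // Hypothetical example of how a C-language subcomponent can coexist with the
        // C++ project: a contributor who prefers C owns the function behind the
        // extern "C" boundary, while the C++ side wraps it.  Both halves live in one
        // file here only so the sketch is self-contained; in practice the C part
        // would be its own .c file.
        extern "C" unsigned long c_checksum(const unsigned char* data, unsigned long len) {
            unsigned long sum = 0;
            for (unsigned long i = 0; i < len; ++i) sum = sum * 31 + data[i];
            return sum;
        }

        // Thin C++ wrapper: the C API stays exactly as its author wrote it, and the
        // rest of the code base gets value types and no manual length bookkeeping.
        unsigned long checksum(const std::string& s) {
            return c_checksum(reinterpret_cast<const unsigned char*>(s.data()),
                              static_cast<unsigned long>(s.size()));
        }

        int main() {
            std::printf("checksum = %lu\n", checksum("hello, mixed C/C++ project"));
        }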

    Read the article

  • What will ECMAScript 6 bring to the table for us?

    - by user697296
    Our company ported moderate chunks of business logic to JavaScript. We compile the code with a minifier, which further improves performance. Since the language is dynamically typed, it lends itself well to obfuscation, which occurs as a byproduct of minification. We went to great efforts to ensure it positively screams, performance-wise. We can now do what we did before, faster, better, with less code, on more platforms. In summary, we are very satisfied with the current state of the language. I personally love the language especially for its cross-platform nature. So naturally, I read up a lot about the state of JavaScript compilers, performance and compatibility across as many browsers and platforms as I have time to research. The one theme which has been growing louder and louder these days, is the news about ECMAScript 6. So far, what I have been able to gather is that ES6 promises a better development experience; firstly by enabling new ways to do things, secondly by reporting errors early. This sounds great for those who are still waiting for the language to meet their needs before jumping on board. But we have already jumped on board in a big way. Sure, I expect that we will have to do ongoing maintenance and feature revisions on our code through the years, and that we would obviously make use of best practices at the time. But I don't see us refactoring major portions of it to take advantage of language features that are mostly intended to boost developer productivity. I keep wondering, what impact will the language advances ultimately have on our existing, well-written, well-performing code base? Is there something I am missing? Is there something we ought to look out for? Does anyone have tips or guidance on how we should approach the ecmascript.next finalization? Should we care?

    Read the article

  • How to persuade C fanatics to work on my C++ open source project?

    - by paperjam
     I am launching an open-source project into a space where a lot of the development is still done Linux-kernel-style, i.e. C-language with a low-level mindset. There are multiple benefits to C++ in our space, but I fear those used to working in C will be scared off. How can I make the case for the benefits of C++? Specifically, the following C++ attributes are very valuable: the concept of objects and reference-counting pointers (we really don't want to have to malloc(sizeof(X)) or memcpy() structs); templates for specialising whole bodies of code with specific performance optimizations and for avoiding duplication of code; template metaprogramming related to the above; the syntactic sweetness available (e.g. operator overloading, to be used in very small doses); the STL; and the Boost libraries. Many of the knee-jerk negative reactions to C++ are ill-founded. Performance does not suffer: modern compilers can flatten dozens of call stack levels and avoid bloat through wide use of template specializations. Granted, when using metaprogramming and building multiple specializations of a large call tree, compile time is slower, but there are ways to mitigate this. How can I sell C++?
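
     To make the first bullet tangible for readers coming from C, here is a small, hedged sketch (the Packet type is invented for illustration) contrasting malloc()/memcpy() with reference-counted, type-aware C++:

        #include <memory>
        #include <vector>

        // Hedged illustration with a made-up Packet type: reference counting and
        // type-aware copying in place of malloc(sizeof(X)) and memcpy().
        struct Packet {
            std::vector<unsigned char> payload;
            int priority = 0;
        };

        int main() {
            // C style, for contrast (shown as comments only):
            //   Packet* p = (Packet*)malloc(sizeof(Packet));
            //   memcpy(p, &other, sizeof(Packet));   // silently breaks for non-trivial types
            //   free(p);

            // C++ style: ownership and copying that understand the type.
            auto p = std::make_shared<Packet>();
            p->payload = {0xDE, 0xAD, 0xBE, 0xEF};

            auto alias = p;        // shared ownership; the reference count is now 2
            Packet copy = *p;      // a real deep copy of the payload, no memcpy() needed
            copy.priority = 1;     // the original packet is untouched
            return 0;              // everything is freed automatically on scope exit
        }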

    Read the article

  • Just installed Ubuntu 12.04. When booting, all I get is a black screen with cursor

    - by user66378
    Installation appears to go fine. After rebooting, I get my motherboard loading screens, but when it comes time for Ubuntu to boot, I just get a black screen with a blinking white underscore in the top-left - same as I got when waiting for the install CD to load, except it lasts forever. The only keypress it seems to recognize is ctrl+alt+del, which reboots. Letters don't register, function keys w/ or w/o modifiers do nothing. I've installed Ubuntu 12.04 twice and got the same error. The first time, I installed it as the only OS, and had it take up the whole disk. The second time, I installed Windows 7 first, then Ubuntu by specifying custom partitions. After this install, it would boot straight to Windows without showing grub. I used EasyBCD to add the Ubuntu installation to grub, and this got grub to show, and let me select it, but it led back to the same error described up top. I've had Linux Mint 11 and 12 installed on this PC, but was unable to get previous versions of Ubuntu to install (always had errors while installing, not after). Hardware: Intel Core i7-2600K Sandy Bridge 3.4GHz (3.8GHz Turbo Boost) LGA 1155 ASUS SABERTOOTH P67 (REV 3.0) LGA 1155 Intel P67 SATA 6Gb/s USB 3.0 ATX Intel Motherboard EVGA 01G-P3-1371-TR GeForce GTX 460 (Fermi) CORSAIR Vengeance 16GB (4 x 4GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Western Digital RE4 WD5003ABYX 500GB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive

    Read the article

  • The very beginning of research for web development

    - by stckxchng
     It is a general question to ask what would be the easiest way to start a website; however, everyone needs a little boost at the beginning. There are content management systems, admin panels, systems people have developed on their own, and many more different things people use at the very beginning. Among other approaches, my way was to look for a very lightweight CMS and develop a system on top of it. Unfortunately, I couldn't find a good system to develop on; for example, I decided to develop on snewscms (the lightest-weight CMS), but it is really hard to manage since it is not developed with object-oriented programming standards, and I thought it would be a mistake to continue developing on top of such a system. On the other hand, starting to code from scratch would be weak on security, and using a massive CMS like Joomla or Drupal would not be good either, since there will be many things that are unused; moreover, I don't feel comfortable when I don't know what I am controlling, and it is not easy to know every part of them. What I am asking is: if you had lost all the files you are currently using today, where would you start from? What tool would you use? Or is there a lightweight and well-coded tool to get started with?

    Read the article
