Search Results

Search found 93649 results on 3746 pages for 'protector one'.


  • Why is there no power operator in Java / C++?

    - by RanZilber
    While Python has such an operator (**), I was wondering why Java and C++ don't have one too. It is easy to make one for classes you define in C++ with operator overloading (and I believe something similar is possible in Java), but when it comes to primitive types such as int and double, you have to use a library function like Math.pow (and usually have to cast both arguments to double). So why not define such an operator for primitive types?
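    For reference, a minimal sketch of the library-function workaround the question describes, written in C++ (Java's Math.pow behaves the same way): std::pow works in floating point, so a cast back is needed if an integer result is wanted. The numbers are arbitrary.

        #include <cmath>
        #include <iostream>

        int main() {
            int base = 2;
            int exponent = 10;

            // No built-in ** operator, so fall back to the standard library.
            // std::pow computes in double, hence the cast back to int.
            double result = std::pow(static_cast<double>(base), exponent);
            int truncated = static_cast<int>(result);   // beware precision loss for large values

            std::cout << base << "^" << exponent << " = " << truncated << "\n";   // 2^10 = 1024
            return 0;
        }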

    Read the article

  • Which hardware to VM ratio for Build-Server virtualization?

    - by Martin
    Let's start by saying that I'm a total noob with regard to server virtualization. That is, I use VMs often during development, but to me they're simple desktop-machine things.

    Now to my problem: we have two (physical) build servers, one master and one slave, running Jenkins to do daily tasks and build (Visual C++ builds) the release packages for our software. As such, these machines are critical to our company, because we do lots of releases and without a controlled environment to create them we can't ship fixes. (Currently there's no proper backup of these machines in place, because they don't hold any data as such; it would just be a major pain to set them up again should they go bust. But setting up a backup that I'd know would work in case of hardware failure would be even more pain, so we have skipped that until now.)

    Therefore (and for scaling purposes) we would like to go virtual with these machines. Outsourcing to the cloud is not an option at all, so we'll have to use on-premises hardware and VM hosts. Each build server (master or slave) is a fully configured (installs, licenses, shares in the case of the master, ...) Windows Server box. Ideally I would like to just convert the two existing physical nodes to VM images and run them, and later add more VM slave instances as clones of the existing ones.

    And here my questions begin: Should I go for one VM per hardware box, or for a setup where a single piece of hardware runs multiple VMs? The latter would mean a single point of failure hardware-wise, which doesn't seem like a good idea... or does it? Since we're doing C++ compilation with Visual Studio, I assume that during a build the hardware (processor cores plus disk) will be fully utilized, so running more than one build node per machine doesn't seem to make much sense. With regard to hardware options, does it make any difference which VM software we use (VMware, Microsoft, VirtualBox, ...)? (We're using Windows exclusively for our builds.)

    Regarding budget: we have a normal small-company (20 developers) budget for this. ;-) That is, if it's going to cost a few k$, it's going to cost; if it's free, all the better. I strongly prefer solutions without multi-k$ maintenance costs per year.

    Read the article

  • SEO - A Game of Numbers and Strategy

    Traditional marketing strategy capitalizes on the usual television commercials, banner ads, print ads, social events to promote a new product, and more. However, these portfolios might be missing one vital ingredient: the internet. Over one billion people have access to the internet.

    Read the article

  • Learning SEO For Your Online Business

    One question that pops into my mind relating to SEO is, "Why should SEO techniques be considered for your online business?" Maybe one of the possible answers would be, "SEO helps a lot in the promotion of an online business, for example by attracting a worldwide audience." In this article we will take a closer look at how SEO can help in marketing different businesses online.

    Read the article

  • Exposure Blending with digiKam

    Scribbles and Snaps: "One way to solve this problem is to use exposure blending. This technique involves taking several shots of the same scene or subject with different exposures and then fusing these shots into one perfectly exposed photo."

    Read the article

  • SSIS Basics: Using the Merge Join Transformation

    SSIS is able to take sorted data from more than one OLE DB data source and merge it into one table, which can then be sent to an OLE DB destination. This 'Merge Join' transformation works in a similar way to a SQL join, by specifying a 'join key' relationship, and it can save a great deal of processing on the destination. Annette Allen, as usual, gives clear guidance on how to do it.
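    As a rough illustration of the idea (plain C++ here, not SSIS itself): when both inputs are already sorted on the join key, the join reduces to a single forward pass, which is where the saving on the destination side comes from. The record layout and values below are invented for the example.

        #include <iostream>
        #include <string>
        #include <vector>

        struct Customer { int customerId; std::string name;    };   // sorted by customerId
        struct Order    { int customerId; std::string product; };   // sorted by customerId

        int main() {
            std::vector<Customer> customers = { {1, "Ada"}, {2, "Grace"}, {4, "Linus"} };
            std::vector<Order>    orders    = { {1, "Widget"}, {2, "Gadget"}, {2, "Gizmo"}, {3, "Sprocket"} };

            // Because both inputs are sorted on the join key, one forward pass is enough.
            std::size_t c = 0, o = 0;
            while (c < customers.size() && o < orders.size()) {
                if (customers[c].customerId < orders[o].customerId)      ++c;
                else if (orders[o].customerId < customers[c].customerId) ++o;
                else {
                    // Inner-join match: emit the combined row, advance the "many" side.
                    std::cout << customers[c].name << " ordered " << orders[o].product << "\n";
                    ++o;
                }
            }
            return 0;
        }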

    Read the article

  • How to reset darktable

    - by AKAGoodGravy
    I need a way of resetting darktable to its factory settings. With Shotwell it is simply a case of deleting the program's directory in my home folder, but there doesn't appear to be one for darktable under its own name. I can't use darktable for more than about 30 seconds, because it attempts to load a library of about 30,000 RAW files from one folder and unsurprisingly crashes. This is the only thing holding me back from a 100% crossover to Ubuntu.

    Read the article

  • SQL Server Connectivity Portal

    - by Enrique Lima
    We all love one-stop portals :-) Browsing around MSDN, I came across this one for connectivity; yes, a one-stop portal where you can find info on connecting using a variety of technologies, plus good guides. http://msdn.microsoft.com/en-us/sqlserver/connectivity.aspx

    Read the article

  • Tutorial on OpenGL texture formats

    - by Cyan
    Looking at the documentation for glGetTexImage(), one can see that there are plenty of available texture formats: GL_TEXTURE_1D, GL_TEXTURE_2D, GL_TEXTURE_3D, GL_TEXTURE_1D_ARRAY, GL_TEXTURE_2D_ARRAY, GL_TEXTURE_RECTANGLE, GL_TEXTURE_CUBE_MAP_POSITIVE_X, GL_TEXTURE_CUBE_MAP_NEGATIVE_X, GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, GL_TEXTURE_CUBE_MAP_POSITIVE_Z, and GL_TEXTURE_CUBE_MAP_NEGATIVE_Z. I've only used GL_TEXTURE_2D so far. Is there any place or documentation where one can learn about the others? PS: yes, of course I've googled for it; the results are pretty poor.
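    For orientation, the constants listed above are texture targets (the first argument to glTexImage* and glGetTexImage) rather than pixel formats. A minimal sketch of the familiar GL_TEXTURE_2D case in C/C++, assuming a current OpenGL context has already been created by whatever windowing library is in use (the header name varies by platform):

        #include <GL/gl.h>
        #include <vector>

        // Assumes an OpenGL context is already current (e.g. created with GLFW or SDL).
        GLuint createCheckerTexture() {
            const int w = 2, h = 2;
            const unsigned char pixels[w * h * 4] = {
                255, 255, 255, 255,   0,   0,   0, 255,   // row 0: white, black
                  0,   0,   0, 255, 255, 255, 255, 255    // row 1: black, white
            };

            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);            // target = GL_TEXTURE_2D
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

            // Upload level 0 of the 2D target; RGBA, 8 bits per channel.
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

            // Reading it back uses the same target constant.
            std::vector<unsigned char> readback(w * h * 4);
            glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, readback.data());
            return tex;
        }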

    Read the article

  • Caching: the Good, the Bad and the Hype

    One of the more important aspects of the scalability of an ASP.NET site is caching. To do this effectively, one must understand the relative permanence and importance of the data that is presented to the user, and work out which of the four major aspects of caching should be used. There is always a compromise, but in most cases it is an easy compromise to make, considering its effect on a heavily loaded production system.

    Read the article

  • Dell Backs Ubuntu Enterprise Cloud

    The VAR Guy: "It's one small step for Dell, and one big strategic win for Canonical's Ubuntu Linux cloud strategy. Specifically, Dell on March 24 said it would support Ubuntu Enterprise Cloud (UEC) as an infrastructure solution."

    Read the article

  • RPi and Java Embedded GPIO: It all begins with hardware

    - by hinkmond
    So, you want to connect low-level peripherals (like blinky-blinky LEDs) to your Raspberry Pi and use Java Embedded technology to program it, do you? You sick, foolish masochist. No, just kidding! That's awesome! You've come to the right place. I'll step you through it. And, as with many embedded projects, it all begins with hardware.

    So, the first thing to do is to get acquainted with the GPIO header on your RPi board. A "header" just means a thingy with a bunch of pins sticking up from it where you can connect wires. See the red box outline in the photo. Now, there are many ways to connect to that header outlined by the red box in the photo (which the RPi folks call the P1 header). One way is to use a breakout kit like the one at Adafruit. But we'll just use jumper wires in this example.

    So, to connect jumper wires to the header you need a map of where to connect which wire. That's why you need to study the pinout in the photo. That's your map for connecting wires. But, as with many things in life, it's not all that simple. The RPi folks have made things a little tricky: there are two revisions of the P1 header pinout. One is for older boards (RPi boards made before Sep 2012), which is called Revision 1. And one is for those fancy 512MB boards shipped after Sep 2012, which is called Revision 2.

    So, first make sure which board you have: either you have a Model A or B with 128MB or 256MB built before Sep 2012 and you need to look at the pinout for Rev. 1, or you have a Model B with 512MB and need to look at Rev. 2. That's all you need for now.

    More to come...

    Hinkmond

    Read the article

  • Tips For SEO Friendly Press Releases

    With the increasingly commercial nature of the web, it is becoming harder to get quality one-way inbound links. One of the best ways of earning incoming links is submitting free or cheap press releases, which help you build link juice without spending lots of money. An SEO press release is a primary way to deliver news of new events taking place within your company.

    Read the article

  • Do OO, TDD, and Refactoring to Smaller Functions Affect the Speed of Code?

    - by Dennis
    In the computer science field, I have noticed a notable shift in thinking when it comes to programming. The advice as it stands now is: write smaller, more testable code; refactor existing code into smaller and smaller chunks until most of your methods/functions are just a few lines long; write functions that only do one thing (which makes them smaller again). This is a change compared to the "old" or "bad" code practices where you have methods spanning 2,500 lines and big classes doing everything.

    My question is this: when it all comes down to machine code, to 1s and 0s, to assembly instructions, should I be at all concerned that my class-separated code, with its variety of small-to-tiny functions, generates too much extra overhead? While I am not exactly familiar with how OO code and function calls are ultimately handled in assembly, I do have some idea. I assume that each extra function call, object call, or include call (in some languages) generates an extra set of instructions, thereby increasing the code's volume and adding various overhead without adding actual "useful" code. I also imagine that good optimizations can be applied to the assembly before it is actually run on the hardware, but that optimization can only do so much. Hence my question: how much overhead (in space and speed) does well-separated code (split up across hundreds of files, classes, and methods) actually introduce compared to having "one big method that contains everything"?

    UPDATE for clarity: I am assuming that adding more and more functions, objects, and classes to a codebase will result in more and more parameter passing between smaller code pieces. It was said somewhere (quote TBD) that up to 70% of all code is made up of the assembly MOV instruction - loading CPU registers with the proper variables, not doing the actual computation. In my case, you load up the CPU's time with PUSH/POP instructions to provide linkage and parameter passing between various pieces of code. The smaller you make your pieces of code, the more "linkage" overhead is required. I am concerned that this linkage adds to software bloat and slow-down, and I am wondering whether I should be concerned about it, and how much, if at all, because current and future generations of programmers who are building software for the next century will have to live with and consume software built using these practices.

    UPDATE: Multiple files. I am writing new code now that is slowly replacing old code. In particular, I've noted that one of the old classes was a ~3000-line file (as mentioned earlier). Now it is becoming a set of 15-20 files located across various directories, including test files and not including the PHP framework I am using to bind some things together, and more files are coming. When it comes to disk I/O, loading multiple files is slower than loading one large file. Of course not all files are loaded - they are loaded as needed, and disk caching and memory caching options exist - and yet I still believe that loading multiple files takes more processing than loading a single file into memory. I am adding that to my concern.
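    One hedged data point on the call-overhead worry: modern optimizing compilers routinely inline tiny functions, so for code like the C++ sketch below the PUSH/POP and parameter-passing instructions usually disappear at -O2; the function boundary exists in the source, not necessarily in the machine code. Whether a particular call is inlined depends on the compiler and settings, so measuring beats guessing.

        #include <cstdio>
        #include <vector>

        // A tiny, single-purpose function of the kind the "small functions" advice produces.
        inline double square(double x) { return x * x; }

        double sumOfSquares(const std::vector<double>& values) {
            double total = 0.0;
            for (double v : values) {
                total += square(v);   // at -O2 this call is typically inlined: no PUSH/POP, no call
            }
            return total;
        }

        int main() {
            std::vector<double> data = {1.0, 2.0, 3.0, 4.0};
            std::printf("%f\n", sumOfSquares(data));   // prints 30.000000
            return 0;
        }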

    Read the article

  • URL slugs: ideal length, and the real SEO effects of these slugs

    - by tattvamasi
    This question is addressed widely on SO and outside it, but for some reason, instead of taking it all as a good load of great advice, the information is confusing me.

    ** Problem ** I already had "prettified" URLs on one of my sites. I had taken out the query strings and rewritten the URLs, and the links were short enough for me, but there was a problem: the ID of the item or post in the URL isn't good for users. One of the users asked if there's a way to get rid of the numbers, and I thought it was better for users to see a clue to the page content in the URL.

    ** Solution ** With this in mind, I am experimenting with one section of the site. Armed with 301 redirects, some parsing work, and a lot of patience, I have added URL slugs to some blog entries, and the slug reports the title of the article (something close to http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/).

    ** Problems after Solution ** The problem, as I see it, is that the URL of those blog articles is now certainly very descriptive, but it is also impossible to remember. So this brings me back to the issue I had before: if numbers say nothing and can't be remembered, what's the use of these slugs? I prefer to see http://example.com/my-news/1/ rather than http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/. To avoid forcing my users to memorize my URLs, I have added a script that finds the closest match to the URL you type and redirects there. This is something I like, because the page now acts as a sort of little search engine, and users can play with URLs to find articles.

    ** Open questions ** I still have some open questions and don't seem to be able to find answers, because the answers tend to contradict one another. 1) How many characters should a URL ideally be? I've read the magic number 115 and am sticking to that, but am not sure. 2) Is this really good for SEO? One of those blog articles I have redirected, with the ID number in the URL and all, ranked second on Google. I've just found this question, and the answer seems to be consistent with what I think about URL slugs and SEO structure (but see this other question with the opposite opinion). 3) To make the question concrete with a specific example: would this URL risk being penalized? Is it acceptable? Is it too long? Stack Overflow seems to have comparably long URLs, but I'm not sure it's a winning strategy in my case. I just want to help my users without running afoul of Google's algorithms.
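    On the mechanical side, a slug builder usually just lowercases the title, turns runs of non-alphanumeric characters into single hyphens, and truncates at a word boundary under some cap (115 characters is the figure mentioned above; the cap, function name, and sample title in this C++ sketch are illustrative only):

        #include <cctype>
        #include <iostream>
        #include <string>

        // Illustrative slug builder: lowercase, non-alphanumerics -> '-', trim, cap length.
        std::string makeSlug(const std::string& title, std::size_t maxLen = 115) {
            std::string slug;
            for (unsigned char c : title) {
                if (std::isalnum(c))                          slug += static_cast<char>(std::tolower(c));
                else if (!slug.empty() && slug.back() != '-') slug += '-';
            }
            if (!slug.empty() && slug.back() == '-') slug.pop_back();   // drop trailing hyphen

            if (slug.size() > maxLen) {                                 // cut at a word boundary
                std::size_t cut = slug.rfind('-', maxLen);
                slug.resize(cut == std::string::npos ? maxLen : cut);
            }
            return slug;
        }

        int main() {
            std::cout << makeSlug("Terribly Boring and Long URL That Replaces the Number!") << "\n";
            // prints: terribly-boring-and-long-url-that-replaces-the-number
        }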

    Read the article

  • What DNS server to use for dynamic load-balancing of website?

    - by Marki555
    I will have 2 servers in different datacenters (different countries) and I want to use DNS load balancing, mainly for high availability of the website hosted on those 2 servers. It is just an ad-tracking site, which records each hit in a local database and returns a few lines of HTML code. I want to return 2 A records each time because of DNS pinning in browsers (if one server fails, the browser will try the second A record, which it has already cached). Both servers will also be acting as DNS servers for redundancy.

    Now comes my proposed solution: I will use BIND and have both servers as masters for that zone. On each server there will be a script running which periodically tests the availability (HTTP) of both servers and removes an IP from DNS in case of failure.

    Now the questions :) 1) Is BIND suitable for this solution? I think BIND's performance is good and it is easy to manipulate the zone file via a script. And as I will modify the zone only in case of failure/maintenance, the modifications (and thus BIND reloads) won't be frequent. 2) I plan to use a TTL of 5 minutes. The website will have about 1000-3000 req/s, but from distinct clients (each IP makes only 1-3 requests), so I think the DNS load won't be too high. I suppose their ISPs will cache the responses for those 5 minutes. Is there any reason to lower the TTL even more? 3) Is my master-master approach good? Or should I make one of the servers the master and the other one a slave? Right now each server can monitor both itself and the other one. If only the web service fails, both DNS nodes will notice it. If the whole server fails, then the remaining DNS node will notice it, and the failed node won't answer DNS queries anyway. 4) Is it a big issue when one NS server does not respond to queries? If yes, I can add a third DNS server, so that at any time at least 2 of them would accept queries... 5) Should I rewrite the zone file via a script, or just use dynamic DNS updates (for example via the nsupdate utility)?
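    A rough C++ sketch of the health-check half of that design, assuming libcurl is available; the server URLs are placeholders, and the step that actually pulls the failed IP out of the zone (rewriting the zone file and reloading BIND, or shelling out to nsupdate) is left as a comment because it depends on the answer to question 5:

        #include <curl/curl.h>
        #include <cstdio>

        // Returns true if the server answers an HTTP HEAD request with a 2xx/3xx status.
        bool isHealthy(const char* url) {
            CURL* curl = curl_easy_init();
            if (!curl) return false;

            curl_easy_setopt(curl, CURLOPT_URL, url);
            curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);        // HEAD request, no body needed
            curl_easy_setopt(curl, CURLOPT_TIMEOUT, 5L);       // don't hang on a dead server

            long status = 0;
            bool ok = (curl_easy_perform(curl) == CURLE_OK);
            if (ok) curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
            curl_easy_cleanup(curl);
            return ok && status >= 200 && status < 400;
        }

        int main() {
            curl_global_init(CURL_GLOBAL_DEFAULT);
            const char* nodes[] = { "http://server1.example.com/", "http://server2.example.com/" };
            for (const char* node : nodes) {
                if (!isHealthy(node)) {
                    std::printf("%s is down - remove its A record here (zone rewrite or nsupdate)\n", node);
                }
            }
            curl_global_cleanup();
            return 0;
        }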

    Read the article

  • Syncing apt-get installations between multiple computers

    - by Chris
    Is there a way to synchronize my installations (and removals) between multiple PCs? Preferably with Dropbox, since I'm already using that to keep my files in sync. Edit: I thought of an alias for the apt-get install and apt-get remove commands that stores the parameters to a file (one for install, one for remove), and another command that reads all the entries in the file and executes the respective command. Is this a realistic approach?
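    For what it's worth, a minimal C++ sketch of the 'replay' half of that idea (a shell script would be just as natural): it reads a synced log whose lines look like 'install vim' or 'remove nano' and hands each entry back to apt-get. The log filename and line format are made up for the example.

        #include <cstdlib>
        #include <fstream>
        #include <iostream>
        #include <sstream>
        #include <string>

        int main() {
            // The log file would live inside the Dropbox folder and be appended to by the alias.
            std::ifstream log("apt-sync.log");                        // hypothetical filename
            std::string line;
            while (std::getline(log, line)) {
                std::istringstream fields(line);
                std::string action, pkg;
                if (!(fields >> action >> pkg)) continue;             // skip blank/garbled lines
                if (action != "install" && action != "remove") continue;

                std::string cmd = "sudo apt-get -y " + action + " " + pkg;
                std::cout << "Running: " << cmd << "\n";
                if (std::system(cmd.c_str()) != 0) {
                    std::cerr << "Command failed: " << cmd << "\n";
                }
            }
            return 0;
        }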

    Read the article

  • SEO Services

    With all of the social networking websites popping up all over the internet, many are afraid that all of these new pages will make it increasingly hard for one to get his or her website noticed. This may well be the case, considering that people are creating new social network pages at a rate of about one per minute.

    Read the article

  • Choosing the Right Company For Your SEO Needs

    One important factor that contributes to the success of a website is SEO services. In a world where competition is rife and one has to keep promoting one's product constantly, advertising has emerged as the main avenue for making your products known to potential customers. Some years back, businesses and individuals advertised their services and products in the yellow pages or newspapers.

    Read the article

  • Choosing Unlimited Web Hosting

    There are so many web hosting companies available, either online or offline, and one can select any of them depending on one's choices and interests. Nowadays, several hosting companies are providing unlimit... [Author: Anand Maheshwari - Web Design and Development - May 16, 2010]

    Read the article

  • How to Build Quality Back Links For Your Site

    Search engine optimization, known as SEO, relies on high-quality backlinks. In this regard, one should always know that a site will rank higher in the search engines if it has quality backlinks, which is to say that it will receive organic traffic from those who may be searching for what you're offering. The following tips can help you get quality backlinks.

    Read the article

  • Advanced SEO Strategy Guaranteed to Boost Your SEO

    One of the main factors in optimising your website for page 1 on Google is getting incoming links from other web pages. To boost the effectiveness of those links there are a few things to consider, including the relevance of the page your link is coming from; the link itself also needs to be relevant to the keywords you are targeting on Google. Here's one strategy you can use to get great incoming links and achieve higher Google rankings.

    Read the article
