Search Results

Search found 59278 results on 2372 pages for 'time estimation'.


  • Geek Bike Ride Sao Paulo

    - by Tori Wieldt
    What do you do on sunny Saturday in Sao Paulo when you have several Java enthusiasts, street lanes closed off for bicyclists, new cool Duke jerseys, and some wonderful bike angels to provide a tour through the city? A GEEK BIKE RIDE, of course! The weekend before JavaOne Latin America, the Sao Paulo geek bike ride was held today. We had 20+ riders and a wonderful route that took us from the Bicycle Park to and through downtown. It was a 30Km ride, but our hosts were kind enough to give riders the option to take the subway for part of the trip. Thanks to our wonderful bike angels, the usual rental bike problems like rubbing brakes, dropped chains, and even a flat tire were handled with ease.  The geek bike ride wasn't just for out-of-towners. Loiane Groner, who lives in Sao Paulo said, "I love the Geek Bike Ride! The last time I was in these parts of the city, I think I was five years-old!" A good time was had by all. (My only crash of the day was riding up an escalator with my bike. Luckily, the bikers with me were so busy helping me that no pictures were taken. <phew>) Enjoy this video by Hugo Lavalle You can also view Hugo's pictures. More pictures to come on Stephen Chin's blog.  So, what city is up next?  

    Read the article

  • Useful certifications for a young programmer

    - by Alain
    As @Paddyslacker elegantly stated in "Are certifications worth it?", the main purpose of certifications is to make money for the certifying body. I am a fairly young developer, with only an undergraduate degree, and my job is (graciously) offering to sponsor some professional development of my choice (provided it can be argued that it will contribute to the quality of work I do for them). A search online offers a slew of (mostly worthless) certifications one can attain. I'm wondering if there are any that are actually recognized in the (North American) industry as an asset. My local university promoted CIPS (I.S.P., ITCP) at the time I was graduating, but for all I can tell it's just the one that happened to get its foot in the door. It's certainly money grubbing - with a $205 a year fee. So are there any such certifications that provide useful credentials? To better define 'useful': would it benefit full-time developers, or is it only something worthwhile to the self-employed? Would any certifications lead to my being considered for higher wages, or can that only be achieved with more experience and a higher-level degree?

    Read the article

  • Ubuntu 12.04 LTS loops the login screen unless you login as Guest

    - by Mário Silva
    I am running VMware Player with Ubuntu 12.04 LTS Precise Pangolin as a guest on my Windows 7 host. Sometimes I get the blue screen shutdown error in Windows, and this time it happened while the Player was running. When I restarted everything, Ubuntu gave me the (not so unfamiliar on this forum) login loop at the administrator login. I log in and there is a black screen where I can only read: "piix4_smbus 0000:00:07.3: Host SMBus controller not enabled". When I go to the prompt in root mode it fails to update and only upgrades, especially some plugins (I think graphics plugins) which also appear in an error message after quitting the prompt, although they are reported as successfully installed - they are not the source of the error message. Since then I have been working with the failsafe mode recovery panel. When I try to update via root I get errors like this: "W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/precise/Release.gpg  Could not resolve 'extras.ubuntu.com'". There are 8 more like this referring to areas like archive.canonical.com, ppa.launchpad.net, security.ubuntu.com, us.archive.ubuntu.com, and the precise/Release.gpg, precise-updates/Release.gpg and precise-backports/Release.gpg index files. The final message: some index files failed to download; they have been ignored, or old ones used instead. The black screens mostly pass by too fast for me to pick up any information, but in general I think I have done everything I could in the recovery panel, including updating network and graphics packages and recovering filesystem packages, plus the basic stuff (I am a beginner with Linux) at the root prompt. Now I am stuck on a screen with these graphics options: run in low-graphics mode for just one session; reconfigure graphics; troubleshoot the error; exit to console login. I try to choose "Reconfigure graphics", but the mouse disappears inside the virtual machine screen, and sometimes the options change so that only the first and last ones are shown - this happens out of the blue, with no messages. This particular option menu is drawn in the regular GUI style against a black, terminal-style screen. Really strange. Thanks in advance - all help is welcome and appreciated.

    Read the article

  • Solaris: What comes next?

    - by alanc
    As you probably know by now, a few months ago, we released Solaris 11 after years of development. That of course means we now need to figure out what comes next - if Solaris 11 is “The First Cloud OS”, then what do we need to make future releases of Solaris be, to be modern and competitive when they're released? So we've been having planning and brainstorming meetings, and I've captured some notes here from just one of those we held a couple weeks ago with a number of the Silicon Valley based engineers. Now before someone sees an idea here and calls their product rep wanting to know what's up, please be warned what follows are rough ideas, and as I'll discuss later, none of them have any committment, schedule, working code, or even plan for integration in any possible future product at this time. (Please don't make me force you to read the full Oracle future product disclaimer here, you should know it by heart already from the front of every Oracle product slide deck.) To start with, we did some background research, looking at ideas from other Oracle groups, and competitive OS'es. We examined what was hot in the technology arena and where the interesting startups were heading. We then looked at Solaris to see where we could apply those ideas. Making Network Admins into Socially Networking Admins We all know an admin who has grumbled about being the only one stuck late at work to fix a problem on the server, or having to work the weekend alone to do scheduled maintenance. But admins are humans (at least most are), and crave companionship and community with their fellow humans. And even when they're alone in the server room, they're never far from a network connection, allowing access to the wide world of wonders on the Internet. Our solution here is not building a new social network - there's enough of those already, and Oracle even has its own Oracle Mix social network already. What we proposed is integrating Solaris features to help engage our system admins with these social networks, building community and bringing them recognition in the workplace, using achievement recognition systems as found in many popular gaming platforms. For instance, if you had a Facebook account, and a group of admin friends there, you could register it with our Social Network Utility For Facebook, and then your friends might see: Alan earned the achievement Critically Patched (April 2012) for patching all his servers. Matt is only at 50% - encourage him to complete this achievement today! To avoid any undue risk of advertising who has unpatched servers that are easier targets for hackers to break into, this information would be tightly protected via Facebook's world-renowned privacy settings to avoid it falling into the wrong hands. A related form of gamification we considered was replacing simple certfications with role-playing-game-style Experience Levels. Instead of just knowing an admin passed a test establishing a given level of competency, these would provide recruiters with a more detailed level of how much real-world experience an admin has. Achievements such as the one above would feed into it, but larger numbers of experience points would be gained by tougher or more critical tasks - such as recovering a down system, or migrating a service to a new platform. (As long as it was an Oracle platform of course - migrating to an HP or IBM platform would cause the admin to lose points with us.) Unfortunately, we couldn't figure out a good way to prevent (if you will) “gaming” the system. 
For instance, a disgruntled admin might decide to start ignoring warnings from FMA that a part is beginning to fail or skip preventative maintenance, in the hopes that they'd cause a catastrophic failure to earn more points for bolstering their resume as they look for a job elsewhere, and not worrying about the effect on your business of a mission critical server going down. More Z's for ZFS Our suggested new feature for ZFS was inspired by the worlds most successful Z-startup of all time: Zynga. Using the Social Network Utility For Facebook described above, we'd tie it in with ZFS monitoring to help you out when you find yourself in a jam needing more disk space than you have, and can't wait a month to get a purchase order through channels to buy more. Instead with the click of a button you could post to your group: Alan can't find any space in his server farm! Can you help? Friends could loan you some space on their connected servers for a few weeks, knowing that you'd return the favor when needed. ZFS would create a new filesystem for your use on their system, and securely share it with your system using Kerberized NFS. If none of your friends have space, then you could buy temporary use space in small increments at affordable rates right there in Facebook, using your Facebook credits, and then file an expense report later, after the urgent need has passed. Universal Single Sign On One thing all the engineers agreed on was that we still had far too many "Single" sign ons to deal with in our daily work. On the web, every web site used to have its own password database, forcing us to hope we could remember what login name was still available on each site when we signed up, and which unique password we came up with to avoid having to disclose our other passwords to a new site. In recent years, the web services world has finally been reducing the number of logins we have to manage, with many services allowing you to login using your identity from Google, Twitter or Facebook. So we proposed following their lead, introducing PAM modules for web services - no more would you have to type in whatever login name IT assigned and try to remember the password you chose the last time password aging forced you to change it - you'd simply choose which web service you wanted to authenticate against, and would login to your Solaris account upon reciept of a cookie from their identity service. Pinning notes to the cloud We also all noted that we all have our own pile of notes we keep in our daily work - in text files in our home directory, in notebooks we carry around, on white boards in offices and common areas, on sticky notes on our monitors, or on scraps of paper pinned to our bulletin boards. The contents of the notes vary, some are things just for us, some are useful for our groups, some we would share with the world. For instance, when our group moved to a new building a couple years ago, we had a white board in the hallway listing all the NIS & DNS servers, subnets, and other network configuration information we needed to set up our Solaris machines after the move. Similarly, as Solaris 11 was finishing and we were all learning the new network configuration commands, we shared notes in wikis and e-mails with our fellow engineers. Users may also remember one of the popular features of Sun's old BigAdmin site was a section for sharing scripts and tips such as these. Meanwhile, the online "pin board" at Pinterest is taking the web by storm. So we thought, why not mash those up to solve this problem? 
We proposed a new BigAddPin site where users could “pin” notes, command snippets, configuration information, and so on. For instance, once they had worked out the ideal Automated Installation manifest for their app server, they could pin it up to share with the rest of their group, or choose to make it public as an example for the world. Localized data, such as our group's notes on the servers for our subnet, could be shared only to users connecting from that subnet. And notes that they didn't want others to see at all could be marked private, such as the list of phone numbers to call for late night pizza delivery to the machine room, the birthdays and anniversaries they can never remember but would be sleeping on the couch if they forgot, or the list of automatically generated completely random, impossible to remember root passwords to all their servers. For greater integration with Solaris, we'd put support right into the command shells — redirect output to a pinned note, set your path to include pinned notes as scripts you can run, or bring up your recent shell history and pin a set of commands to save for the next time you need to remember how to do that operation. Location service for Solaris servers A longer term plan would involve convincing the hardware design groups to put GPS locators with wireless transmitters in future server designs. This would help both admins and service personnel trying to find servers in todays massive data centers, and could feed into location presence apps to help show potential customers that while they may not see many Solaris machines on the desktop any more, they are all around. For instance, while walking down Wall Street it might show “There are over 2000 Solaris computers in this block.” [Note: this proposal was made before the recent media coverage of a location service aggregrator app with less noble intentions, and in hindsight, we failed to consider what happens when such data similarly falls into the wrong hands. We certainly wouldn't want our app to be misinterpreted as “There are over $20 million dollars of SPARC servers in this building, waiting for you to steal them.” so it's probably best it was rejected.] Harnessing the power of the GPU for Security Most modern OS'es make use of the widespread availability of high powered GPU hardware in today's computers, with desktop environments requiring 3-D graphics acceleration, whether in Ubuntu Unity, GNOME Shell on Fedora, or Aero Glass on Windows, but we haven't yet made Solaris fully take advantage of this, beyond our basic offering of Compiz on the desktop. Meanwhile, more businesses are interested in increasing security by using biometric authentication, but must also comply with laws in many countries preventing discrimination against employees with physical limations such as missing eyes or fingers, not to mention the lost productivity when employees can't login due to tinted contacts throwing off a retina scan or a paper cut changing their fingerprint appearance until it heals. Fortunately, the two groups considering these problems put their heads together and found a common solution, using 3D technology to enable authentication using the one body part all users are guaranteed to have - pam_phrenology.so, a new PAM module that uses an array USB attached web cams (or just one if the user is willing to spin their chair during login) to take pictures of the users head from all angles, create a 3D model and compare it to the one in the authentication database. 
While Mythbusters has shown how easy it can be to fool common fingerprint scanners, we have not yet seen any evidence that people can impersonate the shape of another user's cranium, no matter how long they spend beating their head against the wall to reshape it. This could possibly be extended to group users, using modern versions of some of the older phrenological studies, such as giving all users with long grey beards access to the System Architect role, or automatically placing users with pointy spikes in their hair into an easy use mode. Unfortunately, there are still some unsolved technical challenges we haven't figured out how to overcome. Currently, a visit to the hair salon causes your existing authentication to expire, and some users have found that shaving their heads is the only way to avoid bad hair days becoming bad login days. Reaction to these ideas After gathering all our notes on these ideas from the engineering brainstorming meeting, we took them in to present to our management. Unfortunately, most of their reaction cannot be printed here, and they chose not to accept any of these ideas as they were, but they did have some feedback for us to consider as they sent us back to the drawing board. They strongly suggested our ideas would be better presented if we weren't trying to decipher ink blotches that had been smeared by the condensation when we put our pint glasses on the napkins we were taking notes on, and to that end let us know they would not be approving any more engineering offsites in Irish themed pubs on the Friday of a Saint Patrick's Day weekend. (Hopefully they mean that situation specifically and aren't going to deny the funding for travel to this year's X.Org Developer's Conference just because it happens to be in Bavaria and ending on the Friday of the weekend Oktoberfest starts.) They recommended our research techniques could be improved over just sitting around reading blogs and checking our Facebook, Twitter, and Pinterest accounts, such as considering input from alternate viewpoints on topics such as gamification. They also mentioned that Oracle hadn't fully adopted some of Sun's common practices and we might have to try harder to get those to be accepted now that we are one unified company. So as I said at the beginning, don't pester your sales rep just yet for any of these, since they didn't get approved, but if you have better ideas, pass them on and maybe they'll get into our next batch of planning.

    Read the article

  • How do web-developers do web-design when freelancing?

    - by Gerald Blizz
    So I recently got my first job as a junior web-developer. My company creates small/medium sites for a wide variety of customers: auto businesses, wedding agencies, some sauna websites, etc. - hope you get my point. They don't do big serious stuff like bank systems or really big systems; it's mostly small/medium-sized websites for startups and medium-sized businesses. My main skills are PHP/MySQL, and I also know HTML and a bit of CSS/JS/AJAX. I know that a good web-developer must know some backend language (like PHP/Ruby/Python) AND the HTML+CSS+JS+AJAX+jQuery combo. However, I was always wondering. In my company we have a web-designer. In other serious organisations I often see the same setup: web-developers who create the business logic and web-designers who create the design. As far as I know, after the designers produce the design of a website they hand it to the developers either as a PSD or sliced up, and the developers put it together with the logic, but the design is NOT created by the developers. Such separation seems very good for a full-time job, but I am concerned with the question of how freelance web-developers build websites. Do most of them just pay freelance designers to create a design for them? Or do some people do both? The reason I ask: I plan to start some freelancing in my free time after I get good at web-development. But I don't want to create websites with great business logic but poor design. Neither do I want to let someone else create the design for me. I like web-development very much and I am doing quite well at it, and I like design as well, even though I am a bit lost on how to study it and get better at it. But I am scared that going in both directions won't let me become an expert; they seem like two totally different jobs, and getting really good at both seems very hard. But I really want to do both. What should I do? Thank you!

    Read the article

  • Caching WCF javascript proxy on browser

    - by oazabir
    When you use WCF services from Javascript, you have to generate the Javascript proxies by hitting the Service.svc/js. If you have five WCF services, then it means five javascripts to download. As browsers download javascripts synchronously, one after another, it adds latency to page load and slows down page rendering performance. Moreover, the same WCF service proxy is downloaded from every page, because the generated javascript file is not cached on browser. Here is a solution that will ensure the generated Javascript proxies are cached on browser and when there is a hit on the service, it will respond with HTTP 304 if the Service.svc file has not changed. Here’s a Fiddler trace of a page that uses two WCF services. You can see there are two /js hits and they are sequential. Every visit to the same page, even with the same browser session results in making those two hits to /js. Second time when the same page is browsed: You can see everything else is cached, except the WCF javascript proxies. They are never cached because the WCF javascript proxy generator does not produce the necessary caching headers to cache the files on browser. Here’s an HttpModule for IIS and IIS Express which will intercept calls to WCF service proxy. It first checks if the service is changed since the cached version on the browser. If it has not changed then it will return HTTP 304 and not go through the service proxy generation process. Thus it saves some CPU on server. But if the request is for the first time and there’s no cached copy on browser, it will deliver the proxy and also emit the proper cache headers to cache the response on browser. http://www.codeproject.com/Articles/360437/Caching-WCF-javascript-proxy-on-browser Don’t forget to vote.
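    The linked article has the finished module; as a rough sketch of the approach it describes (the class name and details below are illustrative, not the article's code), the flow looks roughly like this:
    using System;
    using System.IO;
    using System.Web;

    // Sketch only: intercept requests for generated WCF proxy scripts
    // (Service.svc/js) and answer with HTTP 304 when the .svc file has not
    // changed since the browser cached its copy; otherwise emit cache headers.
    public class WcfProxyCacheModule : IHttpModule
    {
        public void Init(HttpApplication app)
        {
            app.BeginRequest += OnBeginRequest;
        }

        private static void OnBeginRequest(object sender, EventArgs e)
        {
            HttpContext context = ((HttpApplication)sender).Context;
            string path = context.Request.Path;

            // Only handle the generated proxy endpoints, e.g. /MyService.svc/js
            if (!path.EndsWith(".svc/js", StringComparison.OrdinalIgnoreCase) &&
                !path.EndsWith(".svc/jsdebug", StringComparison.OrdinalIgnoreCase))
                return;

            string svcFile = context.Server.MapPath(path.Substring(0, path.LastIndexOf('/')));
            DateTime lastWrite = File.GetLastWriteTimeUtc(svcFile);

            // If the browser's copy is still current, short-circuit with 304.
            DateTime cachedSince;
            string ifModifiedSince = context.Request.Headers["If-Modified-Since"];
            if (ifModifiedSince != null &&
                DateTime.TryParse(ifModifiedSince, out cachedSince) &&
                lastWrite <= cachedSince.ToUniversalTime().AddSeconds(1))
            {
                context.Response.StatusCode = 304;
                context.Response.SuppressContent = true;
                context.ApplicationInstance.CompleteRequest();
                return;
            }

            // Otherwise let WCF generate the proxy, but emit caching headers so
            // the next visit can be validated instead of re-downloaded.
            context.Response.Cache.SetCacheability(HttpCacheability.Private);
            context.Response.Cache.SetLastModified(lastWrite);
        }

        public void Dispose() { }
    }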

    Read the article

  • Project Corndog: Viva el caliente perro!

    - by Matt Christian
    During one of my last semesters in college we were required to take a class called Computer Graphics, which tried (quite unsuccessfully) to teach us a combination of mathematics, OpenGL, and 3D rendering techniques.  The class itself was horrible, but one little gem of an idea came out of it.  See, the final project in the class was to team up and create some kind of demo or game using techniques we learned in class.  My friend Paul and I teamed up and developed a top down shooter that, given the stringent timeline, was much less a game and much more a collection of 3D objects floating around a screen. The idea itself, however, I found clever and unique, and I decided it was time to spend some time developing a proper version of it.  Project Corndog, as it is tentatively named, pits you as a freshly fried corndog who broke free from the shackles of fair food slavery in a quest to escape the state fair you were born in.  Obviously it's quite a serious game with undertones of racial prejudice, immoral practices, and cheap food sold at high prices. The game itself is a top down shooter in the style of 1942 (NES).  As a delicious corndog you will have to fight through numerous enemies including hungry babies, carnies, and the corndog serial-killer himself, the corndog eating champion!  Other more engaging and frighteningly realistic enemies await as the only thing between you and freedom. Project Corndog is being developed in Visual Studio 2008 with XNA Game Studio 3.1.  It is currently being hosted on Google Code and will be made available as an open source engine in the coming months.

    Read the article

  • ASP.NET hosting: better, faster, cheaper

    - by Fabrice Marguerie
    After seven years with webhost4life, it was time to move on, especially because of all the troubles with webhost4life due to their internal migration to a new hosting environment (the company has been bought out). I've just moved all my websites elsewhere. I'm now using Arvixe and OrcsWeb. I use OrcsWeb for metaSapiens.com. OrcsWeb kindly offers me free ASP.NET hosting because I'm a Microsoft MVP. I'd like to publicly thank OrcsWeb for this, and I invite you to have a look at what they have to offer. I use Arvixe for all my other websites, the major ones being SharpToolbox.com, JavaToolbox.com, AxToolbox.com, Proagora.com, LinqInAction.net, ClairDeBulle.com, and madgeek.com. Moving all my websites wasn't a walk in the park, but it was well worth it. Let's consider what I get with Arvixe:
    - Unlimited disk space
    - Unlimited data transfer
    - Unlimited domains
    - Dedicated application pools
    - Unlimited POP3 and IMAP mailboxes
    - Unlimited SQL Server 2008 databases
    - Unlimited MySQL 5 databases
    - .NET 1.1, 2, 3.5 and 4
    - Full trust
    - IIS 7
    - Daily backups
    - A powerful and easy to use control panel
    - And more!
    All of this for $8 per month. If you don't need all of the above features, you can even get an offer as cheap as $5 per month. You can get even better rates if you use coupon codes, such as TOPHOST (30% discount) or MVCHOSTING (20% discount). All in all, I paid only $134 for two years for a great hosting service! Maybe it's time for you to move too? Disclaimer: the links to OrcsWeb and Arvixe are affiliate links that may bring me some money home if you sign up.

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles. A simplified example. We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same Join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing that isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well. What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion. What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables so how could it do any better? 
Well, it behaves as if it were doing a recompile. And an explicit recompile adds no value at all. (we just go up to 45000 rows since we know the bigger picture now)   Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the Table variable and the temporary table. The table variable is faster up to 8000 rows and then not much in it up to 100,000 rows. Past the 8000 row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that Option (RECOMPILE) hint. Count Dracula and the Horror Join. These tables of integers provide a rather unreal example, so let’s try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable Heap and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’. i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey. If both tables contain a Primary Key on the columns we join on, and both are Table Variables, it took 33 Ms. If one table contains a Primary Key, and the other is a heap, and both are Table Variables, it took 46 Ms. If both Table Variables use a unique constraint, then the query takes 36 Ms. If neither table contains a Primary Key and both are Table Variables, it took 116383 Ms. Yes, nearly two minutes!! If both tables contain a Primary Key, one is a Table Variables and the other is a temporary table, it took 113 Ms. If one table contains a Primary Key, and both are Temporary Tables, it took 56 Ms.If both tables are temporary tables and both have primary keys, it took 46 Ms. Here we see table variables which are joined on their primary key again enjoying a  slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use option Recompile? If you take the rogue query and add the hint, then suddenly, the query drops its time down to 76 Ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans.So where have we got to? Without drilling down into the minutiae of the execution plans we can begin to create a hypothesis. 
If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use option (RECOMPILE) If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble, well- slow queries, unless you give the table hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context). Kevin’s Skew In describing the table-size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100000 rows in which the key column was one number (1) . To this was added eight rows with sequential numbers up to 9. When this was joined to a single-tow Table Variable with a key of 2 it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows in the table, then it adopted exactly the same execution plan as for the temporary table and then all was well. Undeniably, real data does occasionally cause problems to the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, then the plans were identical across a huge range. Conclusions Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in the TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘Heaps’, unindexed. Beware, however, since, as the number of rows increase, joins on Table Variable heaps can easily become saddled by very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Tables Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables. 
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.
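    As a minimal sketch of the kind of test being compared above (not the actual harness behind these timings; the table names and row counts are made up), the join under discussion looks roughly like this:
    -- Two table variables filled with random integers; add PRIMARY KEY to the
    -- column definitions to get the implicit-index variant discussed above.
    DECLARE @a TABLE (n INT NOT NULL);
    DECLARE @b TABLE (n INT NOT NULL);

    INSERT INTO @a (n)
    SELECT TOP (50000) ABS(CHECKSUM(NEWID())) % 100000
    FROM sys.all_columns c1 CROSS JOIN sys.all_columns c2;

    INSERT INTO @b (n)
    SELECT TOP (50000) ABS(CHECKSUM(NEWID())) % 100000
    FROM sys.all_columns c1 CROSS JOIN sys.all_columns c2;

    -- Heap vs heap: the optimizer guesses at the table variables' cardinality,
    -- which is where the very bad plans come from as the row counts grow.
    SELECT COUNT(*) FROM @a a JOIN @b b ON b.n = a.n;

    -- The same join, recompiled at execution time so the optimizer sees the
    -- real row counts.
    SELECT COUNT(*) FROM @a a JOIN @b b ON b.n = a.n OPTION (RECOMPILE);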

    Read the article

  • AceCypher is an Addictive Cypher Slide Puzzle Game

    - by Akemi Iwaya
    Are you ready for a game that will test your logical thinking skills while providing hours of fun? Then you may want to have a look at this awesome cypher slide puzzler! AceCypher is a great puzzle game for those times when you only have a few minutes to play or want a fun way to pass the time while relaxing. The overall premise and style of game play for AceCypher is simple. You move individual rows (left, right) or columns (up, down) one space at a time in order to shift the positions of numbers on the game board through ’round-a-bout’ trading. The goal is to make the four numbers in the red square match the code shown in the upper right corner (including positions). Sounds simple so far, right? But the challenge comes from the random boards you will be given to work with… some will not be too hard to solve while others will tax your brain (and patience!) quite well.

    Read the article

  • SQL Contest – Result of Cartoon Contest

    - by pinaldave
    Earlier we had an excellent contest run with the help of Embarcadero Technologies. We had two different contests on the same day, sponsored by the kind folks at Embarcadero. Here are the details of the winners. 1) Win USD 25 Amazon Gift Cards (10 Units): We had announced that we would award USD 25 Amazon Gift Cards to 10 lucky winners who downloaded DB Optimizer between Nov 29 and Dec 8. Winners will get a USD 25 Amazon Gift Card within 5 days of this blog post at their registered email address. If you do not receive the card, do send me email (Pinal at sqlauthority.com) and I will follow up on the details. Names of the winners: Ramdas Narayanan, Krishna Uppuluri, Donna Kray, Santosh Gupta, Robert Small, Samit Bhatt, Bernd Baumanns, Rodrigo Oriola, Jim Woodin, Alfred Sandou. 2) Win Star Wars R2-D2 Inflatable R/C: We had a cartoon contest. If you have not read the cartoon, I suggest you go over the cartoon story one more time. The task was to give the correct answer along with some interesting note. We selected a few good quotes, put them together, and later picked the winner using a random algorithm. The winner gets a fantastic Star Wars R2-D2 Inflatable R/C. Name of the winner: Aadhar Joshi. He wins the R2-D2. You can read his comment over here. Thank you all for participating in the contest - this was fun. If you liked it, do let me know and we will come up with something new for you next time. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Using textureGrad for anisotropic integration approximation

    - by Amxx
    I'm trying to develop a real-time rendering method using a real-time acquired envmap (cubemap) for lighting. This implies that my envmap can change as often as every frame, and I therefore cannot use any method based on precomputation over the envmap (such as convolution with the BRDF...). So far my method has worked well with a Phong BRDF. For the specular contribution I directly read the value from my samplerCube, and I use mipmap levels + linear filtering to simulate the roughness of the material considered:
    int size = textureSize(envmap, 0).x;
    float specular_level = log2(size * sqrt(3.0)) - 0.5 * log2(ns + 1);
    vec3 env_specular = ks * specular_color * textureLod(envmap, l_g, specular_level).rgb;
    From this method I would like to upgrade to a microfacet-based BRDF. I already have an algorithm for evaluating the shape (including the anisotropic direction) of the reflection, but I cannot manage to read the values I want from my samplerCube. I believe I have to use textureGrad(envmap, l_g, X, Y); with l_g being the reflection direction in global space, but I cannot work out which values to give to X and Y in order to correctly specify the area I want to consider. What values should I give to X and Y in order for textureGrad(envmap, l_g, X, Y) to give the same result as textureLod(envmap, l_g, specular_level)?
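    A rough sketch of one way to pick the gradients (my own approximation, not a definitive answer): build a tangent basis around l_g and scale the two gradient vectors so that the implicit LOD roughly matches the level you would otherwise pass to textureLod; stretching one axis more than the other then gives an anisotropic footprint. Here envmap, l_g, ks, specular_color and specular_level come from the code above; l_g is assumed normalized, and aniso (>= 1.0) is a made-up stretch factor you would derive from your anisotropic BRDF shape:
    // Tangent basis around the reflection direction.
    vec3 up      = (abs(l_g.z) < 0.999) ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
    vec3 tangent = normalize(cross(up, l_g));
    vec3 bitan   = cross(l_g, tangent);
    float size = float(textureSize(envmap, 0).x);
    // Footprint whose implicit LOD is roughly 'specular_level':
    // one texel at mip 0 spans about 1.0 / size, so scale by 2^level.
    float footprint = exp2(specular_level) / size;
    // Stretch the footprint along 'tangent' for anisotropy.
    vec3 X = tangent * footprint * aniso;
    vec3 Y = bitan   * footprint;
    vec3 env_specular = ks * specular_color * textureGrad(envmap, l_g, X, Y).rgb;
    With aniso = 1.0 this should land close to the isotropic textureLod result; whether the hardware's cube-map LOD selection matches exactly is implementation-dependent, so it is worth comparing both paths at a fixed roughness first.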

    Read the article

  • Google Sky Map Turns Your Android Phone into a Digital Telescope

    - by ETC
    Whether you’re an astronomy buff or just somebody looking for a perfect “look how sweet my smartphone is!” application, Google’s Sky Map application for Android phones is a must-have app. If all the application did was show you detailed views of the night sky it would be pretty awesome based on that alone. Where Sky Map dazzles, however, is in linking together the GPS and tilt-sensors on your phone to turn your phone into a sky-watching window. Whatever you point the phone at, the screen displays. Want to see what stars are directly above you despite it being the middle of the day? Point the phone up. Curious what people on the opposite side of the world are seeing? Point the phone down and take a peek right through the Earth. Check out the video below to see the application in action. Google Sky Map is free and works wherever Android does. Google Sky Map [AppBrain]

    Read the article

  • Not use CSS definitions for one <FORM>

    - by Svisstack
    I have a template from ThemeForest and I don't want to edit the CSS from this template, because I don't have time for it. But I want to integrate PayPal buttons into my webpage, and the problem is that the PayPal button form uses a <select> tag for choosing the payment option. My template has an overloaded style for that tag, and it does not look like it should. How can I keep this CSS from applying to that element? If I don't have to, I don't want to edit the template's CSS ;-) The CSS below looks weird - must I edit it to solve this problem? What is the best solution for this?
    /*//// - Forms - ////*/
    form { margin-bottom:20px; }
    body.ie7 form, body.ie8 { margin-bottom:40px; }
    form p { margin-bottom:15px; }
    form label { float:left; width:140px; margin-top:5px; }
    form input, form textarea, form select {
        padding:10px 5px;
        background:#fff url(../img/bg-input.gif) repeat-x top;
        border:1px solid #D9D9D9;
        width:448px;
        border-radius:3px;
        -moz-border-radius:3px;
        -webkit-border-radius:3px;
    }
    form input.small { width:35px; }
    html, body, div, span, object, iframe, h1, h2, h3, h4, h5, h6, p, blockquote, pre, abbr, address, cite, code, del, dfn, em, img, ins, kbd, q, samp, small, strong, sub, sup, var, b, i, dl, dt, dd, ol, ul, li, fieldset, form, label, legend, table, caption, tbody, tfoot, thead, tr, th, td, article, aside, figure, footer, header, hgroup, menu, nav, section, menu, time, mark, audio, video {
        margin:0;
        padding:0;
        border:0;
        outline:0;
        font-size:100%;
        vertical-align:baseline;
        background:transparent;
    }
    Can anyone help me?
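    One way out, assuming you can add a class to the PayPal <form> or to a wrapper around it (the class name paypal-form below is made up), is to override the template's rules just for that form. Because form.paypal-form select is more specific than the template's form select rule, it wins without touching the template file:
    form.paypal-form select,
    form.paypal-form input {
        width: auto;
        padding: 0;
        background: none;
        border: none;
        border-radius: 0;
        -moz-border-radius: 0;
        -webkit-border-radius: 0;
    }
    form.paypal-form label {
        float: none;
        width: auto;
    }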

    Read the article

  • Web Application: Combining View Layer Between PHP and Javascript-AJAX

    - by wlz
    I'm developing a web application using PHP with the CodeIgniter MVC framework, and it has huge real-time client-side functionality needs. This is my first time building a large-scale client-side app, so I am combining PHP with a large set of Javascript modules in one project. As you already know, an MVC framework separates application modules into Model-View-Controller. My concern is about the View layer. I could display the data in the DOM using PHP's built-in script tags, by loading some data in the Controller. Otherwise I could use AJAX to pull the data - treating the Controller like a service only - and display it with Javascript. Here is some visualization. I could put the data in directly from the Controller:
    <label>Username</label>
    <input type="text" id="username" value="<?=$userData['username'];?>"><br />
    <label>Date of birth</label>
    <input type="text" id="dob" value="<?=$userData['dob'];?>"><br />
    <label>Address</label>
    <input type="text" id="address" value="<?=$userData['address'];?>">
    Or pull them using AJAX:
    $.ajax({
        type: "POST",
        url: config.indexURL + "user",
        dataType: "json",
        success: function(data) {
            $('#username').val(data.username);
            $('#dateOfBirth').val(data.dob);
            $('#address').val(data.address);
        }
    });
    So, which approach is better given that my application has complex client-side functionality? On the other hand, PHP-CI has a default mechanism to put the data in directly from the Controller, so why use AJAX?
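    For the AJAX route, the "Controller as a service" side can be as small as a method that returns JSON. A minimal sketch, assuming a CodeIgniter controller named User (matching the "user" URL above) and a made-up user_model (illustrative names, not from the post):
    <?php
    // Minimal sketch: a CodeIgniter controller acting as a JSON service
    // for the AJAX call above. 'user_model' and its method are made up.
    class User extends CI_Controller {

        public function index()
        {
            $this->load->model('user_model');
            $userData = $this->user_model->get_current_user();

            $this->output->set_content_type('application/json');
            $this->output->set_output(json_encode(array(
                'username' => $userData['username'],
                'dob'      => $userData['dob'],
                'address'  => $userData['address']
            )));
        }
    }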

    Read the article

  • Difference between Detach/Attach and Restore/BackUp a DB

    - by SAMIR BHOGAYTA
    Transact-SQL BACKUP/RESTORE is the normal method for database backup and recovery. Databases can be backed up while online. The backup file size is usually smaller than the database files, since only used pages are backed up. Also, in the FULL or BULK_LOGGED recovery model, you can reduce potential data loss by performing transaction log backups. Detaching a database removes the database from SQL Server while leaving the physical database files intact. This allows you to rename or move the physical files and then re-attach. Although one could perform cold backups using this technique, detach/attach isn't really intended to be used as a backup/recovery process. Commonly it is recommended that you use BACKUP/RESTORE for disaster recovery (DR) scenarios and for copying data from one location to another. But this is not absolute: sometimes, for a very large database that you want to move from one location to another, the backup/restore process may take more time than you would like; in this case, detaching/attaching the database is a better way, since you can attach a workable database very quickly. But you need to be aware that detaching a database brings it offline for a short time, and that detach/attach does not provide a DR function. For more information about detaching and attaching databases, you can refer to: Detaching and Attaching Databases http://technet.microsoft.com/en-us/library/ms190794.aspx
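    For reference, the two approaches look roughly like this (database names, logical file names and paths are examples only, not from the article):
    -- Backup/restore: online, and supports log backups in FULL/BULK_LOGGED.
    BACKUP DATABASE SalesDB
    TO DISK = N'D:\Backups\SalesDB.bak' WITH INIT;

    RESTORE DATABASE SalesDB_Copy
    FROM DISK = N'D:\Backups\SalesDB.bak'
    WITH MOVE 'SalesDB'     TO N'E:\Data\SalesDB_Copy.mdf',
         MOVE 'SalesDB_log' TO N'F:\Logs\SalesDB_Copy_log.ldf';

    -- Detach/attach: takes the database offline and just relocates the files.
    EXEC sp_detach_db @dbname = N'SalesDB';
    -- ...copy or move SalesDB.mdf / SalesDB_log.ldf at the OS level...
    CREATE DATABASE SalesDB
    ON (FILENAME = N'E:\Data\SalesDB.mdf'),
       (FILENAME = N'F:\Logs\SalesDB_log.ldf')
    FOR ATTACH;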

    Read the article

  • Which tools you use for development in your company?…Please be exact [closed]

    - by predrag.music
    If you are a professional php/(my/postgre/?)sql/? developer and working in a professional team ... I would like to know which tools you use for development in your company. I do not care which tool is better or worse, but "which tools you use", if it is not a TOP SECRET :) For example, these are just some of the tools i/we use (first those used most (in general)): Pen, paper lots of cofee, cola ... let me think ... mmmm ... yeah more cofee :) All kinds of books (a lots of books) OS: Win / MacOS X Server: Hosted (CentOS )/ At work Mac OS X Dev server: XAMPP / MAMP / LAMP Editor: Notepad++ IDE: Netbeans / Zend Studio / Eclipse Version Control System: Mercurial / SVN FTP: Filezilla mostly / ... Passwords: KeePass js / ajax: jQuery / pure js / jQuery UI Framework:CI / Zend / pure php Database: MySQL / Other ORM: Framework layer db (Not an ORM I know but...) / Doctrine (2) / no ORM Debugging: Xdebug (PHP) / firebug (ajax/js/html/css/...) / framework profiler (stuff) / ... (x) Dreaming: About... Thinking: Not about chaos in ? direction .... n Anything else that comes to mind n+1 Zilion other stuff i know but i can't remember ... 8 some other stuff i (don't) remember i forgot, give up, delete, lost, said to myself never again, i haven't had time stuff, have on computer stuff but can't find or don't even know i have it on my computer at least 2-3 or more times, stuff I said to myself i'll check later and never checked again for all sort of "perfectly justified" reasons (time, memory, wife :), whatever,...), ... what is the reason i'm asking this?:) 8 and beyond looking forward to see a lot of answers ?

    Read the article

  • Can Foswiki be used as a distributed Redmine replacement? [closed]

    - by Tobias Kienzler
    I am quite familiar with and love using git, among other reasons due to its distributed nature. Now I'd like to set up some similarly distributed (FOSS) Project Management software with features similar to what Redmine offers, such as Issue & time tracking, milestones Gantt charts, calendar git integration, maybe some automatic linking of commits and issues Wiki (preferably with Mathjax support) Forum, news, notifications Multiple Projects However, I am looking for a solution that does not require a permanently accesible server, i.e. like in git, each user should have their own copy which can be easily synchronized with others. However it should be possible to not have a copy of every Project on every machine. Since trac uses multiple instances for multiple projects anyway, I was considering using that, but I neither know how well it adapts to simply giting the database itself (which would be be easiest way to handle the distribution due to git being used anyway), nor does it include all of Redmine's feature. After checking http://www.wikimatrix.org for Wikis with integrated tracking system and RCS support, and filtering out seemingly stale project, the choices basically boil down to Foswiki, TWiki and Ikiwiki. The latter doesn't seem to offer as many usability features, and in the TWiki vs Foswiki issue I tend to the latter. Finally, there is Fossil, which starts from the other end by attempting to replace git entirely and tracking itself. I am however not too comfortable with the thought of replacing git, and Fossil's non-SCM features don't seem to be as developed. Now before I invest too much time when someone else might already have tried this, I basically have two questions: Are there crucial features of Project Management software like Redmine that Foswiki does not provide even with all the extensions available? How to set Foswiki up to use git instead of the perl RcsLite?

    Read the article

  • What are benefit/drawbacks of classifying defects during a peer code review

    - by DXM
    About 3 months ago, our engineering group rolled out Review Board to be used for all peer code reviews. Today, I had a discussion with one of the people involved in that process and found out that we are already looking for a replacement (possibly something commercial) because of several missing features. One of the features that is apparently asked by many people is the ability to classify/categorize each code review comment (i.e. is it a style issue, coding convention, resource leak, logic error, crash... whatever). For those teams that regularly practice code review, is this categorization a common practice? Do you do it? have you done it in the past? Is it good/bad? On one hand, it gives the team some more metrics and possibly will indicate more specific areas where developers may potentially need to be trained in (at least that seems to be the argument). Are there other benefits? And on the other hand, and this is my concern, is that it will slow down code review process that much more. As a team lead, I've done a fairly large share of reviews, and I've always liked the ability, to highlight a chunk of code, hammer off a comment and move on as fast as possible. Although I haven't tried it personally, I have a feeling that expanding that combo box every time and scrolling/searching for the right category would feel like something is tripping you. Also if we start keeping metrics on this stuff, my other concern is that valuable code review meeting time will be spent on arguing whether something is a logic error or if it should be classified as a crash.

    Read the article

  • Trying to install Canon iP7250 printer

    - by Chris Burmajster
    I'm trying to install my Canon iP7250 printer on a fresh install of 13.10 64-bit. Last time I installed it successfully via http://handytutorial.com/install-canon-printer-driver-for-ubuntu-13-04-12-10-12-04, which entailed adding a repository via the terminal and then updating it. I then opened Synaptic and searched for my printer model, and Synaptic installed everything. This time round I added the repositories but the update failed with the following error: Failed to fetch http://ppa.launchpad.net/michael-gru...amd64/Packages 404 Not Found Failed to fetch http://ppa.launchpad.net/michael-gru...-i386/Packages 404 Not Found Some index files failed to download. They have been ignored, or old ones used instead. An internet search failed to find another way of installing this printer, so I'm stuck. My laptop also runs 64-bit 13.10 and the printer works there, as I installed it months ago. Is there any way I can take something off that machine to use on this new one? Thank you all in advance. P.S. I have just found and downloaded the official Canon driver from the Canon website and extracted it, but I don't know how to install it.

    Read the article

  • Access Log Files

    - by Matt Watson
    Some of the simplest things in life make all the difference. For a software developer who is trying to solve an application problem, being able to access log files, the Windows event viewer, and other details is priceless. But ironically enough, most developers aren't even given access to them. Developers have to escalate the issue to their manager or a system admin to retrieve the needed information. Some companies create workarounds to solve the problem or use third party solutions.
    Home grown solutions to access log files
    Some companies roll their own solution to try and solve the problem. These solutions can be great but are not always real time, and don't account for the Windows event viewer, config files, server health, and other information that is needed to fix bugs.
    - VPN or FTP access to log file folders
    - Create programs to collect log files and move them to a centralized server
    - Modify code to write log files to a centralized place
    Expensive solutions to access log files
    Some companies buy expensive solutions like Splunk or other log management tools. But in a lot of cases that is overkill when all the developers need is the ability to just look at log files, not do analytics on them.
    There has to be a better solution to access log files
    Stackify recently came up with a perfect solution to the problem. Their software gives developers remote visibility to all the production servers without allowing them to remote desktop in to the machines. They can get real time access to log files, the Windows event viewer, config files, and other things that developers need. This allows the entire development team to be more involved in the process of solving application defects.
    Check out their product to learn more: http://www.Stackify.com

    Read the article

  • Designing a large database with multiple sources

    - by CatchingMonkey
    I have been tasked with redesigning, or at worst optimising, the structure of a database for a data warehouse. Currently, the database has 4 other source databases (which is due to expand to X number of others), all of which have their own data structures, naming conventions, etc. At the moment an overnight SSIS package pulls the data from the various sources and then, for each source, converts the data into a standardised, usable format. These tables are then appended to each other, creating a 60m row, 40 column beast! This table is then used in a variety of ways, from an OLAP cube to a web front end. The structure has been in place for a very long time, and through the work I have done I have been able to prove the advantages of normalisation, which is the way I would like to go. The problem for me is that the overnight process takes so long that I don't then want to spend additional time normalising the last table into something usable. Can anyone offer any insight or ideas into the best way to restructure or optimise the database efficiently? Edit: All the databases are MS SQL Server 2008 R2. Thanks in advance. CM

    Read the article

  • 2012 Oracle Fusion Middleware Innovation Awards for Oracle Exalogic

    - by Sanjeev Sharma
    Companies from around the world were honored for their innovative solutions using Oracle Fusion Middleware. This year’s 27 award winners, representing 11 countries and a wide span of industries, wowed the judges with a range of projects across eight product categories. 4 awards were given out to customers who demonstrated innovative application of Oracle Exalogic for their mission-critical applications.Below is an overview of the 4 businesses that won the Oracle Fusion Middleware Innovation Award for Oracle Exalogic this year. Company: Netshoes About: Leading online retailer of sporting goods in Latin America.Challenges: Rapid business growth resulted in frequent outages and poor response-time of online store-front Conventional ad-hoc approach to horizontal scaling resulted in high CAPEX and OPEX Poor performance and unavailability of online store-front resulted in revenue loss from purchase abandonment Solution: Consolidated ATG Commerce and Oracle WebLogic running on Oracle Exalogic.Business Impact:Reduced abandonment rates resulting in a two-digit increase in online conversion rates translating directly into revenue up-liftCompany: ClaroAbout: Leading communications services provider in Latin America.Challenges: Support business growth over the next 3  - 5 years while maximizing re-use of existing middleware and application investments with minimal effort and risk Solution: Consolidated Oracle Fusion Middleware components (Oracle WebLogic, Oracle SOA Suite, Oracle Tuxedo) and JAVA applications onto Oracle Exalogic and Oracle Exadata. Business Impact:Improved partner SLA’s 7x while improving throughput 5X and response-time 35x for  JAVA applicationsCompany: ULAbout: Leading safety testing and certification organization in the world.Challenges: Transition from being a non-profit to a profit oriented enterprise and grow from a $1B to $5B in annual revenues in the next 5 years Undertake a massive business transformation by aligning change strategy with execution Solution: Consolidated Oracle Applications (E-Business Suite, Siebel, BI, Hyperion) and Oracle Fusion Middleware (AIA, SOA Suite) on Oracle Exalogic and Oracle ExadataBusiness Impact:Reduced financial and operating risk in re-architecting IT services to support new business capabilities supporting 87,000 manufacturersCompany: Ingersoll RandAbout: Leading manufacturer of industrial, climate, residential and security solutions.Challenges: Business continuity risks due to complexity in enforcing consistent operational and financial controls; Re-active business decisions reduced ability to offer differentiation and compete Solution: Consolidated Oracle E-business Suite on Oracle Exalogic and Oracle ExadataBusiness Impact:Service differentiation with faster order provisioning and a shorter lead-to-cash cycle translating into higher customer satisfaction and quicker cash-conversionCheck out the winners of the Oracle Fusion Middleware Innovation awards in other categories here.

    Read the article

  • My Thoughts on Reinventing the Wheel

    - by Matt Christian
    For awhile now I've known that XNA Game Studio contains built-in scene management however I still built my own for each engine.  Obviously it was redundant and probably inefficient due to the amount of searching and such I was required to do.  And even though I knew this, why did I continue to do it? I've always been very detail oriented, probably part of my mild OCD.  But when it comes to technology I believe in both reinventing the wheel and not reinventing it all at the same time.  Here's what I imagine most programmers doing.  When they pick up XNA, they're typically focused on 'I want to make a game with as little code as possible'.  This is great and XNA GS is a great tool, but what will it do for programmers that want to make games with XNA?  If they don't have any prior experience with other tools they will probably not ever learn scene management. So is it better to leverage code and risk not learning valuable techniques, or write it all yourself and fight through the headaches and hours of time you may spend on something already built?

    Read the article

  • What is the kd tree intersection logic?

    - by bobobobo
    I'm trying to figure out how to implement a KD tree. On page 322 of "Real time collision detection" by Ericson The text section is included below in case Google book preview doesn't let you see it the time you click the link text section Relevant section: The basic idea behind intersecting a ray or directed line segment with a k-d tree is straightforward. The line is intersected against the node's splitting plane, and the t value of intersection is computed. If t is within the interval of the line, 0 <= t <= tmax, the line straddles the plane and both children of the tree are recursively descended. If not, only the side containing the segment origin is recursively visited. So here's what I have: (open image in new tab if you can't see the lettering) The logical tree Here the orange ray is going thru the 3d scene. The x's represent intersection with a plane. From the LEFT, the ray hits: The front face of the scene's enclosing cube, The (1) splitting plane The (2.2) splitting plane The right side of the scene's enclosing cube But here's what would happen, naively following Ericson's basic description above: Test against splitting plane (1). Ray hits splitting plane (1), so left and right children of splitting plane (1) are included in next test. Test against splitting plane (2.1). Ray actually hits that plane, (way off to the right) so both children are included in next level of tests. (This is counter-intuitive - shouldn't only the bottom node be included in subsequent tests) Can some one describe what happens when the orange ray goes through the scene correctly?
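    The piece that resolves the confusion is that the interval is clipped as you descend: when a segment straddles a plane, the near child is visited with tmax cut down to the plane's t value (and the far child with the segment re-origined at the crossing point), so by the time splitting plane (2.1) is tested the sub-segment inside node (1)'s left child no longer reaches it, and only the near (bottom) child is descended. A simplified sketch of that traversal, following the description quoted above rather than the book's listing (Vec3 is assumed to be a small vector type with [], + and *; VisitLeaf is an assumed leaf handler):
    struct KdNode {
        int     axis;       // 0 = x, 1 = y, 2 = z; -1 marks a leaf
        float   splitPos;   // plane position along 'axis'
        KdNode *child[2];   // child[0] = below the plane, child[1] = above it
        // leaf contents (triangles, objects, ...) omitted
    };

    // Visit the nodes overlapped by the segment p + t*d, 0 <= t <= tmax.
    void VisitNodes(const KdNode *node, Vec3 p, Vec3 d, float tmax)
    {
        if (node == nullptr) return;
        if (node->axis < 0) { VisitLeaf(node, p, d, tmax); return; }

        // Child on the same side of the plane as the segment origin.
        int   first = p[node->axis] > node->splitPos;
        float dAxis = d[node->axis];

        if (dAxis == 0.0f) {
            // Segment parallel to the splitting plane: stays on the near side.
            VisitNodes(node->child[first], p, d, tmax);
            return;
        }

        float t = (node->splitPos - p[node->axis]) / dAxis;

        if (t >= 0.0f && t <= tmax) {
            // Straddles the plane: near part first, clipped at t,
            // then the far part, re-origined at the crossing point.
            VisitNodes(node->child[first], p, d, t);
            VisitNodes(node->child[first ^ 1], p + d * t, d, tmax - t);
        } else {
            // The crossing lies outside this sub-segment: near side only.
            VisitNodes(node->child[first], p, d, tmax);
        }
    }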

    Read the article
