Search Results

Search found 43110 results on 1725 pages for 'noob question'.


  • How do NTP Servers Manage to Stay so Accurate?

    - by Akemi Iwaya
    Many of us have had the occasional problem with our computers and other devices retaining accurate time settings, but a quick sync with an NTP server makes all well again. But if our own devices can lose accuracy, how do NTP servers manage to stay so accurate? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. Photo courtesy of LEOL30 (Flickr).

    The Question: SuperUser reader Frank Thornton wants to know how NTP servers are able to remain so accurate: I have noticed that on my servers and other machines, the clocks always drift so that they have to sync up to remain accurate. How do the NTP server clocks keep from drifting and always remain so accurate?

    The Answer: SuperUser contributor Michael Kjorling has the answer for us: NTP servers rely on highly accurate clocks for precision timekeeping. Common time sources for central NTP servers are atomic clocks or GPS receivers (remember that GPS satellites have atomic clocks onboard). These clocks are defined as accurate since they provide a highly exact time reference. There is nothing magical about GPS or atomic clocks that makes them tell you exactly what time it is. Because of how atomic clocks work, they are simply very good at keeping accurate time once they have been told what time it is (since the second is defined in terms of atomic effects). In fact, it is worth noting that GPS time is distinct from the UTC we are more used to seeing. These atomic clocks are in turn synchronized against International Atomic Time (TAI) in order to accurately tell not only the passage of time, but also the time itself.

    Once you have an exact time on one system connected to a network like the Internet, it is a matter of protocol engineering to enable the transfer of precise times between hosts over an unreliable network. In this regard a Stratum 2 (or farther from the actual time source) NTP server is no different from your desktop system syncing against a set of NTP servers. Once you have a few accurate times (as obtained from NTP servers or elsewhere) and know the rate of advancement of your local clock (which is easy to determine), you can calculate your local clock’s drift rate relative to the “believed accurate” passage of time. Once locked in, this value can then be used to continuously adjust the local clock so that it reports values very close to the accurate passage of time, even if the local real-time clock itself is highly inaccurate. As long as your local clock is not highly erratic, this should allow it to keep accurate time for some time even if your upstream time source becomes unavailable for any reason. Some NTP client implementations (probably most ntpd daemon or system service implementations) do this, and others (like ntpd’s companion ntpdate, which simply sets the clock once) do not. This is commonly referred to as a drift file because it persistently stores a measure of clock drift, but strictly speaking it does not have to be stored as a specific file on disk.

    In NTP, Stratum 0 is by definition an accurate time source. Stratum 1 is a system that uses a Stratum 0 time source as its time source (and is thus slightly less accurate than the Stratum 0 source). Stratum 2 again is slightly less accurate than Stratum 1 because it syncs its time against the Stratum 1 source, and so on.
In practice, this loss of accuracy is so small that it is completely negligible in all but the most extreme of cases. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
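
    The drift-rate bookkeeping described above is easy to picture with a small sketch. The following Python fragment is an illustrative addition (it is not part of the original answer and is not how ntpd is actually implemented): it compares the local clock against two reference readings taken some interval apart, derives a drift rate, and uses that rate to correct later local readings.

        def measure_drift_rate(ref_t1, local_t1, ref_t2, local_t2):
            """Estimate how fast the local clock runs relative to a reference clock.

            1.0 means both clocks advance at the same rate; 1.00005 means the
            local clock gains about 50 microseconds per second.
            """
            return (local_t2 - local_t1) / (ref_t2 - ref_t1)

        def corrected_time(local_now, local_ref, ref_ref, drift_rate):
            """Map a raw local reading onto the reference timescale using the drift rate."""
            return ref_ref + (local_now - local_ref) / drift_rate

        # Hypothetical readings: the reference says 100 s passed, the local clock says 100.005 s.
        rate = measure_drift_rate(0.0, 0.0, 100.0, 100.005)
        print(rate)                                           # 1.00005 -> local clock runs slightly fast
        print(corrected_time(200.01, 100.005, 100.0, rate))   # ~200.0 -> corrected reading matches the reference

    Real NTP clients refine this estimate continuously rather than from just two samples, but the idea is the same.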

    Read the article

  • SQL SERVER – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
    After the excellent response to my quiz – Why SELECT * throws an error but SELECT COUNT(*) does not? – I have decided to ask another puzzling question to all of you. I am running this test on SQL Server 2008 R2. Here is the quick scenario for my setup:

    Create a table
    Insert 1,000 records
    Check the statistics
    Now insert 10 times more records (10,000)
    Check the statistics – they will NOT be updated

    Note: Auto Update Statistics and Auto Create Statistics are TRUE for the database. Expected result – statistics should be updated (see SQL SERVER – When are Statistics Updated – What triggers Statistics to Update). Now the question is: why are the statistics not updated? The common answer is that we can update the statistics ourselves using UPDATE STATISTICS TableName WITH FULLSCAN, ALL. However, the solution I am looking for is one where the statistics are updated automatically, based on the algorithm mentioned here. Now the solution is to ____________________. Vinod Kumar is not allowed to participate here as he is the one who helped me build this puzzle. I will publish the solution next week. Please leave a comment, and if your comment contains a valid answer, I will publish it with due credit. Here is the script to reproduce the scenario I mentioned.

        -- Execution Plans Difference
        -- Create Sample Database
        CREATE DATABASE SampleDB
        GO
        USE SampleDB
        GO
        -- Create Table
        CREATE TABLE ExecTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
        GO
        -- Insert One Thousand Records
        -- INSERT 1
        INSERT INTO ExecTable (ID,FirstName,LastName,City)
        SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
        'Bob',
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
        WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
        WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
        WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
        WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
        WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
        ELSE 'Houston' END
        FROM sys.all_objects a
        CROSS JOIN sys.all_objects b
        GO
        -- Display statistics of the table - none listed
        sp_helpstats N'ExecTable', 'ALL'
        GO
        -- Select Statement
        SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York'
        GO
        -- Display statistics of the table
        sp_helpstats N'ExecTable', 'ALL'
        GO
        -- Replace your Statistics over here
        -- NOTE: Replace your _WA_Sys with stats from above query
        DBCC SHOW_STATISTICS('ExecTable', _WA_Sys_00000004_7D78A4E7);
        GO
        --------------------------------------------------------------
        -- Round 2
        -- Insert Ten Thousand Records
        -- INSERT 2
        INSERT INTO ExecTable (ID,FirstName,LastName,City)
        SELECT TOP 10000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
        'Bob',
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
        WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
        WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
        WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
        WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
        WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
        ELSE 'Houston' END
        FROM sys.all_objects a
        CROSS JOIN sys.all_objects b
        GO
        -- Select Statement
        SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York'
        GO
        -- Display statistics of the table
        sp_helpstats N'ExecTable', 'ALL'
        GO
        -- Replace your Statistics over here
        -- NOTE: Replace your _WA_Sys with stats from above query
        DBCC SHOW_STATISTICS('ExecTable', _WA_Sys_00000004_7D78A4E7);
        GO
        -- You will notice that Statistics are still updated with 1000 rows
        -- Clean up Database
        DROP TABLE ExecTable
        GO
        USE MASTER
        GO
        ALTER DATABASE SampleDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        GO
        DROP DATABASE SampleDB
        GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics, Statistics
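
    As a side note to the puzzle script above (and not the blanked-out solution, which the post deliberately withholds), a generic way to watch the behaviour while experimenting is to check when each statistic was last updated with STATS_DATE. This snippet is an editorial illustration added here, not part of the original post; run it against the SampleDB created above after each INSERT/SELECT step.

        -- List the statistics on ExecTable and when each one was last updated
        SELECT s.name AS stats_name,
               STATS_DATE(s.object_id, s.stats_id) AS last_updated
        FROM sys.stats AS s
        WHERE s.object_id = OBJECT_ID('ExecTable');
        GO
        -- Forcing an update by hand (the workaround mentioned in the post)
        UPDATE STATISTICS ExecTable WITH FULLSCAN, ALL;
        GO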

    Read the article

  • Interviews Gone Bad.....Now What Do I Do?

    - by david.talamelli
    We have all done it at some stage of our working careers - you know those times when you leave an interview and then you think to yourself "why didn't I ask that question" or "I can't believe I said that" or "how could I have forgotten to say that". It happens to everyone but how you handle things moving forwards could be critical in helping you land that dream job. There is nothing better than seeing that dream job with the dream company that you are looking to work for advertised (or in some cases getting called by the Recruiter to let you know about that job). The role may seem perfect and it could be just what you are looking for and it is with the right company as well. You have sent in your resume and have subsequently had one, two or maybe three interviews for the role. After each step of the process you get a little bit more excited about the role as you start to think about your work day in your new role/company. Then it happens, you get it: you get The Phone Call to inform you that you have not been successful in securing the position that you have invested so much time and effort into. It can be disappointing to hear this news but what you do next is important in potentially keeping that door open for future opportunities with that company. How you handle yourself in this situation is important: if any of you remember the Choose Your Own Adventure Books do you: Tell the Recruiter (maybe get aggressive) they are wrong in their assessment and that you are the right candidate for the role Switch off and say ok thanks and hang up without engaging in any further dialogue Thank the company for their time and enquire if there may be any other opportunities in the future to explore If you chose the first option - the company in question may consider whether or not to look at you for other opportunities. How you handle yourself in the recruitment process could be an indication of how you would deal with clients/colleagues in your role and the impression that you leave a potential employer may be what sticks in their mind when they think of you (eg: isn't that the person who couldn't handle it when we told him he wasn't right for our role). The second option potentially produces a similar outcome. If you rush to get off the phone, the company may come back to you to talk about other roles when they come up, but you also leave open the potential thought with the company you were only interested in that role and therefore not interested in any other opportunities. Why take the risk of the company thinking that and potentially not getting back to you in the future. By picking the third option, you actively engage with the company and keep the dialogue open for future discussions. Ok, so you didn't get the role you interviewed for - you don't know who else the company may have been interviewing - maybe they found someone who was a better fit, or maybe there were too many boxes you didn't tick to step straight into that specific role. Take a deep breath and keep the company engaged. You are fresh in their mind - take advantage of that fact and let them know that while you respect their decision, that you are still interested in the company and would like to be kept in mind for future roles. Ask if it is ok to keep in touch and when they would like to keep in touch, as long as you are interested let them know you are still interested. You do need to balance that though if you come across as too keen or start stalking people - it could equally damage your brand. 
Companies normally have more than one open role. New roles are created all the time, circumstances change and hiring people is not a static business, it changes course from everyone's best laid initial plans. If you didn't get that initial role you wanted, keep the door open with that company so that when those new roles do come up or when circumstances do change you have already laid the ground to step into those new positions.

    Read the article

  • When to implement: Together with or after the source product?

    - by Jeremy Oosthuizen
    Somebody recently relayed a prospect's question to me: How hard would it be to implement OUBI after the source product (CC&B, WAM or NMS) has already been implemented? Fact is that MOST non-OUBI Data Warehouse / Business Intelligence implementations take place after the source application(s) are in place and hopefully stable. If an organization decides that they need better reporting and management information, then the logical path (see The Data Warehouse Institute's Data Warehouse Maturity Model) is to a Data Warehouse -- no matter when their last applications were implemented. If there is a pre-built Data Warehouse for their specific application, or even for the desired business process in their industry, they're in luck. Else they have to design and build from scratch, using a toolset. The implementation of a toolset is unlike the implementation of OUBI which, like OBI Apps, contain pre-built ETL routines and user content. Much has been written before about the advantages of that. So, because OUBI is designed specifically for Oracle Utilities transactional products, we often implement them in parallel -- with OUBI lagging a little behind by necessity, like Reporting. Customers know from the start they're going to need the solution, and therefore purchase the products at the same time. My biggest argument FOR a parallel installation/implementation of OUBI with the source product is two-fold: - There could be things (which is the technical term for data elements) that customers figure out they need when implementing OUBI, which are often easier added to the source product's implementation project, than to add later; - OUBI's ETL often points out errors (severe or not) with converted data, which are easier to fix during the source product's implementation project, or it may even be impossible to fix afterwards. The Conversion routines sometimes miss these errors, because the source system can live with the not-quite-perfect converted data. If the data can't be properly extracted, i.e. the proper Dimensions linked to the Facts, then it can't get into OUBI. That means it can't be analyzed effectively along with the rest of the organization's data. Then there is also the throw-away-work argument, which may be significant. The operational / transactional system cannot go live without reports on Day 1. A lot of those reports would be taken care of by the implementation of OUBI. If OUBI is implemented after go-live, those reports STILL have to be built during the source product's implementation project, but they become throw-away after the OUBI implementation. I have sometimes been told that it is better to implement OUBI after the source product, because it cuts down on scope and risk for the source product's implementation project. All I can say to that, is bah humbug. No, seriously, given the arguments above, planning has to include the OUBI implementation and it has to be managed properly -- just like any other implementation. If so, it should not add any risk and it should be included in the scope from the start. The answer to the prospect's question is therefore that it is not that much more difficult; after all, most DW/BI implemenations are done like that. They just have to consider the points above.

    Read the article

  • Opening the Internet Settings Dialog and using Windows Default Network Settings via Code

    - by Rick Strahl
    Ran into a question from a client the other day that asked how to deal with Internet Connection settings for running HTTP requests. In this case this is an old FoxPro app and it's using WinInet to handle the actual HTTP connection. Another client asked a similar question about using the IE Web Browser control and configuring connection properties. Regardless of the platform or tools used to do HTTP connections, you can probably configure custom connection and proxy settings in your application to set up HTTP connection settings manually. However, this is a repetitive process: each application requires you to track system information in your application, which is undesirable. Often it's much easier to rely on the system-wide proxy settings that Windows provides via the Internet Settings dialog. The dialog is a Control Panel applet (inetcpl.cpl) and is the same dialog that you see when you pop up Internet Explorer's Options dialog. This dialog controls the Windows connection properties that determine how the Windows HTTP stack connects to the Internet and how proxies are used if configured. Depending on how the HTTP client is configured, it can typically inherit and use these global settings.

    Loading the Settings Dialog Programmatically: The settings dialog is a Control Panel applet with the name inetcpl.cpl, and you can use any shell execution mechanism (Run dialog, ShellExecute API, Process.Start() in .NET etc.) to invoke the dialog. Changes made there are immediately reflected in any applications that use the default connection settings. In .NET you can simply do this to bring up the Internet Settings dialog with the Connection tab enabled:

        Process.Start("inetcpl.cpl", ",4");

    In FoxPro you can simply use the RUN command to execute inetcpl.cpl:

        lcCmd = "inetcpl.cpl ,4"
        RUN &lcCmd

    Using the Default Connection/Proxy Settings: When using WinInet you specify the HTTP connect type in the call to InternetOpen() like this (FoxPro code here):

        hInetConnection=;
            InternetOpen(THIS.cUserAgent,0,;
            THIS.chttpproxyname,THIS.chttpproxybypass,0)

    The second parameter of 0 specifies that the default system proxy settings should be used, taking the settings from the Internet Settings Connections tab. Other connection options for HTTP connections include 1 - direct (no proxies and ignore system settings) and 3 - explicit proxy specification. In most situations a connection mode setting of 0 should work. In .NET, HTTP connections by default are direct connections, so you need to explicitly specify a default proxy or proxy configuration to use. The easiest way to do this is at the application level in the config file:

        <configuration>
          <system.net>
            <defaultProxy>
              <proxy bypassonlocal="False" autoDetect="True" usesystemdefault="True" />
            </defaultProxy>
          </system.net>
        </configuration>

    You can do the same sort of thing in code, specifying the proxy explicitly using System.Net.WebProxy.GetDefaultProxy(). So when making HTTP calls to Web Services or using the HttpWebRequest class you can set the proxy with:

        StoreService.Proxy = WebProxy.GetDefaultProxy();

    All of this is pretty easy to deal with and in my opinion is a way better choice than having to track connection settings in your own application.
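
    To make the .NET side concrete, here is a small illustrative C# sketch. It is an editorial addition rather than code from the post (the URL is a placeholder), and it uses WebRequest.GetSystemWebProxy(), a newer alternative to the WebProxy.GetDefaultProxy() call mentioned above, to pick up the Windows default proxy settings:

        using System;
        using System.Net;

        class ProxyDemo
        {
            static void Main()
            {
                // Option 1: explicitly attach the system-wide proxy settings
                // (the ones maintained in the Internet Settings dialog) to one request.
                var request = (HttpWebRequest)WebRequest.Create("http://example.com/");
                request.Proxy = WebRequest.GetSystemWebProxy();

                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine(response.StatusCode);
                }

                // Option 2: set a process-wide default once, so later requests
                // (including web service proxies) inherit it automatically.
                WebRequest.DefaultWebProxy = WebRequest.GetSystemWebProxy();
            }
        }
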
    Plus, if you use the default settings, most of the time it's highly likely that the connection settings are already properly configured, making further configuration rare. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in Windows, HTTP, .NET, FoxPro

    Read the article

  • Many Stack Overflow users' pages have no Google PageRank and they are not indexed, why?

    - by Marco Demaio
    If you go to my user page on Stack Overflow and you check it with the Google Toolbar, you can see it has no PageRank at all (this does happen for almost any user page, even people with much higher reputation, the only exceptions seem to be the users in page 1, and some other users they have PR). My user page's Page Rank is not only zero, but not calculated at all. When PR is 0 or less than 1, but calculated the Google bar shows white, but when the PR is not even calculated like in my user page the Google bar shows in grey. I further more discovered that my user page is NOT EVEN INDEXED on Google, simple test is searching on Google for the exact page url: "http://stackoverflow.com/users/260080/marco-demaio" and you will see no result. The question is how can this be??? This is really weird to me because of the following reason: If you search on Google for "Marco Demaio" on Stack Overflow only (you can do this by searching "site:stackoverflow.com Marco Demaio") the search result shows hundreds of 'asking/answering questions' pages where I was 'tagged'!!! Let's check one of these: the 1st one that appears now (shows one of the question I asked). We can be sure this page is indexed in Google because comes out in a search. Moreover, its PR is calculated. It's probably nearly zero. Still, some PR flows there, the PR bar is not grey, but white: The page shown above has got links to my own user page. I checked the source code of the page shown above and the links are not hidden or set with a rel="nofollow", moreover I can't see any meta character excluding the links on the page from being followed. So what's happening? Why Google does not see my user page at all. Did Stack Overflow do something to achieve this? If yes what did they do? Any explanation really appreciates (as always). P.S. obviously I checked also the code of my user page, but I could not find meta tags excluding Google search for the page. P.S. 2 in a desperate adventure I also checked Stack Overflow's robots.txt but it does not seem to exclude user pages. UPDATE 1 following up on some answers, I did some more research. Excluding for a while the PR problem (since PR is not science), and looking only at the user page on Stack Overflow NOT BEING INDEXED problem: pages do not seem to be indexed by Google because of the user reputation, this user for instance has got NOW 200 points less reputation than me and his page is indexed (while mine not). It does not seem even to be connected with months you have been on Stack Overflow, this user (almost my same reputation) has been there for 3 months only and his page is indexed (while mine not and I have been a user for 7 months). It's bizarre! UPDATE February/2011 As of today, the page got indexed by Google at least when you search for "site:stackoverflow.com Marco Demaio" it's the 1st page. The amazing thing is that it has still got NO PageRank at all: Google toolbar states loud and clear "No PageRank information available". It's odd!

    Read the article

  • Introducing functional programming constructs in non-functional programming languages

    - by Giorgio
    This question has been going through my mind quite a lot lately and since I haven't found a convincing answer to it I would like to know if other users of this site have thought about it as well. In the recent years, even though OOP is still the most popular programming paradigm, functional programming is getting a lot of attention. I have only used OOP languages for my work (C++ and Java) but I am trying to learn some FP in my free time because I find it very interesting. So, I started learning Haskell three years ago and Scala last summer. I plan to learn some SML and Caml as well, and to brush up my (little) knowledge of Scheme. Well, a lot of plans (too ambitious?) but I hope I will find the time to learn at least the basics of FP during the next few years. What is important for me is how functional programming works and how / whether I can use it for some real projects. I have already developed small tools in Haskell. In spite of my strong interest for FP, I find it difficult to understand why functional programming constructs are being added to languages like C#, Java, C++, and so on. As a developer interested in FP, I find it more natural to use, say, Scala or Haskell, instead of waiting for the next FP feature to be added to my favourite non-FP language. In other words, why would I want to have only some FP in my originally non-FP language instead of looking for a language that has a better support for FP? For example, why should I be interested to have lambdas in Java if I can switch to Scala where I have much more FP concepts and access all the Java libraries anyway? Similarly: why do some FP in C# instead of using F# (to my knowledge, C# and F# can work together)? Java was designed to be OO. Fine. I can do OOP in Java (and I would like to keep using Java in that way). Scala was designed to support OOP + FP. Fine: I can use a mix of OOP and FP in Scala. Haskell was designed for FP: I can do FP in Haskell. If I need to tune the performance of a particular module, I can interface Haskell with some external routines in C. But why would I want to do OOP with just some basic FP in Java? So, my main point is: why are non-functional programming languages being extended with some functional concept? Shouldn't it be more comfortable (interesting, exciting, productive) to program in a language that has been designed from the very beginning to be functional or multi-paradigm? Don't different programming paradigms integrate better in a language that was designed for it than in a language in which one paradigm was only added later? The first explanation I could think of is that, since FP is a new concept (it isn't new at all, but it is new for many developers), it needs to be introduced gradually. However, I remember my switch from imperative to OOP: when I started to program in C++ (coming from Pascal and C) I really had to rethink the way in which I was coding, and to do it pretty fast. It was not gradual. So, this does not seem to be a good explanation to me. Or can it be that many non-FP programmers are not really interested in understanding and using functional programming, but they find it practically convenient to adopt certain FP-idioms in their non-FP language? IMPORTANT NOTE Just in case (because I have seen several language wars on this site): I mentioned the languages I know better, this question is in no way meant to start comparisons between different programming languages to decide which is better / worse. 
Also, I am not interested in a comparison of OOP versus FP (pros and cons). The point I am interested in is to understand why FP is being introduced one bit at a time into existing languages that were not designed for it even though there exist languages that were / are specifically designed to support FP.
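
    For readers who have not seen what "a little FP in a non-FP language" looks like in practice, here is a small illustrative Java 8 sketch (an editorial addition, not Giorgio's code): the same filtering written first in the imperative style and then with the lambda/stream constructs the question is about.

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;
        import java.util.stream.Collectors;

        public class FpInJava {
            public static void main(String[] args) {
                List<String> names = Arrays.asList("Ada", "Alan", "Grace", "Alonzo");

                // Imperative, "classic Java" style: explicit loop and mutation.
                List<String> startsWithAImperative = new ArrayList<>();
                for (String n : names) {
                    if (n.startsWith("A")) {
                        startsWithAImperative.add(n);
                    }
                }

                // The same thing with the FP constructs added in Java 8:
                // a lambda passed to a higher-order function, no mutation in sight.
                List<String> startsWithAFunctional = names.stream()
                        .filter(n -> n.startsWith("A"))
                        .collect(Collectors.toList());

                System.out.println(startsWithAImperative);
                System.out.println(startsWithAFunctional);
            }
        }

    Whether sprinkling such constructs into Java is preferable to moving to Scala or Haskell is exactly the judgement call the question asks about.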

    Read the article

  • Google, typography, and cognitive fluency for persuasion

    - by Roger Hart
    Cognitive fluency is - roughly - how easy it is to think about something. Mere Exposure (or familiarity) effects are basically about reacting more favourably to things you see a lot. Which is part of why marketers in generic spaces like insipid mass-market lager will spend quite so much money on getting their logo daubed about the place; or that guy at the bus stop starts to look like a dating prospect after a month or two. Recent thinking suggests that exposure effects likely spin off cognitive fluency. We react favourably to things that are easier to think about.

    I had to give tech support to an older relative recently, and suggested they Google the problem. They were confused. They could not, apparently, Google the problem, because part of it was that their Google toolbar had mysteriously vanished. Once I'd finished trying not to laugh, I started thinking about typography. This is going somewhere, I promise. Google is a ubiquitous brand. Heck, it's a verb, and their recent, jaw-droppingly well constructed Paris advert is more or less about that ubiquity. It trades on Google's integration into any information-seeking behaviour. But, as my tech support encounter suggests, people settle into comfortable patterns of thinking about things. They build schemas, and altering them can take work. Maybe the ubiquity even works to cement that.

    Alongside their online effort, Google is running billboard campaigns to advertise Chrome, a free product in a crowded space. They are running these ads in some kind of kooky Calibri / Comic Sans hybrid. Now, at first it seems odd that one of the world's more ubiquitous brands needs to run a big print campaign in public places - surely they have all the fluency they need? Well, not so much. Chrome, after all, is not the same as their core product, so there's some basic awareness work to do, and maybe a whole new batch of exposure effect to try and grab. But why the typeface? It's heavily foregrounded, and the ads are extremely textual. Plus, don't we all know that jovial, off-beat fonts look unprofessional, or something? There's a whole bunch of people who want (often rightly) to ban Comic Sans. I wonder, though: are Google trying to subtly disrupt cognitive fluency?

    There's an interesting paper (pdf) about - among other things - the effects of typography on the way people answer survey questions. Participants given the slightly harder to read question gave more abstract answers. The paper references other work suggesting that generally speaking, less-fluent question framing elicits more considered answers. The Chrome ad typeface is less fluent for print. Reactions may therefore be more considered, abstract, and disruptive. Is that, in fact, what Google need? They have brand ubiquity, but they want here to change accustomed behaviour, to get people to think about changing their browser. Is this actually a very elegant piece of persuasive information design? If you think about their "what is a browser?" vox pop research video, there's certainly a perceptual barrier they're going to have to tackle somehow.

    Read the article

  • Assign highest priority to my local repository

    - by Anwar Shah
    Original question was : "How to assign highest priority to local repository without using sources.list file" I have setup a local repository with packages I downloaded. I use it to avoid downloading the same packages over the Internet, when I need to reinstall my Ubuntu. It is a basic repository, created with apt-ftparchive packages . > Packages. I made this a trusted repository to avoid "unauthenticated repository" warning. (When you have a untrusted repository, apt or synaptic try to download the same packages over the Internet, 'cause it is trusted). I have been using this local repository for at least 1 years. But I have to always put my local repository line at the top of the sources.list file to use this. But this is annoying, since I must open a terminal and do some typing on it every time I reinstall Ubuntu, though there is a better tool software-properties-gtk. I cannot use this tool since it place the source line at the end of `sources.list. And the real problem is that, the apt or synaptic always download a package from the source which is mentioned earlier, without inspecting whether the packages are already available in the local repository. So, I have no choice but to place the local source at the top of sources.list doing terminal (I actually don't hate terminal, but I need a solution) . I have tried this method. But this does not help me. My preference file is this in /etc/apt/preferences.d/local-pin-900 Package: * Pin: release o=Local,n=ubuntu-local Pin-Priority: 900 My release file is this Origin: Local Label: Local-Ubuntu Description: Local Ubuntu Repository Codename: ubuntu-local MD5Sum: ed43222856d18f389c637ac3d7dd6f85 1043412 Packages d41d8cd98f00b204e9800998ecf8427e 0 Sources When I enable the apt-preference, the apt-cache policy correctly shows the preference, e.g. It shows the local repository has the highest priority. But when I do this sudo apt-get install <package-name>, apt tries to download it from Internet. But when I place my local-repo at the top, it installs from local repository. So, My question is - 'Is it possible to force apt to use local repository when the package is available in local repository, without explicitly placing "the local source" at the top of my repository list (e.g sources.list file) ?' Edit: output of apt-cache policy $package_name is as follows nautilus-wipe: Installed: (none) Candidate: 0.1.1-2 Version table: 0.1.1-2 0 500 http://archive.ubuntu.com/ubuntu/ precise/universe i386 Packages 900 file:/media/Main/Linux-Software/Ubuntu/Precise/ Packages It is showing that my local repository has higher preference, though it is not the one which comes first in sources.list file. Here is the output of apt-get install nautilus-wipe Reading package lists... Done Building dependency tree Reading state information... Done The following NEW packages will be installed: nautilus-wipe 0 upgraded, 1 newly installed, 0 to remove and 131 not upgraded. Need to get 30.7 kB of archives. After this operation, 150 kB of additional disk space will be used. 'http://archive.ubuntu.com/ubuntu/pool/universe/n/nautilus-wipe/nautilus-wipe_0.1.1-2_i386.deb' nautilus-wipe_0.1.1-2_i386.deb 30730 MD5Sum:7d497b8dfcefe1c0b51a45f3b0466994 It is still trying to get the file from Internet, though I think it should be happy with the local one.

    Read the article

  • To sample or not to sample...

    - by [email protected]
    Ideally, we would know the exact answer to every question. How many people support presidential candidate A vs. B? How many people suffer from H1N1 in a given state? Does this batch of manufactured widgets have any defective parts? Knowing exact answers is expensive in terms of time and money and, in most cases, is impractical if not impossible. Consider asking every person in a region for their candidate preference, testing every person with flu symptoms for H1N1 (assuming every person reported when they had flu symptoms), or destructively testing widgets to determine if they are "good" (leaving no product to sell). Knowing exact answers, fortunately, isn't necessary or even useful in many situations. Understanding the direction of a trend or statistically significant results may be sufficient to answer the underlying question: who is likely to win the election, have we likely reached a critical threshold for flu, or is this batch of widgets good enough to ship? Statistics help us to answer these questions with a certain degree of confidence. This focuses on how we collect data. In data mining, we focus on the use of data, that is data that has already been collected. In some cases, we may have all the data (all purchases made by all customers), in others the data may have been collected using sampling (voters, their demographics and candidate choice). Building data mining models on all of your data can be expensive in terms of time and hardware resources. Consider a company with 40 million customers. Do we need to mine all 40 million customers to get useful data mining models? The quality of models built on all data may be no better than models built on a relatively small sample. Determining how much is a reasonable amount of data involves experimentation. When starting the model building process on large datasets, it is often more efficient to begin with a small sample, perhaps 1000 - 10,000 cases (records) depending on the algorithm, source data, and hardware. This allows you to see quickly what issues might arise with choice of algorithm, algorithm settings, data quality, and need for further data preparation. Instead of waiting for a model on a large dataset to build only to find that the results don't meet expectations, once you are satisfied with the results on the initial sample, you can  take a larger sample to see if model quality improves, and to get a sense of how the algorithm scales to the particular dataset. If model accuracy or quality continues to improve, consider increasing the sample size. Sampling in data mining is also used to produce a held-aside or test dataset for assessing classification and regression model accuracy. Here, we reserve some of the build data (data that includes known target values) to be used for an honest estimate of model error using data the model has not seen before. This sampling transformation is often called a split because the build data is split into two randomly selected sets, often with 60% of the records being used for model building and 40% for testing. Sampling must be performed with care, as it can adversely affect model quality and usability. Even a truly random sample doesn't guarantee that all values are represented in a given attribute. This is particularly troublesome when the attribute with omitted values is the target. A predictive model that has not seen any examples for a particular target value can never predict that target value! 
For other attributes, values may consist of a single value (a constant attribute) or all unique values (an identifier attribute), each of which may be excluded during mining. Values from categorical predictor attributes that didn't appear in the training data are not used when testing or scoring datasets. In subsequent posts, we'll talk about three sampling techniques using Oracle Database: simple random sampling without replacement, stratified sampling, and simple random sampling with replacement.
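
    To make the sampling ideas concrete, here is a small illustrative snippet added editorially (it is not from the post, and is not necessarily one of the three techniques the author goes on to cover; the table and column names are hypothetical). It shows Oracle's SAMPLE clause for a quick random subset and an ORA_HASH-based 60/40 build/test split:

        -- Roughly 1% random sample of a large table, useful for a first modeling pass
        SELECT *
        FROM customers SAMPLE (1);

        -- Repeatable 60/40 split into build and test sets using a hash of the key
        SELECT *
        FROM customers
        WHERE ORA_HASH(customer_id, 99) < 60;   -- ~60% build set

        SELECT *
        FROM customers
        WHERE ORA_HASH(customer_id, 99) >= 60;  -- ~40% held-aside test set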

    Read the article

  • SQL SERVER – Solution of Puzzle – Swap Value of Column Without Case Statement

    - by pinaldave
    Earlier this week I asked a question where I asked how to Swap Values of the column without using CASE Statement. Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I have proposed 3 different solutions in the blog posts itself. I had requested the help of the community to come up with alternate solutions and honestly I am stunned and amazed by the qualified entries. I will be not able to cover every single solution which is posted as a comment, however, I would like to for sure cover few interesting entries. However, I am selecting 5 solutions which are different (not necessary they are most optimal or best – just different and interesting). Just for clarity I am involving the original problem statement here. USE tempdb GO CREATE TABLE SimpleTable (ID INT, Gender VARCHAR(10)) GO INSERT INTO SimpleTable (ID, Gender) SELECT 1, 'female' UNION ALL SELECT 2, 'male' UNION ALL SELECT 3, 'male' GO SELECT * FROM SimpleTable GO -- Insert Your Solutions here -- Swap value of Column Gender SELECT * FROM SimpleTable GO DROP TABLE SimpleTable GO Here are the five most interesting and different solutions I have received. Solution by Roji P Thomas UPDATE S SET S.Gender = D.Gender FROM SimpleTable S INNER JOIN SimpleTable D ON S.Gender != D.Gender I really loved the solutions as it is very simple and drives the point home – elegant and will work pretty much for any values (not necessarily restricted by the option in original question ‘male’ or ‘female’). Solution by Aneel CREATE TABLE #temp(id INT, datacolumn CHAR(4)) INSERT INTO #temp VALUES(1,'gent'),(2,'lady'),(3,'lady') DECLARE @value1 CHAR(4), @value2 CHAR(4) SET @value1 = 'lady' SET @value2 = 'gent' UPDATE #temp SET datacolumn = REPLACE(@value1 + @value2,datacolumn,'') Aneel has very interesting solution where he combined both the values and replace the original value. I personally liked this creativity of the solution. Solution by SIJIN KUMAR V P UPDATE SimpleTable SET Gender = RIGHT(('fe'+Gender), DIFFERENCE((Gender),SOUNDEX(Gender))*2) Sijin has amazed me with Difference and Soundex function. I have never visualized that above two functions can resolve the problem. Hats off to you Sijin. Solution by Nikhildas UPDATE St SET St.Gender = t.Gender FROM SimpleTable St CROSS Apply (SELECT DISTINCT gender FROM SimpleTable WHERE St.Gender != Gender) t I was expecting that someone will come up with this solution where they use CROSS APPLY. This is indeed very neat and for sure interesting exercise. If you do not know how CROSS APPLY works this is the time to learn. Solution by mistermagooo UPDATE SimpleTable SET Gender=X.NewGender FROM (VALUES('male','female'),('female','male')) AS X(OldGender,NewGender) WHERE SimpleTable.Gender=X.OldGender As per author this is a slow solution but I love how syntaxes are placed and used here. I love how he used syntax here. I will say this is the most beautifully written solution (not necessarily it is best). Bonus: Solution by Madhivanan Somehow I was confident Madhi – SQL Server MVP will come up with something which I will be compelled to read. He has written a complete blog post on this subject and I encourage all of you to go ahead and read it. Now personally I wanted to list every single comment here. There are some so good that I am just amazed with the creativity. I will write a part of this blog post in future. However, here is the challenge for you. Challenge: Go over 50+ various solutions listed to the simple problem here. Here are my two asks for you. 
1) Pick your best solution and list here in the comment. This exercise will for sure teach us one or two things. 2) Write your own solution which is yet not covered already listed 50 solutions. I am confident that there is no end to creativity. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How Exactly Is One Linux OS “Based On” Another Linux OS?

    - by Jason Fitzpatrick
    When reviewing different flavors of Linux, you’ll frequently come across phrases like “Ubuntu is based on Debian” but what exactly does that mean? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader PLPiper is trying to get a handle on how Linux variants work: I’ve been looking through quite a number of Linux distros recently to get an idea of what’s around, and one phrase that keeps coming up is that “[this OS] is based on [another OS]“. For example: Fedora is based on Red Hat Ubuntu is based on Debian Linux Mint is based on Ubuntu For someone coming from a Mac environment I understand how “OS X is based on Darwin”, however when I look at Linux Distros, I find myself asking “Aren’t they all based on Linux..?” In this context, what exactly does it mean for one Linux OS to be based on another Linux OS? So, what exactly does it mean when we talk about one version of Linux being based off another version? The Answer SuperUser contributor kostix offers a solid overview of the whole system: Linux is a kernel — a (complex) piece of software which works with the hardware and exports a certain Application Programming Interface (API), and binary conventions on how to precisely use it (Application Binary Interface, ABI) available to the “user-space” applications. Debian, RedHat and others are operating systems — complete software environments which consist of the kernel and a set of user-space programs which make the computer useful as they perform sensible tasks (sending/receiving mail, allowing you to browse the Internet, driving a robot etc). Now each such OS, while providing mostly the same software (there are not so many free mail server programs or Internet browsers or desktop environments, for example) differ in approaches to do this and also in their stated goals and release cycles. Quite typically these OSes are called “distributions”. This is, IMO, a somewhat wrong term stemming from the fact you’re technically able to build all the required software by hand and install it on a target machine, so these OSes distribute the packaged software so you either don’t need to build it (Debian, RedHat) or they facilitate such building (Gentoo). They also usually provide an installer which helps to install the OS onto a target machine. Making and supporting an OS is a very complicated task requiring a complex and intricate infrastructure (upload queues, build servers, a bug tracker, and archive servers, mailing list software etc etc etc) and staff. This obviously raises a high barrier for creating a new, from-scratch OS. For instance, Debian provides ca. 37k packages for some five hardware architectures — go figure how much work is put into supporting this stuff. Still, if someone thinks they need to create a new OS for whatever reason, it may be a good idea to use an existing foundation to build on. And this is exactly where OSes based on other OSes come into existence. For instance, Ubuntu builds upon Debian by just importing most packages from it and repackaging only a small subset of them, plus packaging their own, providing their own artwork, default settings, documentation etc. Note that there are variations to this “based on” thing. 
For instance, Debian fosters the creation of “pure blends” of itself: distributions which use Debian rather directly, and just add a bunch of packages and other stuff only useful for rather small groups of users such as those working in education or medicine or the music industry, etc. Another twist is that not all these OSes are based on Linux. For instance, Debian also provides FreeBSD and Hurd kernels. They have quite tiny user groups, but anyway. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
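
    One quick way to see this "based on" relationship on a running system (an editorial addition, not part of kostix's answer) is to look at /etc/os-release, where derivative distributions typically declare their parent in the ID_LIKE field. The listings below are approximations and vary by release:

        $ cat /etc/os-release        # on an Ubuntu system
        NAME="Ubuntu"
        ID=ubuntu
        ID_LIKE=debian
        ...

        $ cat /etc/os-release        # on a Linux Mint system
        NAME="Linux Mint"
        ID=linuxmint
        ID_LIKE="ubuntu debian"
        ...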

    Read the article

  • Exception Handling Frequency/Log Detail

    - by Cyborgx37
    I am working on a fairly complex .NET application that interacts with another application. Many single-line statements are possible culprits for throwing an Exception and there is often nothing I can do to check the state before executing them to prevent these Exceptions. The question is, based on best practices and seasoned experience, how frequently should I lace my code with try/catch blocks? I've listed three examples below, but I'm open to any advice. I'm really hoping to get some pros/cons of various approaches. I can certainly come up with some of my own (greater log granularity for the O-C approach, better performance for the Monolithic approach), so I'm looking for experience over opinion. EDIT: I should add that this application is a batch program. The only "recovery" necessary in most cases is to log the error, clean up gracefully, and quit. So this could be seen to be as much a question of log granularity as exception handling. In my mind's eye I can imagine good reasons for both, so I'm looking for some general advice to help me find an appropriate balance. Monolitich Approach class Program{ public static void Main(){ try{ Step1(); Step2(); Step3(); } catch (Exception e) { Log(e); } finally { CleanUp(); } } public static void Step1(){ ExternalApp.Dangerous1(); ExternalApp.Dangerous2(); } public static void Step2(){ ExternalApp.Dangerous3(); ExternalApp.Dangerous4(); } public static void Step3(){ ExternalApp.Dangerous5(); ExternalApp.Dangerous6(); } } Delegated Approach class Program{ public static void Main(){ try{ Step1(); Step2(); Step3(); } finally { CleanUp(); } } public static void Step1(){ try{ ExternalApp.Dangerous1(); ExternalApp.Dangerous2(); } catch (Exception e) { Log(e); throw; } } public static void Step2(){ try{ ExternalApp.Dangerous3(); ExternalApp.Dangerous4(); } catch (Exception e) { Log(e); throw; } } public static void Step3(){ try{ ExternalApp.Dangerous5(); ExternalApp.Dangerous6(); } catch (Exception e) { Log(e); throw; } } } Obsessive-Compulsive Approach class Program{ public static void Main(){ try{ Step1(); Step2(); Step3(); } finally { CleanUp(); } } public static void Step1(){ try{ ExternalApp.Dangerous1(); } catch (Exception e) { Log(e); throw; } try{ ExternalApp.Dangerous2(); } catch (Exception e) { Log(e); throw; } } public static void Step2(){ try{ ExternalApp.Dangerous3(); } catch (Exception e) { Log(e); throw; } try{ ExternalApp.Dangerous4(); } catch (Exception e) { Log(e); throw; } } public static void Step3(){ try{ ExternalApp.Dangerous5(); } catch (Exception e) { Log(e); throw; } try{ ExternalApp.Dangerous6(); } catch (Exception e) { Log(e); throw; } } } Other approaches welcomed and encouraged. Above are examples only.

    Read the article

  • SQL SERVER – Discard Results After Query Execution – SSMS

    - by pinaldave
    The first thing I do any day is to turn on the computer. Today I woke up and as soon as I turned on the computer I saw a chat message from a friend. He was a bit confused and wanted me to help him. Just as usual I am keeping the relevant conversation in focus and documenting our conversation as chat. Let us call him Ajit. Ajit: Pinal, every time I run a query there is no result displayed in the SSMS but when I run the query in my application it works and returns an appropriate result. Pinal:  Have you tried with different parameters? Ajit: Same thing. However, it works from another computer when I connect to the same server with the same query parameters? Pinal: What? That is new and I believe it is something to do with SSMS and not with the server. Send me screenshot please. Ajit: I believe so, let me send you a screenshot, Pinal: (looking at the screenshot) Oh man, there is no result-tab at all. Ajit: That is what the problem is. It does not have the tab which displays the result. This works just fine from another computer. Pinal: Have you referred Nakul’s blog post – SSMS – Query result options – Discard result after query executes, that talks about setting which can discard the query results after execution. (After a while) Ajit: I think it seems like on the computer where I am running the query my SSMS seems to have the option enabled related to discarding results. I fixed it by following Nakul’s blog post. Pinal: Great! Quite often I get the question what is the importance of the feature. Let us first see how to turn on or turn off this feature in SQL Server Management Studio 2012. In SSMS 2012 go to Tools >> Options >> Query Results > SQL Server >> Results to Grid >> Discard Results After Query Execution. When enabled this option will discard results after the execution. The advantage of disabling the option is that it will improve the performance by using less memory. However the real question is why would someone enable or disable the option. What are the cases when someone wants to run the query but do not care about the result? Matter of the fact, it does not make sense at all to run query and not care about the result. The matter of the fact, I can see quite a few reasons for using this option. I often enable this option when I am doing performance tuning exercise. During performance tuning exercise when I am working with execution plans and do not need results to verify every time or when I am tuning Indexes and its effect on execution plan I do not need the results. In this kind of situations I do keep this option on and discard the results. It always helps me big time as in most of the performance tuning exercise I am dealing with huge amount of the data and dealing with this data can be expensive. Nakul’s has done the experiment here already but I am going to repeat the same again using AdventureWorks Database. Run following T-SQL Script with and without enabling the option to discard the results. USE AdventureWorks2012 GO SELECT * FROM Sales.SalesOrderDetail GO 10 After enabling Discard Results After Query Execution After disabling Discard Results After Query Execution Well, this is indeed a good option when someone is debugging the execution plan or does not want the result to be displayed. Please note that this option does not reduce IO or CPU usage for SQL Server. It just discards the results after execution and a good help for debugging on the development server. 
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Migrating SQL Server Compact Edition (SQL CE) database to SQL Server using Web Matrix

    - by Harish Ranganathan
    One of the things that is keeping us busy is the Web Camps we are delivering across 5 cities.  If you are a reader of this blog and also attended one of these web camps, there is a good chance that you have seen me, since I have been at all of the locations so far.  The topics that we cover include Visual Studio 2010 SP1, SQL CE, ASP.NET MVC & HTML5.  Whenever I talk about SQL CE, the immediate response is that people are wowed that Microsoft has shipped a FREE compact edition database, an embedded database that can be x-copy deployed.  You might think, well, didn’t Microsoft ship SQL Express, which is also FREE?  The difference is that SQL Express runs as a service on the machine (if you open SQL Configuration Manager, you will notice that SQL Express runs as a service alongside your SQL Server engine, if you have installed it).  This means that even if you are willing to use SQL Express when you deploy your application, it needs to be installed on the production machine (hosting provider) and it needs to run as a service, and many hosters don’t allow such services to run on their space.
    SQL CE, on the other hand, is an x-copy deployable database: just a few DLLs are required to run it, and they don’t even need to be installed in the GAC on the production machine.  In fact, if you have Visual Studio 2010 SP1 installed, you can use the “Add Deployable Dependencies” option in Project Properties, and it will detect that SQL CE is something you probably want to add as a deployable dependency for your project.  With that, it bundles the required DLLs in the “_bin_deployableAssemblies” folder, so your project can be x-copy deployed and just works.  However, SQL CE has a 4GB storage limit.  Real-world applications often require more than 4GB of data storage, and it often turns out that people would like to use SQL CE for the development/ramp-up stages but migrate to a full-fledged SQL Server after a while.  So it is only natural that the question arises: “How do I move my SQL CE database to SQL Server?”  Honestly, the answer does not come across as straightforward.  I was talking to Ambrish Mishra (PM in the SQL CE team, Hyderabad), since I got this question in almost all the places where we talked about SQL CE, and he was kind enough to demonstrate how it can be accomplished using Web Matrix.
    Open Web Matrix (it can be installed for free from www.microsoft.com/web) and click on “Site from Template”.  Click on the “Bakery” template (since by default it uses a SQL CE database and has all the required sample data) and click “Ok”.  In the project, navigate to the Database tab and you will find that the Bakery site uses a SQL CE database, “bakery.sdf”.  Select “bakery.sdf” and you will see the “Migrate” button on the top right.  Once you click the “Migrate” button, a popup wizard opens, configured by default for SQL Express.  You can edit it to point to your local SQL Server instance or to a remote server.  Upon filling in the Server Name, Username and Password and clicking “Ok”, a couple of things happen.  1. The database is migrated to SQL Server (local or remote – subject to permissions on the remote server); you can open SQL Server Management Studio and connect to the server to verify that the “bakery” database exists under the “Databases” node.  2. In Web Matrix, when you navigate to the “Files” tab and open the web.config file, the connection string now points to the SQL Server instance (yes, the Migrate button is smart enough to make this change too).  And there it is, your SQL Server Compact Edition database, now migrated to SQL Server!!  In a future post, I will explain the steps involved when using Visual Studio.  Cheers !!!
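    For reference, the change the Migrate button makes is essentially a connection-string swap in web.config. The snippet below is only an illustrative sketch; the connection string name, server name, credentials and exact entries are placeholders and assumptions, not taken from the article:

        <connectionStrings>
          <!-- Before migration: file-based SQL CE database -->
          <add name="bakery"
               connectionString="Data Source=|DataDirectory|\bakery.sdf"
               providerName="System.Data.SqlServerCe.4.0" />

          <!-- After migration: points at the SQL Server instance entered in the wizard -->
          <add name="bakery"
               connectionString="Data Source=MYSERVER;Initial Catalog=bakery;User ID=webuser;Password=*****"
               providerName="System.Data.SqlClient" />
        </connectionStrings>

    (Only one of the two entries would exist at a time; they are shown together here just to contrast the before and after.)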

    Read the article

  • 2D Rendering with OpenGL ES 2.0 on Android (matrices not working)

    - by TranquilMarmot
    So I'm trying to render two moving quads, each at a different location. My shaders are as simple as possible (vertices are only transformed by the modelview-projection matrix, and there's only one color). Whenever I try to render something, I only end up with slivers of color! I've only done 3D rendering in OpenGL before, so I'm having issues with 2D stuff. Here's my basic rendering loop, simplified a bit (I'm using the matrix manipulation methods provided by android.opengl.Matrix, and program is a custom class I created that just calls GLES20.glUniformMatrix4fv()):

        Matrix.orthoM(projection, 0, 0, windowWidth, 0, windowHeight, -1, 1);
        program.setUniformMatrix4f("Projection", projection);

    At this point, I render the quads (this is repeated for each quad):

        Matrix.setIdentityM(modelview, 0);
        Matrix.translateM(modelview, 0, quadX, quadY, 0);
        program.setUniformMatrix4f("ModelView", modelview);
        quad.render(); // calls glDrawArrays

    and all I see is a sliver of the color each quad is! I'm at my wits' end here; I've tried everything I can think of and I'm at the point where I'm screaming at my computer and tossing phones across the room. Anybody got any pointers? Am I using ortho wrong? I'm 100% sure I'm rendering everything at a Z value of 0. I tried using frustumM instead of orthoM, which made it so that I could see the quads, but they would get totally skewed whenever they moved, which makes sense if I correctly understand the way a frustum works (it's more for 3D rendering anyway). If it makes any difference, I defined my viewport with GLES20.glViewport(0, 0, windowWidth, windowHeight); where windowWidth and windowHeight are the same values that are passed to orthoM. It might be worth noting that the android.opengl.Matrix methods take an offset as the second parameter so that multiple matrices can be shoved into one array; that's what the first 0 is for. For reference, here's my vertex shader code:

        uniform mat4 ModelView;
        uniform mat4 Projection;
        attribute vec4 vPosition;
        void main() {
            mat4 mvp = Projection * ModelView;
            gl_Position = vPosition * mvp;
        }

    I tried swapping Projection * ModelView with ModelView * Projection, but now I just get some really funky looking shapes... EDIT: Okay, I finally figured it out! (Note: since I'm new here (longtime lurker!) I can't answer my own question for a few hours, so as soon as I can I'll move this into an actual answer to the question.) I changed

        Matrix.orthoM(projection, 0, 0, windowWidth, 0, windowHeight, -1, 1);

    to

        float ratio = windowWidth / windowHeight;
        Matrix.orthoM(projection, 0, 0, ratio, 0, 1, -1, 1);

    I then had to scale my projection matrix to make it a lot smaller with Matrix.scaleM(projection, 0, 0.05f, 0.05f, 1.0f);. I then added an offset to the modelview translations to simulate a camera so that I could center on my action (so Matrix.translateM(modelview, 0, quadX, quadY, 0); was changed to Matrix.translateM(modelview, 0, quadX + camX, quadY + camY, 0);). Thanks for the help, all!
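    One more thing that may be worth double-checking (an editorial aside, not part of the original post): android.opengl.Matrix builds matrices intended to multiply column vectors, so the conventional GLSL form puts the matrix on the left of the vertex. A minimal variant of the shader above, under that assumption:

        uniform mat4 ModelView;
        uniform mat4 Projection;
        attribute vec4 vPosition;
        void main() {
            // matrix on the left, vertex (as a column vector) on the right
            gl_Position = Projection * ModelView * vPosition;
        }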

    Read the article

  • How can we improve overall Programmer Education & Training?

    - by crosenblum
    Last week, I was watching this amazing interview by Kevin Rose of Philip Rosedale, of Second Life. They had an amazing discussion about how to find, hire and identify good programmers, and how hard it is to find good ones. It has led me to really think about the way we programmers learn and are taught. The majority of us, myself included, are self-taught. That is the great thing about being a programmer: anyone can learn and develop skills. But it also means that there is no real standard for what a good programmer is, or for what kinds of environments encourage the growth of programming skills. This isn't so much a question as a desire in me to see how we can change the culture of programming, and the managers of programmers, so that education and self-improvement are encouraged. There are a lot of avenues for continued education (YouTube videos, books, conferences), but because of the experiential nature of what we do, it isn't always clear what's important to learn and to master. Let's look at the Joel Test's twelve questions:

        1. Do you use source control?
        2. Can you make a build in one step?
        3. Do you make daily builds?
        4. Do you have a bug database?
        5. Do you fix bugs before writing new code?
        6. Do you have an up-to-date schedule?
        7. Do you have a spec?
        8. Do programmers have quiet working conditions?
        9. Do you use the best tools money can buy?
        10. Do you have testers?
        11. Do new candidates write code during their interview?
        12. Do you do hallway usability testing?

    I think all of these have important value, but because of something I call the Experiential Gap, if a programmer or manager has never experienced the negative consequences of not doing the items on the list, they will never see the need to do any of them. The Experiential Gap is my basic theory that each of us has different jobs and different experiences. So for some of us, who have always worked with dozens of programmers, source control is a must-have. But people who have always been the only programmer cannot imagine the need for source control. And it's because of this major flaw in how we learn that we evaluate people by which best practices they do or don't follow, and the reasons for either can start a flame war. We always evaluate people in our field by what they do, and think, "Oh, if this guy/gal isn't doing xyz best practice, he/she can't be a good programmer, so let's not waste time or energy talking to them." This is exactly why we have so many programming flame wars: because of the Experiential Gap, we can't imagine people not having made the decisions that we have had to make. So this has led me to think that we totally need to rethink how we train, educate and manage programmers. For example, what percentage of you have been encouraged by your managers to go to conferences, and even had them pay for it? For me, and a lot of people, this is extremely rare; a lot of us would love to go to conferences to learn more, but the money just isn't there. So the point of this question is really to spark a discussion: how can we train, learn and manage better? How can we create a new culture of learning that doesn't insult people for not having the same job experiences? Yes, we all have jobs and work to do, but our ability to do our jobs well depends on our desire, interest and support in improving the mastery of our skills.
    Right now, I see our culture as rather disorganized: we support the elite, but the tons of us who want to get better just don't have enough support to learn and improve ourselves. I mean, do we as an industry want to be perceived as just replaceable cogs? Thank you...

    Read the article

  • SQL SERVER – Updating Data in A Columnstore Index

    - by pinaldave
    So far I have written two articles on Columnstore Indexes, and both of them attracted a very interesting readership. In fact, just recently I got a query about my previous article on the Columnstore Index. Read the following two articles to get familiar with the Columnstore Index; they give the background to the question asked by a certain reader: SQL SERVER – Fundamentals of Columnstore Index and SQL SERVER – How to Ignore Columnstore Index Usage in Query. Here is the reader's question: "When I tried to update my table after creating the Columnstore index, it gives me an error. What should I do?" When a Columnstore index is created on a table, the table becomes a read-only table and does not allow any insert/update/delete on it. The basic understanding is that a Columnstore Index is created on a table that is very huge and holds lots of data; if a table is small enough, there is no need to create a Columnstore index, as a regular index should be enough. The reason the Columnstore index was needed is that the table was so big that retrieving the data was taking a really, really long time. Now, updating such a huge table is always a challenge in itself. If a Columnstore Index is created on the table and the table needs to be updated, you should know that there are various ways to update it. The easiest way is to disable the index and then enable it again. Consider the following code:

        USE AdventureWorks
        GO
        -- Create New Table
        CREATE TABLE [dbo].[MySalesOrderDetail](
            [SalesOrderID] [int] NOT NULL,
            [SalesOrderDetailID] [int] NOT NULL,
            [CarrierTrackingNumber] [nvarchar](25) NULL,
            [OrderQty] [smallint] NOT NULL,
            [ProductID] [int] NOT NULL,
            [SpecialOfferID] [int] NOT NULL,
            [UnitPrice] [money] NOT NULL,
            [UnitPriceDiscount] [money] NOT NULL,
            [LineTotal] [numeric](38, 6) NOT NULL,
            [rowguid] [uniqueidentifier] NOT NULL,
            [ModifiedDate] [datetime] NOT NULL
        ) ON [PRIMARY]
        GO
        -- Create clustered index
        CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
        ( [SalesOrderDetailID] )
        GO
        -- Create Sample Data
        -- WARNING: This query may run up to 2-10 minutes based on your system's resources
        INSERT INTO [dbo].[MySalesOrderDetail]
        SELECT S1.*
        FROM Sales.SalesOrderDetail S1
        GO 100
        -- Create ColumnStore Index
        CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
        ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID)
        GO
        -- Attempt to Update the table
        UPDATE [dbo].[MySalesOrderDetail]
        SET OrderQty = OrderQty + 1
        WHERE [SalesOrderID] = 43659
        GO
        /* It will throw the following error:
           Msg 35330, Level 15, State 1, Line 2
           UPDATE statement failed because data cannot be updated in a table
           with a columnstore index. Consider disabling the columnstore index
           before issuing the UPDATE statement, then rebuilding the columnstore
           index after UPDATE is complete. */

    A similar error also shows up for Insert/Delete operations. Here is the workaround: disable the Columnstore Index, perform the update, then enable (rebuild) the Columnstore Index:

        -- Disable the Columnstore Index
        ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] DISABLE
        GO
        -- Attempt to Update the table
        UPDATE [dbo].[MySalesOrderDetail]
        SET OrderQty = OrderQty + 1
        WHERE [SalesOrderID] = 43659
        GO
        -- Rebuild the Columnstore Index
        ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] REBUILD
        GO

    This time it will not throw an error, and the update of the table goes through successfully.
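    Another option worth noting (a sketch along the same lines, assembled here from the statements above rather than taken from the original article) is to drop the columnstore index, run the DML, and then create the index again; it opens the same read-write window as the DISABLE/REBUILD approach:

        -- Drop the Columnstore Index
        DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
        GO
        -- Update the table
        UPDATE [dbo].[MySalesOrderDetail]
        SET OrderQty = OrderQty + 1
        WHERE [SalesOrderID] = 43659
        GO
        -- Recreate the Columnstore Index
        CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
        ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID)
        GO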
    Let us do a cleanup of our tables using this code:

        -- Cleanup
        DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
        GO
        TRUNCATE TABLE dbo.MySalesOrderDetail
        GO
        DROP TABLE dbo.MySalesOrderDetail
        GO

    In the next post we will see how we can use Partition to update the Columnstore Index. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Merge sort versus quick sort performance

    - by Giorgio
    I have implemented merge sort and quick sort in C (GCC 4.4.3 on Ubuntu 10.04, running on a 4 GB RAM laptop with an Intel DUO CPU at 2GHz) and I wanted to compare the performance of the two algorithms. The prototypes of the sorting functions are:

        void merge_sort(const char **lines, int start, int end);
        void quick_sort(const char **lines, int start, int end);

    i.e. both take an array of pointers to strings and sort the elements with index i such that start <= i <= end. I have produced some files containing random strings with an average length of 4.5 characters. The test files range from 100 lines to 10000000 lines. I was a bit surprised by the results because, even though I know that merge sort has complexity O(n log(n)) while quick sort has worst-case complexity O(n^2), I have often read that on average quick sort should be as fast as merge sort. However, my results are the following. Up to 10000 strings, both algorithms perform equally well; for 10000 strings, both require about 0.007 seconds. For 100000 strings, merge sort is slightly faster with 0.095 s against 0.121 s. For 1000000 strings merge sort takes 1.287 s against 5.233 s for quick sort. For 5000000 strings merge sort takes 7.582 s against 118.240 s for quick sort. For 10000000 strings merge sort takes 16.305 s against 1202.918 s for quick sort. So my question is: are my results as expected, meaning that quick sort is comparable in speed to merge sort for small inputs but, as the size of the input data grows, the fact that its worst-case complexity is quadratic becomes evident? Here is a sketch of what I did. In the merge sort implementation, the partitioning consists of calling merge sort recursively, i.e.

        merge_sort(lines, start, (start + end) / 2);
        merge_sort(lines, 1 + (start + end) / 2, end);

    Merging of the two sorted sub-arrays is performed by reading the data from the array lines and writing it to a global temporary array of pointers (this global array is allocated only once). After each merge the pointers are copied back to the original array. So the strings are stored once, but I need twice as much memory for the pointers. For quick sort, the partition function chooses the last element of the array to sort as the pivot and scans the previous elements in one loop. After it has produced a partition of the type

        start ... {elements <= pivot} ... pivotIndex ... {elements > pivot} ... end

    it calls itself recursively:

        quick_sort(lines, start, pivotIndex - 1);
        quick_sort(lines, pivotIndex + 1, end);

    Note that this quick sort implementation sorts the array in place and does not require additional memory; therefore it is more memory efficient than the merge sort implementation. So my question is: is there a better way to implement quick sort that is worthwhile trying out? If I improve the quick sort implementation and perform more tests on different data sets (computing the average of the running times on different data sets), can I expect better performance of quick sort with respect to merge sort? EDIT: Thank you for your answers. My implementation is in place and is based on the pseudo-code I found on Wikipedia in the section "In-place version": function partition(array, 'left', 'right', 'pivotIndex'), where I choose the last element in the range to be sorted as the pivot, i.e. pivotIndex := right. I have checked the code over and over again and it seems correct to me. In order to rule out the case that I am using the wrong implementation, I have uploaded the source code on GitHub (in case you would like to take a look at it).
Your answers seem to suggest that I am using the wrong test data. I will look into it and try out different test data sets. I will report as soon as I have some results.
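    An editorial sketch of one commonly suggested improvement (this is not the poster's code; the function names and the Lomuto-style scan are assumptions matching the prototypes above): choosing the pivot at random, rather than always taking the last element, usually avoids quadratic behaviour on inputs that are already partly sorted, and for files with many duplicate keys a three-way ("fat") partition helps even more.

        #include <stdlib.h>
        #include <string.h>

        static void swap_ptr(const char **a, const char **b)
        {
            const char *tmp = *a;
            *a = *b;
            *b = tmp;
        }

        /* Lomuto-style partition, but with a randomly chosen pivot.
           Call srand() once at program start so runs differ. */
        static int partition(const char **lines, int start, int end)
        {
            int i, store;
            int r = start + rand() % (end - start + 1);
            swap_ptr(&lines[r], &lines[end]);      /* move the random pivot to the end */

            store = start;
            for (i = start; i < end; i++) {
                if (strcmp(lines[i], lines[end]) <= 0) {
                    swap_ptr(&lines[i], &lines[store]);
                    store++;
                }
            }
            swap_ptr(&lines[store], &lines[end]);  /* put the pivot in its final place */
            return store;
        }

        void quick_sort(const char **lines, int start, int end)
        {
            if (start < end) {
                int p = partition(lines, start, end);
                quick_sort(lines, start, p - 1);
                quick_sort(lines, p + 1, end);
            }
        }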

    Read the article

  • Design suggestions needed to create a MathBuilder framework

    - by Darf Zon
    Let me explain what I'm trying to create. I'm building a framework; the idea is to provide base classes for generating a math problem. Why do I need this framework? Because I realized that every time I create a new math problem I go through the same steps:

        1. Configuration settings, such as number ranges. For example, if I'm developing multiplications, at beginner level the first number is only generated between 2 and 5, while at advanced level it is generated between 6 and 9.
        2. A generate-problem method. Every time, I need to invoke a method like this to generate the problem; it receives the configuration settings, generates the numbers according to them, and builds the object with the respective data.
        3. Validation of the problem. Sometimes the generated problem is not valid. For example, suppose I'm creating fractions in their most simplified form; if I get 2/4, the program should detect that it is not valid and generate another one, such as 1/4.
        4. Loading the view. Each problem has a custom view (please see the images below).
        5. All of the problems must know how to CHECK whether the user's result is correct.
        6. All of these problems have answers. Some require just one answer, others may require more than one, so I need a way to give the developer the flexibility to hold all the answers they want to use.

    At the beginning I started using PRISM. The idea was to generate a module for each math problem and load it in the main system. Those are, I think, the most important aspects of this idea. Let me show some problems which I created in a standalone WPF program. Here I have a math problem about areas. When I generate the problem, I set the object on the view and it draws it. At beginner level, I set in the configuration settings that only square types should be loaded, but at advanced level it can load triangles and squares randomly. In this other one, I generate a binary problem such as addition, subtraction, multiplication or division. The above generates just a single problem; the idea is also to show a test or quiz, I mean a worksheet (this is what I call a collection of problems) that the user can answer. I hope you get the idea despite my ugly drawing. How do I load these math problems? As I said above, I started using PRISM, and each module contains one kind of math problem. This is a snapshot of my first demo: below, it shows the modules loaded, and in the center the respective configurations or levels. Up to this moment, I have no idea how to start creating this software. I just know that I need a question/problem class, a response class and a user class, but I get lost about what properties they should contain. Please give me a little hand with this framework. I put much effort into this question, so if anything isn't clear, let me know and I will clarify it.
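    As an editorial sketch of one possible starting point (none of these class or member names come from the original post; C# is assumed only because the demos are WPF/PRISM), the repeated steps above map onto a small set of base classes:

        using System.Collections.Generic;
        using System.Linq;

        // Range/level settings shared by every problem type (step 1).
        public class ProblemSettings
        {
            public int Level { get; set; }        // e.g. beginner = 1, advanced = 2
            public int MinOperand { get; set; }   // e.g. 2 at beginner level
            public int MaxOperand { get; set; }   // e.g. 5 at beginner level
        }

        public class Answer
        {
            public string Value { get; set; }

            public override bool Equals(object obj)
            {
                var other = obj as Answer;
                return other != null && other.Value == Value;
            }

            public override int GetHashCode()
            {
                return Value == null ? 0 : Value.GetHashCode();
            }
        }

        // Base class each PRISM module would specialize for its problem kind.
        public abstract class MathProblem
        {
            private readonly IList<Answer> _expectedAnswers = new List<Answer>();

            protected MathProblem(ProblemSettings settings)
            {
                Settings = settings;
            }

            public ProblemSettings Settings { get; private set; }
            public IList<Answer> ExpectedAnswers { get { return _expectedAnswers; } }

            public abstract void Generate();      // step 2: build the problem data
            public abstract bool IsValid();       // step 3: e.g. reject 2/4 when 1/4 is wanted
            public abstract object CreateView();  // step 4: each problem supplies its own view

            // steps 5/6: check one or several user answers
            public virtual bool Check(IEnumerable<Answer> userAnswers)
            {
                return ExpectedAnswers.SequenceEqual(userAnswers);
            }
        }

    A concrete multiplication, area or fraction problem would then derive from MathProblem, and each PRISM module would register its own derived type together with its view.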

    Read the article

  • Ongoing confusion about ivars and properties in objective C

    - by Earl Grey
    After almost 8 months in iOS programming, I am again confused about the right approach. Maybe it is not the language but some OOP principle I am confused about, I don't know. I tried C# a few years back. There were fields (private variables, private data in an object), there were getters and setters (methods which exposed something to the world), and there were properties, which were THE exposed thing. I liked the elegance of the solution: for example, there could be a class with a property called DailyRevenue, a float, but no private variable called dailyRevenue, only a field holding an array of single transaction revenues, and the getter for the DailyRevenue property calculated the revenue transparently. If somehow the internals of the daily revenue calculation changed, it would not affect anybody who consumed my DailyRevenue property, since they would be shielded from the getter implementation. I understood that sometimes there was, and sometimes there wasn't, a 1-1 relationship between fields and properties, depending on the requirements. That seemed OK in my opinion, and properties were THE way to access the data in an object. I know the difference between the private, protected, and public keywords. Now let's get to Objective-C. On what factors should I base my decision about making something only an ivar or making it a property? Is the mental model the same as I describe above? I know that ivars are "protected" by default, not "private" as in C#. But that's OK, I think; no big deal for my present level of understanding of the whole iOS development. The point is that ivars are not accessible from outside (given that I don't make them public, but I won't). The thing that clouds my understanding is that I can have IBOutlets on ivars. Why am I seeing internal object data in the UI? Why is that OK? On the other hand, if I make an IBOutlet from a property and I do not make it readonly, anybody can change it. Is this OK too? Let's say I have a ParseManager object. This object would use a built-in Foundation framework class called NSXMLParser. Obviously my ParseManager will utilize this NSXMLParser's capabilities but will also do some additional work. Now my question is: who should initialize this NSXMLParser object, and in which way should I keep a reference to it from the ParseManager object, when there is a need to parse something?

        A) The ParseManager itself
           1) in its default init method (an ivar is possible here, or an ivar + property)
           2) with lazy loading in a getter (a property is required here)
        B) Some other object, which will pass a reference to the NSXMLParser object to the ParseManager object
           1) in some custom initializer (initWithParser:(NSXMLParser *)parser) when creating the ParseManager object

    A1 - the problem is that we create a parser and waste memory while it is not yet needed. However, we can be sure that all methods that are part of the ParseManager object can use the ivar safely, since it exists. A2 - the problem is that the NSXMLParser is exposed to the outside world, although it could be read-only. Would we ever want the parser to be exposed in some scenario? B1 - this could maybe be useful when we want to use more types of parsers... I don't know... I understand that architectural requirements and language are not the same thing, but clearly the two are related. How do I get out of this mess of mine? Please bear with me; I wasn't able to come up with a single ultimate question.
    And secondly, please don't scare me with some super-advanced newspeak about crazy internals (what the compiler does) and edge cases.
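    For what it's worth, here is a minimal editorial sketch of option B1 (every name apart from NSXMLParser is an assumption, not taken from the post): injecting the parser through a custom initializer keeps it as an implementation detail, a plain ivar with no public property, and leaves room for handing in differently configured parsers later.

        #import <Foundation/Foundation.h>

        // Option B1: the parser is handed in from outside.
        @interface ParseManager : NSObject
        - (instancetype)initWithParser:(NSXMLParser *)parser;
        - (BOOL)parse;
        @end

        // Private class extension: delegate conformance and internals stay hidden.
        @interface ParseManager () <NSXMLParserDelegate>
        @end

        @implementation ParseManager {
            NSXMLParser *_parser;   // plain ivar, never exposed as a public property
        }

        - (instancetype)initWithParser:(NSXMLParser *)parser {
            self = [super init];
            if (self) {
                _parser = parser;
                _parser.delegate = self;   // ParseManager does its extra work in the delegate callbacks
            }
            return self;
        }

        - (BOOL)parse {
            return [_parser parse];
        }

        @end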

    Read the article

  • SQL SERVER – Basic Calculation and PEMDAS Order of Operation

    - by pinaldave
    After thinking about it for a long time, I decided to write this blog post. I had no plan to create a blog post about this subject, but the amount of conversation this one created on my Facebook page made me decide to bring up a few of the questions and concerns discussed there. There are more than 10,000 comments so far, and a lot of discussion about what the answer should be. Well, as far as I can tell there is a big debate going on on Facebook; for educational purposes you should go ahead and read some of the comments. They are very interesting and for sure teach some new stuff. Even though some of the comments are clearly wrong, they make some good points, and I believe reading them for sure develops some logic. Here is my take on this subject. I believe the answer is 9, as I follow the PEMDAS order of operations. PEMDAS stands for Parentheses, Exponents, Multiplication, Division, Addition, Subtraction. PEMDAS is commonly known as BODMAS in India; BODMAS stands for Brackets, Orders (i.e. powers and square roots, etc.), Division, Multiplication, Addition and Subtraction. PEMDAS and BODMAS are almost the same, and in both of them operators of equal precedence are evaluated from LEFT to RIGHT. Let us simplify the expression 6 ÷ 2 (1+2) using PEMDAS or BODMAS (whichever you prefer to call it):

        Step 1: 6 ÷ 2 (1+2)      (parentheses first)
        Step 2: = 6 ÷ 2 * (1+2)  (adding the multiplication sign for further clarification)
        Step 3: = 6 ÷ 2 * (3)    (single digit in parentheses – simplify using the operator)
        Step 4: = 6 ÷ 2 * 3      (remember, the next operations go LEFT to RIGHT)
        Step 5: = 3 * 3          (because 6 ÷ 2 = 3; remember LEFT to RIGHT)
        Step 6: = 9              (final answer)

    Some find Step 4 confusing and end up multiplying 2 and 3 first, which would make Step 5 read 6 ÷ 6. That is incorrect, because in that case we did not follow the LEFT to RIGHT order. When we do not follow the order of operations from LEFT to RIGHT we end up with the answer 1, which is incorrect. Let us see what SQL Server returns as a result. I executed the following statement in SQL Server Management Studio:

        SELECT 6/2*(1+2)

    It is clear that SQL Server also thinks the answer should be 9. Let us go ahead and ask Google; I searched for the following term: 6/2(1+2). The result also says the answer should be 9. If you want a further reference, there is a great video which describes why the answer should be 9 and not 1, and a fantastic conversation on Google Groups. Well, now what is your take on this subject? You are welcome to share constructive feedback, and your answer may be different from mine. NOTE: A healthy conversation about this subject is indeed encouraged, but if a comment contains a single bad word or is flaming, it will be deleted without any notification (no matter how valuable the information it contains). Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
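    To see the two readings side by side in SQL Server (a small illustration added here, not part of the original post), the only way to get 1 is to write the extra grouping explicitly:

        SELECT 6/2*(1+2)    -- evaluated LEFT to RIGHT: returns 9
        SELECT 6/(2*(1+2))  -- grouping forced with parentheses: returns 1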

    Read the article

  • Upgrading from MVC 1.0 to MVC2 in Visual Studio 2010 and VS2008.

    - by Sam Abraham
    With MVC2 officially released, I was involved in a few conversations regarding the feasibility of upgrading existing MVC 1.0 projects to quickly leverage the newly introduced MVC features. Luckily, Microsoft has proactively addressed this question for both Visual Studio 2008 and 2010, and many online resources discussing the upgrade process are a "Bing/Google search" away. As I happen to be speaking about MVC2 and Visual Studio 2010 at the Ft Lauderdale ArcSig .Net User Group meeting on April 20th 2010 (check http://www.fladotnet.com for more info), I decided to include a quick demo on upgrading the NerdDinner project (which I consider the "Hello MVC World" project) from MVC 1.0 to MVC2 using Visual Studio 2010, to demonstrate how simple the upgrade process is. In the next few lines, I will briefly touch on upgrading to MVC2 with Visual Studio 2008, then discuss, in more detail, the upgrade process using Visual Studio 2010 while highlighting the advantage of its multi-targeting support.

    Using Visual Studio 2008 SP1: for upgrading to MVC2 using VS2008 SP1, a Microsoft white paper [1] presents two approaches: 1) using a provided automated upgrade tool, and 2) manually upgrading the project. I personally prefer using the automated tool, although it comes with an "AS IS" disclaimer. For those brave souls, or those who end up with no luck using the tool, detailed manual upgrade steps are also provided as a second option. Backing up the project in question is a must regardless of which route you take to upgrade.

    Using Visual Studio 2010: life is much easier for developers who have already adopted Visual Studio 2010. Simply opening the MVC 1.0 solution file brings up the upgrade wizard, as shown in figures 1 through 4. As we proceed with the upgrade process, the wizard asks for confirmation on whether we want to upgrade our target framework version to .Net 4.0 or keep the existing .Net 3.5 (figure 5). VS2010 does a good job with multi-targeting: we can still develop .Net 3.5 applications while leveraging all the new bells and whistles that VS2010 brings to the table (multi-targeting lets us develop for frameworks as early as .Net 2.0 in VS2010).

        Figure 1 - Open Solution File Using VS2010
        Figure 2 - VS2010 Conversion Wizard
        Figure 3 - Ready To Convert To VS2010 Confirmation Screen
        Figure 4 - VS2010 Solution Conversion Progress
        Figure 5 - Confirm Target Framework Upgrade

    In an attempt to make my demonstration realistic, I opted to keep the project targeted at the .Net 3.5 framework. After the successful completion of the conversion process, a quick sanity check revealed that the NerdDinner project is still targeted at the .Net 3.5 framework, as shown in figure 6. Inspecting the web.config revealed that the MVC DLL version our code compiles against has been successfully upgraded to 2.0 (figure 7), and hence we should now be able to leverage the newly introduced features in MVC2 and VS2010 with no effort or time spent modifying existing code.

        Figure 6 - Confirm Target Framework Remained .Net 3.5
        Figure 7 - Confirm MVC DLL Version Has Been Upgraded

    In conclusion, Microsoft has empowered developers with the tools necessary to quickly and seamlessly upgrade their MVC solutions to the newly released MVC2. The multi-targeting feature in Visual Studio 2010 enables us to adopt this latest and greatest development tool while supporting development in frameworks as early as .Net 2.0.

    References
    1. "Upgrading an ASP.NET MVC 1.0 Application to ASP.NET MVC 2" - http://www.asp.net/learn/whitepapers/aspnet-mvc2-upgrade-notes

    Read the article
