Search Results

Search found 22641 results on 906 pages for 'case'.


  • route port 3000 to apache2 alias

    - by user223470
    I have a meteor application running on port 3000. I can successfully connect to the program with www.myurl.com:3000, but would rather connect to it via www.myurl.com/myappname. I started with the instructions on this web site: http://www.andrehonsberg.com/article/deploy-meteorjs-vhosts-ubuntu1204-mongodb-apache-proxy and I have the following Apache configuration file:

        <VirtualHost *:80>
            ServerName myurl.com
            ProxyRequests off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            <Location />
                ProxyPass http://localhost:3000/
                ProxyPassReverse http://localhost:3000/
            </Location>
        </VirtualHost>

    I do not know how to continue from here to get the program on www.mysite.com/myapp. In other situations, I would use an Alias within the Apache configuration file, but that doesn't seem like the right direction to go in this case. How do I configure Apache to send port 3000 to www.myurl.com/myapp?
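    A hedged sketch of one way to do this (untested; only "myapp" and port 3000 come from the question, everything else is an assumption): mount the reverse proxy on the /myapp path instead of the site root, so Apache forwards www.myurl.com/myapp to the Meteor process. Note that Meteor also needs to know it lives under a prefix (its ROOT_URL environment variable), otherwise generated links will point back to /.

    ```apache
    # Sketch only: proxy a sub-path to the Meteor app listening on port 3000.
    <VirtualHost *:80>
        ServerName myurl.com
        ProxyRequests Off
        ProxyPreserveHost On

        ProxyPass        /myapp/ http://localhost:3000/
        ProxyPassReverse /myapp/ http://localhost:3000/
    </VirtualHost>
    ```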

    Read the article

  • links for 2011-02-22

    - by Bob Rhubart
    Eleven BI trends for 2011 | ITWeb Business Intelligence (tags: ping.fm)
    The Buttso Blathers: WebLogic Schema Files. Buttso shares a link. (tags: oracle weblogic)
    Cloud Computing & Enterprise Architecture | Open Group Blog. "On the first look, it may seem like Enterprise Architecture is irrelevant in a company if your complete IT is running on Cloud Computing, SaaS and outsourcing/offshoring. I was of the same opinion last year. However, it is not the case. In fact, the complexity is going to get multiplied." (tags: opengroup cloud enterprisearchitecture)
    James Taylor: Change Logging Level for SOA 11g. James says: "I’m sure there are many blogs out there that have this solution. But I seem to get asked this question a lot so I thought I would post it here for my convenience." (tags: oracle middleware soa)
    David Linthicum: The Truth behind Standards, SOA, and Cloud Computing. "Most of the standards we've worked on in the world of SOA over the past several years are applicable to the world of cloud computing. Cloud computing is simply a change in platform, and the existing architectural standards we leverage should transfer nicely to the cloud computing space." - David Linthicum (tags: enterprisearchitecture soa cloud)
    C. Martin Harris, MD: HIMSS11 Update from the Chairman. "We cannot allow ourselves to focus exclusively on near term goals. Our real goal is a technology-driven transformation of healthcare that will never stop. A true transformation is a process of lessons learned and applied, that continually open broad new horizons of opportunity." - C. Martin Harris, MD (tags: enterprisearchitecture modernization)

    Read the article

  • Computer Science Degrees and Real-World Experience

    - by Steven Elliott Jr
    Recently, at a family reunion-type event, I was asked by a high school student how important it is to get a computer science degree in order to get a job as a programmer, in lieu of actual programming experience. The kid has been working with Python and the Blender project as he's into making games and the like; it sounds like he has some decent programming chops. Now, as someone who has gone through a computer science degree, my initial response to this question is to say, "You absolutely MUST get a computer science degree in order to get a job as a programmer!" However, as I thought about this, I was unsure as to whether my initial reaction was due in part to my own suffering as a CS student or because I feel that this is actually the case. Now, for me, I can say that I rarely use anything that I learned in college, in terms of the extremely hard math, algorithms, etc., but I did come away with a decent attitude and the willingness to work through tough problems. I just don't know what to tell this kid; I feel like I should tell him to do the CS degree, but I have hired so many programmers that majored in things like English, Philosophy, and other liberal arts-type degrees, even some that never went to college. In fact, my best developer falls into this latter category. He got started writing software for his church or something and then it took off into a passion. So, while I know this is one of those juicy potential downvote questions, I am just curious as to what everyone else thinks about this topic. What would you tell a high school kid about this? Perhaps if he/she already knows a good deal of programming and loves it, he doesn't need a CS degree and could expand his horizons with a liberal arts degree. I know one of the creators of the Django web framework was an American Literature major, and he is obviously a pretty gifted developer. Anyway, thanks for the consideration.

    Read the article

  • HDD is not recognized/initialized via USB, only via SATA - is a reformat through USB a bad idea?

    - by Wuschelbeutel Kartoffelhuhn
    I have a 4TB Hitachi HDD that I purchased in Europe (I use it as a backup disk); I use Windows 7. When I connect it to a SATA port, it is recognized in Windows Explorer and gives no problems, even after transferring 3TB at a time or after being on for days. When I connect it via a SATA-to-USB2.0 adapter, it is also recognized, but when I transfer a large amount of data, it will intermittently stop being recognized by Windows Explorer and cancel the transfer. When I connect it via an external enclosure (which is technically a SATA-to-USB3.0 adapter), it does not display at all in Windows Explorer, but Disk Management will show the drive, albeit uninitialized (prompts for format). I only got the external enclosure because I want to backup my files more conveniently (instead of having to open the computer case each time). Do you advise against reformat/initialization via the external enclosure? Can it screw up things in an irrevocable way (Master Boot Record etc.)?

    Read the article

  • Enable hardware virtualization in BIOS?

    - by rhon
    I am running a FOXCONN AM2+ M61PMV board with an AMD Athlon II X2 240, on Windows 7 Ultimate 64-bit. At startup I have hit the Del key, and the options for enabling hardware virtualization are not there. I have checked the Microsoft tool, which says I can run virtualization, and I have checked SecurAble, which also says yes. But I have an open case with Microsoft (they've been trying for a week [7 tech support people later]) and they're saying that I need to ensure that hardware virtualization is enabled. Where do I go to see it? Is there another way besides going through the BIOS at startup?

    Read the article

  • Bug in CDP implementation

    - by Suraj
    We are developing a Linux-based Ethernet switch which has 6 ports. We have implemented the CDP protocol. I have connected a Cisco device to port 2. When I query the Cisco device, I get the reply, but instead of getting lan1 (port 1 = lan0 ... port 6 = lan5), I always get the interface name as eth0. The same is the case for all the ports. What changes are required to get the correct interface name? I will be very thankful for the information. The SNAP packet is received in the routine snap_rcv() in the file "linux._2.6.XX/net/802/psnap.c". Regards, Suraj.
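    A minimal C sketch of the likely fix (assumptions: the driver registers its receive callback via register_snap_client(), using the 2.6-era callback signature from net/802/psnap.c, and build_cdp_reply() is a hypothetical helper): take the interface name from the net_device the frame arrived on, rather than from a hard-coded "eth0".

    ```c
    #include <linux/kernel.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Sketch only: SNAP receive callback. The port name comes from dev->name
     * (e.g. "lan0" ... "lan5" if the switch ports are named that way). */
    static int cdp_snap_rcv(struct sk_buff *skb, struct net_device *dev,
                            struct packet_type *pt, struct net_device *orig_dev)
    {
            printk(KERN_INFO "CDP frame received on %s\n", dev->name);

            /* Build the CDP reply using the real ingress port for the Port ID TLV:
             * build_cdp_reply(dev->name, skb);   (hypothetical helper)
             */

            kfree_skb(skb);
            return 0;
    }
    ```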

    Read the article

  • What is the best way to configure Apache or AWS to support a Rails multi tenancy application that allows each customer to have their own domain name?

    - by Ryan Arneson
    I'm building a Rails 3 SaaS site that allows for multi-tenancy. When a customer signs up they put in their own domain name, e.g. example.com. I need example.com to point to my SaaS application and serve them their content. My questions are as follows:

        Do I need to create an Apache vhost for each customer using their own domain?
        Is there an easier way with CNAMEs to just have the customer point to the IP address of my server(s), which then forwards the request on to my application through some catch-all vhost?
        Would I be able to create the CNAME record for the customer so they don't have to do any setup?
        Would this be a case better suited to Amazon Web Services?

    Any help, explanation, or corrections on my understanding of DNS would be appreciated. I'm a developer, so the server ops portion of this is a bit cloudy.
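    A hedged sketch of the catch-all vhost idea (assumptions: the Rails app sits behind Apache on 127.0.0.1:3000, and app.mysaas.example is a hypothetical canonical hostname that customers would point a CNAME at). With ProxyPreserveHost on, the application still sees the customer's own domain in the Host header and can pick the tenant from request.host:

    ```apache
    # Sketch only: one catch-all vhost instead of one vhost per customer domain.
    <VirtualHost *:80>
        ServerName app.mysaas.example
        ServerAlias *

        ProxyPreserveHost On
        ProxyPass        / http://127.0.0.1:3000/
        ProxyPassReverse / http://127.0.0.1:3000/
    </VirtualHost>
    ```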

    Read the article

  • Why do strace/truss sometimes 'fix' stuck processes?

    - by Emmel
    Sometimes you have a stuck process that's been stuck for a while, and as soon as you go to poke at it with strace/truss just to see what's going on, it gets magically unstuck and continues to run! So merely 'observing' these programs has some impact on their running... what's happening here? Did strace (I guess via ptrace(2)?) send a signal, causing the program to cease blocking, or some such? I've seen this several times -- most recently on Linux RHEL 4 (and a Perl script mucking with processes and doing some network IO in that case), but in a few other contexts as well. Unfortunately, I can't reproduce this, as it tends to happen... in times of crisis. But my curiosity remains. :-) Any elucidation appreciated.
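    For reference, a typical way to attach to a stuck process (the PID and flags here are just an example); attaching via ptrace(2) can interrupt a blocked system call, which then returns EINTR or is restarted, and that is one plausible mechanism for the 'observation' unsticking things:

    ```sh
    # Attach to PID 12345 (hypothetical), follow forks, print timestamps,
    # and show longer string arguments; detach with Ctrl-C.
    strace -p 12345 -f -tt -s 200
    ```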

    Read the article

  • Why are some seasoned ASP.NET developers defecting to Ruby on Rails?

    - by Tony_Henrich
    Once in a while I hear a well-known ASP.NET developer declaring that they have quit developing in .NET and are moving to Ruby on Rails. The problem is they don't mention exactly the reasons. They use words like RoR is 'easier', 'better' & 'faster'. That really doesn't say much to me. Anyone care to do a faithful comparison using code samples, case studies, etc., or from personal experience in using both? Try to convince me to throw away all my years of learning C# and the .NET Framework using a powerful IDE (Visual Studio). Does RoR save you hours a week in development time? What are the major pain points in .NET that compel one to move away from it? This question is NOT about a pure RoR vs ASP.NET (MVC) comparison. It's about the compelling technical reasons (getting bored does not count!) to switch over after using a platform for several years and start with a new language and platform. (prefer this to be a wiki)

    Read the article

  • Computing pixel's screen position in a vertex shader: right or wrong?

    - by cubrman
    I am building a deferred rendering engine and I have a question. The article I took the sample code from suggested computing the screen position of the pixel as follows:

        VertexShaderFunction()
        {
            ...
            output.Position = mul(worldViewProj, input.Position);
            output.ScreenPosition = output.Position;
        }

        PixelShaderFunction()
        {
            input.ScreenPosition.xy /= input.ScreenPosition.w;
            float2 TexCoord = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);
            ...
        }

    The question is: what if I compute the position in the vertex shader (which should improve performance, as the vertex shader runs significantly fewer times than the pixel shader)? Would I get per-vertex lighting instead? Here is how I want to do this:

        VertexShaderFunction()
        {
            ...
            output.Position = mul(worldViewProj, input.Position);
            output.ScreenPosition.xy = output.Position / output.Position.w;
        }

        PixelShaderFunction()
        {
            float2 TexCoord = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);
            ...
        }

    What exactly happens with the data I pass from VS to PS? How exactly is it interpolated? Will it give me the right per-pixel result in this case? I tried launching the game both ways and saw no visual difference. Is my assumption right? Thanks. P.S. I am optimizing the point light shader, so I actually pass a sphere geometry into the VS.

    Read the article

  • As a solo programmer, of what use can Gerrit be?

    - by s.d
    Disclaimer: I'm aware of the questions How do I review my own code? and What advantages do continuous integration tools offer on a solo project?. I do feel that this question aims at a different set of answers though, as it is about a specific software, and not about general musings about the use of this or that practice or tech stack. I'm a solo programmer for an Eclipse RCP-based project. To be able to control myself and the quality of my software, I'm in the process of setting up a CSI stack. In this, I follow the Eclipse Common Build Infrastructure guidelines, with the exception that I will use Jenkins rather than Hudson. Sticking to these guidelines will hopefully help me in getting used to the Eclipse way of doing things, and I hope to be able to contribute some code to Eclipse in the future. However, CBI includes Gerrit, a code review software. While I think it's really helpful for teams, and I will employ it as soon as the team grows to at least two developers, I'm wondering how I could use it while I'm still solo. Question: Is there a use case for Gerrit that takes into account solo developers? Notes: I can imagine reviewing code myself after a certain amount of time to gain a little distance to it. This does complicate the workflow though, as code will only be built once it's been passed through code review. This might prove to be a "trap" as I might be tempted to quickly push bad code through Gerrit just to get it built. I'm very interested in hearing how other solo devs have made use of Gerrit.
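    For what it's worth, the solo self-review workflow uses the same push convention a team would (remote name assumed to be origin, target branch master); the change then waits in Gerrit until you come back to it with fresh eyes:

    ```sh
    # Push the commit to Gerrit for review instead of directly to the branch.
    git push origin HEAD:refs/for/master
    ```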

    Read the article

  • Attempting to dual boot Ubuntu and Windows 7 on Sony Vaio with Insyde H2O BIOS

    - by zach
    My situation is the same as the one addressed here: Sony VAIO with Insyde H2O EFI bios will not boot into GRUB EFI, and here: http://www.hackourlife.com/sony-vaio-with-insyde-h2o-efi-bios-ubuntu-12-04-dual-boot. I tried to install Ubuntu 12.04 from the Live CD alongside my current Windows 7. I have to switch my BIOS to legacy mode in order to boot from CD. If I do a normal installation and remain in legacy mode, the BIOS displays "operating system not found". If I switch back, the BIOS just boots to Windows. To solve the problem, I tried following the steps in the previous two articles. My drive is partitioned as:

        sda1  FAT32  location of the Windows EFI files (flagged as boot in the Ubuntu installer)
        sda2  unknown
        sda3  NTFS   Windows C:
        sda4  ext4   Ubuntu root
        sda5  swap
        sda6  ext4   Ubuntu home

    I was a little confused by the requirement in the second article to "be careful to install Grub bootloader in /dev/sda3". In my case, the relevant partition is sda1. I have tried three things: setting the sda1 mount point as /boot, as /boot/efi, and as the special reserved GRUB partition. In each install I indicated that GRUB should be installed in sda1. After each install I reboot to the live CD and look in sda1. I see EFI/Boot and EFI/Windows, but no EFI/Ubuntu and consequently no grubx64.efi. I understand the recommended procedure of moving grubx64.efi into the EFI/Boot directory and replacing the present bootx64.efi file, but I see no grubx64.efi and I don't know where it should be.
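    One likely reason there is no grubx64.efi at all is that the live CD was booted in legacy mode, so the installer set up the BIOS flavour of GRUB rather than the EFI one; that is an inference, not something the linked articles state outright. Once a grubx64.efi exists, the "recommended procedure" mentioned above usually looks like this sketch (the EFI/ubuntu path is the Ubuntu installer's usual location and is an assumption; sda1 is taken from the question):

    ```sh
    # Sketch only, from a live session.
    sudo mount /dev/sda1 /mnt

    # Keep a backup of the Windows fallback loader, then put GRUB's EFI image
    # in its place so the Insyde H2O firmware boots GRUB.
    sudo cp /mnt/EFI/Boot/bootx64.efi /mnt/EFI/Boot/bootx64.efi.bak
    sudo cp /mnt/EFI/ubuntu/grubx64.efi /mnt/EFI/Boot/bootx64.efi
    ```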

    Read the article

  • Join us at the Gartner MDM Summit in LA on April 4-5

    - by Dain C. Hansen
    This year, we're proud to announce that Oracle is a Platinum sponsor of the Gartner Master Data Management Summit this April 4 – 5, 2012 in Los Angeles. The event is a follow-on from the Gartner BI Summit, so if you are already attending that event, stick around on Wednesday and Thursday and don't miss it. In particular, don't miss our key session at 9:30 AM on Thursday, April 4th, "Brace for Impact: Key Trends in Master Data Management" with Ford Goodman and Dain Hansen. Master Data Management helps organizations perform better by creating a single coherent version of customers, products and suppliers. But how do you get started? And if you've laid a foundation, how do you become world-class? Designed to address all MDM maturity levels, the Gartner Master Data Management Summit delivers the tools, technologies and best practices to help you take control of your master data and dramatically improve business performance. Attendees will have the opportunity to meet with Oracle experts in a variety of sessions, including demonstrations during the showcase receptions:

        Oracle Customer Case Study and Solution Provider Session
        Oracle Solution Showcase Receptions
        Oracle Face-to-Face Meetings

    Hot topics to be covered:

        Forecasting key trends shaping MDM
        Building a business-driven MDM program
        Assessing MDM maturity
        Creating the MDM organization
        Evolving to multidomain MDM

    To learn more about this event, or to register, click here.

    Read the article

  • What does the email header "X-CAA-SPAM" refer to?

    - by lotri
    I've got an application that sends out notification emails to users of the application (this is not spam; the information in these emails is solicited and useful, and is also a feature turned off by default and must be enabled by the user). The app is still in beta, and one of our testers reports that the notification emails are going to his junk mail folder in Outlook 2003. This is the only reported case of this, but I asked him to send me the email headers from the message, and I noticed that there is a header there labeled "X-CAA-SPAM" with a value of 00000 . I'm a programmer, so I'm fairly green in the world of successful automated emails - does anyone know if this header is the culprit? If not, any suggestions?

    Read the article

  • Have programmers at your work not taken up or been averse to an offer of a second monitor?

    - by Chris Knight
    I'm putting together a business case for the developers in my company to get a second monitor. After my own experiences and research, this seems a no-brainer to me in terms of increasing productivity and morale/happiness. One question which has niggled me is whether I should be pushing to get all developers onto a second monitor or let folk opt in (i.e. they get one if they want one). Thoughts on this are welcome, but my specific question relates to a snippet on this site: "But when the IT manager at Thibeault's company asked other employees if they wanted dual monitors last year, few jumped at the offer." Blinded by my own pre-judgement, I was surprised by this. Has anyone else experienced it? I fully appreciate that some people prefer a single larger monitor, but my general experience of researching the web suggests that most programmers prefer a dual (or more) setup. I'm guessing this should be tempered with the thought that the developers who contribute to such discussions may not represent the average developer, who might not care one way or the other. Anyway, if you have experienced the above, have you tried to sell the concept of dual monitors to the masses? If everyone just got 2 monitors regardless of whether they wanted them or not, were there adverse reactions or negative effects? UPDATE: The developers are on a mixture of 17", 22", or 24" single monitors. The desks should be able to accommodate the dual 22" monitors I am proposing, though this will take some getting used to, I imagine.

    Read the article

  • OSX Reset Home Permissions and ACLs - how long should it take?

    - by andyface
    Having screwed up my permissions by trying to have two users access one home folder, I'm now going through the process of resetting my main user's home permissions via the OS X install disk utilities, which seems to be taking a while. Does this process normally take a long time? I assume it depends on how many files are in the folder, which in my case is a good few hundred GB. At what point should I be concerned that it may have got stuck, and should thus reset my computer and try again? I assume, though I'm not sure, that if the little circle indicator is still moving then it's not completely frozen, but as there's no progress bar or details I'm not sure how true that is.

    Read the article

  • Is there a way to create a copy-on-write copy of a directory?

    - by BCS
    I'm thinking of a situation where I would have something that creates a copy of a directory, tweaks a few files, and then does some processing on the result. This would be done fairly often, maybe a few dozen times a day. (The exact use case is testing patch submissions; dupe the code, patch it, build/test/report/etc.) What I'm looking for could be done by creating a new directory structure and populating it with hard links from the original. However, this only works if all the tools you use delete and recreate files rather than edit them in place. Is there a way to have the file system do copy-on-write for a file?
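    One possibility, assuming GNU coreutils and a reflink-capable filesystem (e.g. Btrfs, or XFS/OCFS2 with reflink support): cp can clone files copy-on-write, which gives the 'copy, tweak a few files, process' workflow without the edit-in-place caveat of hard links:

    ```sh
    # Clone the tree with copy-on-write extents; only modified blocks take new space.
    # --reflink=always fails loudly if the filesystem cannot reflink
    # (use --reflink=auto to fall back to a normal copy instead).
    cp -a --reflink=always original-tree/ work-tree/
    ```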

    Read the article

  • Sendmail is refusing connection after configuring SMTP relay

    - by coder
    I'm setting up sendmail on my home computer to use with my webserver. I've set it to use the SMTP server provided by my hosting company. If I use the following command, it works, and then I enter the To and From emails:

        sendmail -Am -t -v

    But if I try the following, it does not work:

        sendmail -v [email protected] < test.txt

    The To email is the same as in the earlier command, but in this case I haven't specified a From e-mail, which I think is the problem. My guess is that it's sending the mail from user@localhost, causing the SMTP server to reject it. If so, how do I make it send from [email protected]?
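    A hedged guess at the missing piece (the addresses below are placeholders, not the real ones): set the envelope sender explicitly with -f, and/or give test.txt its own From: header, so the relay never sees user@localhost:

    ```sh
    # -f sets the envelope "MAIL FROM" address; -v shows the SMTP dialogue.
    sendmail -v -f [email protected] [email protected] < test.txt

    # test.txt can also carry its own headers, e.g.:
    #   From: [email protected]
    #   Subject: test
    #
    #   message body
    ```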

    Read the article

  • Database OR Array

    - by rezoner
    What is the exact point of using an external database system if I have simple relations (95% of queries are dependent on ID)? I am storing users and their stats. Why would I use an external database if I can have neat constructions like:

        db.users[32] = something

    An array of 500K users is not that big a load for RAM. Pros are:

        no problematic asynchronicity (instant results)
        easy export/import
        dealing with the database like with a native object, LITERALLY

    P.S. and considerations:

        Would it be faster or slower to do collection[3] than db.query("select ...
        I am going to store it as a file/s.
        There is only ONE application/process accessing this data, and the code is executed line by line - please don't elaborate about locking.
        Please don't answer with database propositions but with reasons to use an external DB over a native array/object - I have experience with a few databases, so that's not the issue.

    What I am building is a client/gateway/server(s) game. The gateway deals with all user data, processing, authenticating, writing statistics, etc. No other part of the software needs to access this data/database directly.
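    A minimal sketch of the 'native object plus file persistence' approach described above (Node.js assumed, since the snippets look like JavaScript; the file name is hypothetical). It shows the easy export/import and the instant, synchronous access listed as pros:

    ```js
    const fs = require("fs");

    // Import: load the whole user table into memory, or start empty on first run.
    const db = {
      users: fs.existsSync("users.json")
        ? JSON.parse(fs.readFileSync("users.json", "utf8"))
        : {}
    };

    // Reads and writes are plain object access: no query layer, no callbacks.
    db.users[32] = { name: "player32", wins: 10 };

    // Export: persist periodically and on shutdown. Simple, but the whole
    // file is rewritten each time and a crash can lose the last interval.
    function save() {
      fs.writeFileSync("users.json", JSON.stringify(db.users));
    }
    setInterval(save, 60 * 1000);
    process.on("exit", save);
    ```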

    Read the article

  • In a multi-domain forest, what EXACTLY happens when some, but not all, of the Infrastructure Masters are on Global Catalogs?

    - by MDMarra
    There are plenty of TechNet articles, like this one, that say that phantom objects don't get updated if an Infrastructure Master is also a Global Catalog, but other than that there isn't a lot of in-depth information on what actually happens in this configuration. Imagine a configuration like this:

        |--------------|
        | example.com  |
        |              |
        | dedicated IM |
        |--------------|
               |
               |
               |
        |-------------------|
        | child.example.com |
        |                   |
        | IM on a GC        |
        |-------------------|

    Where child has two DCs that are both global catalogs, meaning that the Infrastructure Master role is on a GC, and example has three DCs with the Infrastructure Master role on a DC that is not a GC. I understand that it's usually best to just make everything a GC and not have to worry about this sort of thing, but assuming that's not the case - what is the exact error behavior that can be expected from a setup like this, and which domain(s) would this behavior manifest in? The child or the parent?

    Read the article

  • Is Joel Test really a good gauging tool?

    - by henry
    I just learned about the Joel Test. I have been a computer programmer for 22 years, but somehow never heard about it before. I consider my best job so far to be this small investment management company with 30 employees and only 3 people in the IT department. I am no longer with them, but I had been working there for 5 years - my longest streak with any given company. To my surprise they scored extremely poorly on the Joel Test. The only two questions I would answer "yes" to are #4: Do you have a bug database? and #9: Do you use the best tools money can buy? Everything else is either "sometimes" or a straight "no". Here is what I liked about the company, however:

        a) Good pay. They bragged about it to my face and I bragged about it to their face, so it was almost like a family environment.
        b) I always knew the big picture. When writing code to solve a particular problem there was no ambiguity about the business nature of that problem. Even though we did not always have written specs, we could ask business users a question anytime, often yelling it across the floor. I could even talk to executives any time I felt like doing it: no appointment necessary.
        c) Immediate feedback. Once we implemented a solution and made business users happy, they immediately let us know; we (programmers) became the heroes of the moment.
        d) No red tape. I could always buy any tools I deemed necessary, and design solutions the way my professional judgment dictated.
        e) Flexibility. If I had a mid-day dental appointment near my house rather than near the office, I would send an email to the company: "FYI: I work from home today". As long as one of the 3 IT guys was on the floor (to help traders in case their monitors went dark), they did not care where the 2 others were.

    So the question thus becomes: how valuable is the Joel Test? Why bother with it?

    Read the article

  • OBJECT_Name parameters and dbid

    - by steveh99999
    If you've been using SQL Server for a long time, you may have been used to using the OBJECT_NAME system function in the past - especially useful when converting table IDs into table names when querying sysobjects and sysindexes. However, if you're an old-school DBA, did you know that since SQL 2005 Service Pack 2 it accepts a second parameter: database_id? For example, this can be used to summarize some useful information from sys.dm_exec_query_stats. When reviewing SQL Server performance, it can be useful to look at the most heavily used stored procedures rather than inefficient, less frequently used procedures. Here's a query to summarize performance data on the most heavily used stored procedures across all databases on a server:

        SELECT TOP 20
            DENSE_RANK() OVER (ORDER BY SUM(execution_count) DESC) AS rank,
            OBJECT_NAME(qt.objectid, qt.dbid) AS 'proc name',
            (CASE WHEN qt.dbid = 32767 THEN 'mssqlresource' ELSE DB_NAME(qt.dbid) END) AS 'Database',
            OBJECT_SCHEMA_NAME(qt.objectid, qt.dbid) AS 'schema',
            SUM(execution_count) AS 'TotalExecutions',
            SUM(total_worker_time) AS 'TotalCPUTimeMS',
            SUM(total_elapsed_time) AS 'TotalRunTimeMS',
            SUM(total_logical_reads) AS 'TotalLogicalReads',
            SUM(total_logical_writes) AS 'TotalLogicalWrites',
            MIN(creation_time) AS 'earliestPlan',
            MAX(last_execution_time) AS 'lastExecutionTime'
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
        WHERE OBJECT_NAME(qt.objectid, qt.dbid) IS NOT NULL
        GROUP BY OBJECT_NAME(qt.objectid, qt.dbid), qt.dbid, OBJECT_SCHEMA_NAME(qt.objectid, qt.dbid)
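    A minimal usage example of the two-parameter form (the object and database below are just well-known system objects chosen for illustration):

    ```sql
    -- Resolve an object ID that belongs to another database.
    DECLARE @obj int;
    SET @obj = OBJECT_ID('msdb.dbo.sysjobs');

    SELECT OBJECT_NAME(@obj)                AS resolved_in_current_db, -- likely NULL outside msdb
           OBJECT_NAME(@obj, DB_ID('msdb')) AS resolved_in_msdb;       -- 'sysjobs'
    ```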

    Read the article

  • How to disable Apache http compression (mod_deflate) when SSL stream is compressed

    - by Mohammad Ali
    I found that Google Chrome supports SSL compression and Firefox should support it soon. I'm trying to configure Apache to disable HTTP compression if SSL compression is used, to prevent CPU overhead, with the configuration option:

        SetEnvIf SSL_COMPRESS_METHOD DEFLATE no-gzip

    While the custom log (using %{SSL_COMPRESS_METHOD}x) shows that the SSL-layer compression method is DEFLATE, the above option did not work and the HTTP response content is still being compressed by Apache. I had to use the option:

        BrowserMatchNoCase ".Chrome." no-gzip

    I would prefer a more general method, in case other browsers support SSL compression or someone has a version of Chrome that does not have SSL compression.

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles. A simplified example. We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same Join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing that isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well. What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion. What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables so how could it do any better? 
Well, it behaves as if it were doing a recompile. And an explicit recompile adds no value at all. (we just go up to 45000 rows since we know the bigger picture now)   Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the Table variable and the temporary table. The table variable is faster up to 8000 rows and then not much in it up to 100,000 rows. Past the 8000 row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that Option (RECOMPILE) hint. Count Dracula and the Horror Join. These tables of integers provide a rather unreal example, so let’s try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable Heap and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’. i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey. If both tables contain a Primary Key on the columns we join on, and both are Table Variables, it took 33 Ms. If one table contains a Primary Key, and the other is a heap, and both are Table Variables, it took 46 Ms. If both Table Variables use a unique constraint, then the query takes 36 Ms. If neither table contains a Primary Key and both are Table Variables, it took 116383 Ms. Yes, nearly two minutes!! If both tables contain a Primary Key, one is a Table Variables and the other is a temporary table, it took 113 Ms. If one table contains a Primary Key, and both are Temporary Tables, it took 56 Ms.If both tables are temporary tables and both have primary keys, it took 46 Ms. Here we see table variables which are joined on their primary key again enjoying a  slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use option Recompile? If you take the rogue query and add the hint, then suddenly, the query drops its time down to 76 Ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans.So where have we got to? Without drilling down into the minutiae of the execution plans we can begin to create a hypothesis. 
If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use option (RECOMPILE) If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble, well- slow queries, unless you give the table hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context). Kevin’s Skew In describing the table-size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100000 rows in which the key column was one number (1) . To this was added eight rows with sequential numbers up to 9. When this was joined to a single-tow Table Variable with a key of 2 it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows in the table, then it adopted exactly the same execution plan as for the temporary table and then all was well. Undeniably, real data does occasionally cause problems to the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, then the plans were identical across a huge range. Conclusions Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in the TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘Heaps’, unindexed. Beware, however, since, as the number of rows increase, joins on Table Variable heaps can easily become saddled by very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Tables Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables. 
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.
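    A compact T-SQL sketch of the pattern discussed above (table names and row counts are arbitrary): two table variables joined on their primary keys, with OPTION (RECOMPILE) so the plan is compiled using the actual row counts rather than the optimizer's default low estimate.

    ```sql
    -- Sketch only: join two table variables.
    DECLARE @a TABLE (id int PRIMARY KEY);
    DECLARE @b TABLE (id int PRIMARY KEY);

    -- Fill @a with 50,000 rows and @b with half of them.
    INSERT INTO @a (id)
    SELECT TOP (50000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_columns c1 CROSS JOIN sys.all_columns c2;

    INSERT INTO @b (id)
    SELECT id FROM @a WHERE id % 2 = 0;

    -- Without OPTION (RECOMPILE) the optimizer typically assumes very few rows
    -- per table variable; with it, the join is costed against real cardinality.
    SELECT COUNT(*)
    FROM @a AS a
    JOIN @b AS b ON b.id = a.id
    OPTION (RECOMPILE);
    ```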

    Read the article

  • Naming methods that do the same thing but return different types

    - by Konstantin Ð.
    Let's assume that I'm extending a graphical file chooser class (JFileChooser). This class has methods which display the file chooser dialog and return a status signature in the form of an int: APPROVE_OPTION if the user selects a file and hits Open/Save, CANCEL_OPTION if the user hits Cancel, and ERROR_OPTION if something goes wrong. These methods are called showDialog(). I find this cumbersome, so I decide to make another method that returns a File object: in the case of APPROVE_OPTION, it returns the file selected by the user; otherwise, it returns null. This is where I run into a problem: would it be okay for me to keep the showDialog() name, even though methods with that name — and a different return type — already exist? To top it off, my method takes an additional parameter: a File which denotes the directory in which the file chooser should start. My question to you: is it okay to give a method the same name as a superclass method if they return different types? Or would that be confusing to API users? (If so, what other name could I use?) Alternatively, should I keep the name and change the return type so it matches that of the other methods?

        public int showDialog(Component parent, String approveButtonText)  // Superclass method
        public File showDialog(Component parent, File location)            // My method
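    A small Java sketch of one naming alternative (the method and class names are just suggestions), which sidesteps the question by giving the File-returning convenience method its own verb instead of reusing showDialog():

    ```java
    import java.awt.Component;
    import java.io.File;
    import javax.swing.JFileChooser;

    public class FileChooserUtil {

        /**
         * Convenience wrapper: shows an open dialog starting in the given
         * directory and returns the chosen File, or null if the user
         * cancelled or an error occurred.
         */
        public static File chooseFile(Component parent, File startDirectory) {
            JFileChooser chooser = new JFileChooser(startDirectory);
            int status = chooser.showOpenDialog(parent);
            return (status == JFileChooser.APPROVE_OPTION)
                    ? chooser.getSelectedFile()
                    : null;
        }
    }
    ```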

    Read the article
