Search Results

Search found 24201 results on 969 pages for 'andrew case'.


  • Why are some seasoned ASP.NET developers defecting to Ruby on Rails?

    - by Tony_Henrich
    Once in a while I hear some well-known ASP.NET developer declare that they have quit developing in .NET and are moving to Ruby on Rails. The problem is they don't mention exactly the reasons. They use words like RoR is 'easier', 'better' & 'faster'. That really doesn't say much to me. Anyone care to do a faithful comparison using code samples, case studies, etc., or from personal experience of using both? Try to convince me to throw away all my years of learning C# and the .NET Framework with a powerful IDE (Visual Studio). Does RoR save you hours a week in development time? What are the major pain points in .NET that compel one to move away from it? This question is NOT about a pure RoR vs ASP.NET (MVC) comparison. It's about the compelling technical reasons (getting bored does not count!) to switch over after using a platform for several years and start with a new language and platform. (I'd prefer this to be a wiki.)

    Read the article

  • Why do strace/truss sometimes 'fix' stuck processes?

    - by Emmel
    Sometimes you have a process that's been stuck for a while, and as soon as you go to poke at it with strace/truss just to see what's going on, it gets magically unstuck and continues to run! So merely 'observing' these programs has some impact on the running of the stuck programs... what's happening here? Did strace (I guess via ptrace(2)?) send a signal, causing the program to stop blocking, or something like that? I've seen this several times -- most recently on Linux RHEL 4 (with a Perl script mucking with processes and doing some network IO in that case), but in a few other contexts as well. Unfortunately, I can't reproduce this, as it tends to happen... in times of crisis. But my curiosity remains. :-) Any elucidation appreciated.
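
    One plausible mechanism (a guess along the lines of the ptrace/signal theory above, not a confirmed answer): a signal delivered to a process that is parked in a blocking system call can make that call return early, and attaching a tracer involves signalling the target. The Python sketch below simulates only that last step, interrupting a blocking read with SIGALRM; the pipe and the two-second alarm are illustrative assumptions, not part of the original scenario.

        import os
        import signal

        def on_alarm(signum, frame):
            # Raising here stops Python from transparently retrying the call (PEP 475),
            # so the interruption of the blocking read becomes visible.
            raise InterruptedError("blocking read interrupted by a signal")

        signal.signal(signal.SIGALRM, on_alarm)
        read_end, _write_end = os.pipe()   # nothing is ever written, so read() blocks
        signal.alarm(2)                    # deliver SIGALRM after ~2 seconds

        try:
            os.read(read_end, 1)           # parked in the kernel until the signal lands
        except InterruptedError as exc:
            print("read() was knocked out of its blocking wait:", exc)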

    Read the article

  • Join us at the Gartner MDM Summit in LA on April 4-5

    - by Dain C. Hansen
    This year, we're proud to announce that Oracle is a Platinum sponsor of the Gartner Master Data Management Summit, April 4-5, 2012 in Los Angeles. The event is a follow-on from the Gartner BI Summit, so if you are already attending that event, stick around on Wednesday and Thursday and don't miss it. In particular, don't miss our key session at 9:30 AM on Thursday April 4th, "Brace for Impact: Key Trends in Master Data Management" with Ford Goodman and Dain Hansen. Master Data Management helps organizations perform better by creating a single, coherent version of customers, products and suppliers. But how do you get started? And if you've already laid a foundation, how do you become world-class? Designed to address all MDM maturity levels, the Gartner Master Data Management Summit delivers the tools, technologies and best practices to help you take control of your master data and dramatically improve business performance. Attendees will have the opportunity to meet with Oracle experts in a variety of sessions, including demonstrations during the showcase receptions: an Oracle customer case study and solution provider session, the Oracle solution showcase receptions, and Oracle face-to-face meetings. Hot topics to be covered: forecasting key trends shaping MDM, building a business-driven MDM program, assessing MDM maturity, creating the MDM organization, and evolving to multidomain MDM. To learn more about this event, or to register, click here.

    Read the article

  • What does the email header "X-CAA-SPAM" refer to?

    - by lotri
    I've got an application that sends out notification emails to users of the application (this is not spam; the information in these emails is solicited and useful, and the feature is turned off by default and must be enabled by the user). The app is still in beta, and one of our testers reports that the notification emails are going to his junk mail folder in Outlook 2003. This is the only reported case of this, but I asked him to send me the email headers from the message, and I noticed that there is a header labeled "X-CAA-SPAM" with a value of 00000. I'm a programmer, so I'm fairly green in the world of getting automated emails delivered successfully - does anyone know if this header is the culprit? If not, any suggestions?

    Read the article

  • Have programmers at your work not taken up or been averse to an offer of a second monitor?

    - by Chris Knight
    I'm putting together a business case for the developers in my company to get a second monitor. After my own experiences and research, this seems like a no-brainer to me in terms of increasing productivity and morale/happiness. One question which has niggled me is whether I should be pushing to get all developers onto a second monitor or let folks opt in (i.e. they get one if they want one). Thoughts on this are welcome, but my specific question relates to a snippet on this site: "But when the IT manager at Thibeault's company asked other employees if they wanted dual monitors last year, few jumped at the offer." Blinded by my own prejudgement, I was surprised by this. Has anyone else experienced it? I fully appreciate that some people prefer a single larger monitor, but my general experience of researching the web suggests that most programmers prefer a dual (or more) setup. I'm guessing this should be tempered with the thought that the developers who contribute to such discussions might not be representative of the average developer, who might not care one way or the other. Anyway, if you have experienced the above, have you tried to sell the concept of dual monitors to the masses? If everyone just got two monitors regardless of whether they wanted them or not, were there adverse reactions or negative effects? UPDATE: The developers are on a mixture of 17", 22", or 24" single monitors. The desks should be able to accommodate the dual 22" monitors I am proposing, though this will take some getting used to, I imagine.

    Read the article

  • As a solo programmer, of what use can Gerrit be?

    - by s.d
    Disclaimer: I'm aware of the questions "How do I review my own code?" and "What advantages do continuous integration tools offer on a solo project?". I do feel that this question aims at a different set of answers though, as it is about a specific piece of software, and not general musings about the use of this or that practice or tech stack. I'm a solo programmer on an Eclipse RCP-based project. To keep myself and the quality of my software in check, I'm in the process of setting up a CI stack. In this, I follow the Eclipse Common Build Infrastructure guidelines, with the exception that I will use Jenkins rather than Hudson. Sticking to these guidelines will hopefully help me get used to the Eclipse way of doing things, and I hope to be able to contribute some code to Eclipse in the future. However, CBI includes Gerrit, a code review tool. While I think it's really helpful for teams, and I will employ it as soon as the team grows to at least two developers, I'm wondering how I could use it while I'm still solo. Question: Is there a use case for Gerrit that takes solo developers into account? Notes: I can imagine reviewing code myself after a certain amount of time to gain a little distance from it. This does complicate the workflow though, as code will only be built once it has passed through code review. This might prove to be a "trap", as I might be tempted to quickly push bad code through Gerrit just to get it built. I'm very interested in hearing how other solo devs have made use of Gerrit.

    Read the article

  • Attempting to dual boot Ubuntu and Windows 7 on Sony Vaio with Insyde H2O BIOS

    - by zach
    My situation is the same as the one addressed in Sony VAIO with Insyde H2O EFI bios will not boot into GRUB EFI and in http://www.hackourlife.com/sony-vaio-with-insyde-h2o-efi-bios-ubuntu-12-04-dual-boot - I tried to install Ubuntu 12.04 from the Live CD alongside my current Windows 7. I have to switch my BIOS to legacy mode in order to boot from CD. If I do a normal installation and remain in legacy mode, the BIOS displays "operating system not found". If I switch back, the BIOS just boots into Windows. To solve the problem, I tried following the steps in the two articles above. My drive is partitioned as:
    sda1 FAT32 location of the Windows EFI files (flagged as boot in the Ubuntu installer)
    sda2 unknown
    sda3 NTFS Windows C:
    sda4 ext4 Ubuntu root
    sda5 swap
    sda6 ext4 Ubuntu home
    I was a little confused by the requirement in the second article to "be careful to install Grub bootloader in /dev/sda3"; in my case, the relevant partition is sda1. I have tried three things: setting the sda1 mount point as /boot, as /boot/efi, and as the special reserved GRUB partition. In each install I indicated that GRUB should be installed to sda1. After each install I reboot to the live CD and look in sda1. I see EFI/Boot and EFI/Windows, but no EFI/Ubuntu and consequently no grubx64.efi. I understand the recommended procedure of moving grubx64.efi into the EFI/Boot directory and replacing the existing bootx64.efi file, but I see no grubx64.efi and I don't know where it should be.
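
    For reference, the "replace bootx64.efi with grubx64.efi" step mentioned above boils down to a backup-then-copy on the EFI system partition, which of course presupposes that grubx64.efi exists (exactly what is missing here). A minimal sketch, assuming the partition is mounted at /mnt/sda1 from the live CD; the paths are assumptions, not taken from the question.

        import shutil
        from pathlib import Path

        esp = Path("/mnt/sda1/EFI")                # assumed mount point of the EFI system partition (sda1)
        grub = esp / "ubuntu" / "grubx64.efi"      # written here by a successful GRUB-EFI install
        boot = esp / "Boot" / "bootx64.efi"        # the fallback loader the firmware actually runs

        if not grub.exists():
            raise SystemExit("grubx64.efi not found - the GRUB EFI install did not complete")

        shutil.copy2(boot, boot.with_name(boot.name + ".bak"))  # keep the original fallback loader
        shutil.copy2(grub, boot)                                # make the firmware load GRUB instead
        print("Replaced", boot, "with a copy of", grub)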

    Read the article

  • Sendmail is refusing connection after configuring SMTP relay

    - by coder
    I'm setting up sendmail on my home computer to use with my webserver. I've set it to relay through the SMTP server provided by my hosting company. If I use the following command, it works: sendmail -Am -t -v and then I enter the To and From addresses. But if I try the following, it does not work: sendmail -v [email protected] < test.txt The To address is the same as in the earlier command, but in this case I haven't specified a From address, which I think is the problem. My guess is that it's sending the mail from user@localhost, causing the SMTP server to reject it. If so, how do I make it send from [email protected]?
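
    One way to take the guesswork out of the envelope sender is to hand the message to the relay yourself and state the sender explicitly, independent of the sendmail command line. A minimal sketch using Python's smtplib; the relay host, port, credentials and addresses below are placeholders, not values from the question.

        import smtplib
        from email.message import EmailMessage

        RELAY_HOST = "smtp.example.com"   # your hosting company's SMTP relay
        FROM_ADDR = "me@example.com"      # an envelope sender the relay will accept
        TO_ADDR = "recipient@example.com"

        msg = EmailMessage()
        msg["From"] = FROM_ADDR
        msg["To"] = TO_ADDR
        msg["Subject"] = "Test"
        msg.set_content(open("test.txt").read())

        with smtplib.SMTP(RELAY_HOST, 587) as smtp:
            smtp.starttls()
            smtp.login("username", "password")            # only if the relay requires authentication
            smtp.send_message(msg, from_addr=FROM_ADDR)   # explicit envelope sender, not user@localhost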

    Read the article

  • Is there a way to create a copy-on-write copy of a directory?

    - by BCS
    I'm thinking of a situation where I would have something that creates a copy of a directory, tweaks a few files, and then does some processing on the result. This would be done fairly often, maybe a few dozen times a day. (The exact use case is testing patch submissions; dupe the code, patch it, build/test/report/etc.) What I'm looking for could be done by creating a new directory structure and populating it with hard links from the original. However, this only works if all the tools you use delete and recreate files rather than edit them in place. Is there a way to have the file system do copy-on-write for a file? Note: I'm aware that many FSs use COW at a block level (all updates are done via writes to new blocks), but this is not what I want.
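
    For the hard-link approach described above (which, as noted, only protects the original if tools replace files rather than edit them in place), the tree can be mirrored with a few lines of Python; GNU cp does the same thing with cp -al. The source and destination paths here are made-up examples.

        import os

        def link_tree(src: str, dst: str) -> None:
            """Recreate the directory structure of src under dst,
            hard-linking every regular file instead of copying it."""
            for root, dirs, files in os.walk(src):
                rel = os.path.relpath(root, src)
                target_dir = dst if rel == "." else os.path.join(dst, rel)
                os.makedirs(target_dir, exist_ok=True)
                for name in files:
                    os.link(os.path.join(root, name), os.path.join(target_dir, name))

        # Example: dupe the checkout, then patch/build/test inside the copy.
        link_tree("/srv/builds/original", "/srv/builds/patched-copy")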

    Read the article

  • Database OR Array

    - by rezoner
    What is the exact point of using an external database system if I have simple relations (95% of queries depend on ID)? I am storing users and their stats. Why would I use an external database if I can have neat constructions like: db.users[32] = something An array of 500K users is not that big an effort for RAM. The pros are: no problematic asynchronicity (instant results), easy export/import, and dealing with the data like with a native object, LITERALLY. P.S. and considerations: Would it be faster or slower to do collection[3] than db.query("select ...? I am going to store it as a file/s. There is only ONE application/process accessing this data, and the code is executed line by line - please don't elaborate about locking. Please don't answer with database propositions but with reasons to use an external DB over a native array/object - I have experience with a few databases - that's not the issue. What I am building is a client/gateway/server(s) game. The gateway deals with all user data: processing, authenticating, writing statistics, etc. No other part of the software needs direct access to this data/database.
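
    A minimal sketch of the in-memory, ID-keyed approach being described, with the whole collection snapshotted to a file for persistence. The poster's code looks like JavaScript; this Python version is only an illustration, and the file name and record fields are assumptions.

        import json

        USERS_FILE = "users.json"   # assumed snapshot location

        # Load everything into a plain dict keyed by user ID - the "native object" approach.
        try:
            with open(USERS_FILE) as fh:
                users = {int(k): v for k, v in json.load(fh).items()}
        except FileNotFoundError:
            users = {}

        def save() -> None:
            """Persist the whole collection; for a few hundred thousand small records this stays cheap."""
            with open(USERS_FILE, "w") as fh:
                json.dump(users, fh)

        # Lookups and updates are direct dictionary operations - no query layer, no round trip.
        users[32] = {"name": "player32", "wins": 10, "losses": 3}
        users[32]["wins"] += 1
        save()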

    Read the article

  • OSX Reset Home Permissions and ACLs - how long should it take?

    - by andyface
    Having screwed up my permissions by trying to have two users access one home folder, I'm now going through the process of resetting my main user's home permissions via the OS X install disk utilities, which seems to be taking a while. Does this process normally take a long time? I assume it depends on how many files I have in the folder, which in my case is a good few hundred GB. At what point should I be concerned that it may have got stuck, and should therefore reset my computer and try again? I assume, though I'm not sure, that if the little circle indicator is still moving then it's not completely frozen, but as there's no progress bar or details I'm not sure how true that is.

    Read the article

  • Is Joel Test really a good gauging tool?

    - by henry
    I just learned about the Joel Test. I have been a computer programmer for 22 years, but somehow never heard of it before. I consider my best job so far to be at a small investment-management company with 30 employees and only 3 people in the IT department. I am no longer with them, but I worked there for 5 years - my longest streak with any given company. To my surprise, they scored extremely poorly on the Joel Test. The only two questions I would answer "yes" to are #4: Do you have a bug database? and #9: Do you use the best tools money can buy? Everything else is either "sometimes" or a straight "no". Here is what I liked about the company, however: a) Good pay; they bragged about it to my face and I bragged about it to their face, so it was almost like a family environment. b) I always knew the big picture. When writing code to solve a particular problem there was no ambiguity about the business nature of that problem. Even though we did not always have written specs, we could ask business users a question anytime, often yelling it across the floor. I could even talk to executives any time I felt like it: no appointment necessary. c) Immediate feedback. Once we implemented a solution and made the business users happy, they immediately let us know; we (the programmers) became the heroes of the moment. d) No red tape. I could always buy any tools I deemed necessary, and design solutions the way my professional judgment dictated. e) Flexibility. If I had a mid-day dental appointment near my house rather than near the office, I would send an email to the company: "FYI: I'm working from home today". As long as one of the 3 IT guys was on the floor (to help traders in case their monitors went dark), they did not care where the other 2 were. So the question becomes: how valuable is the Joel Test? Why bother with it?

    Read the article

  • In a multi-domain forest, what EXACTLY happens when some, but not all, of the Infrastructure Masters are on Global Catalogs?

    - by MDMarra
    There are plenty of TechNet articles, like this one, that say that phantom objects don't get updated if an Infrastructure Master is also a Global Catalog, but beyond that there isn't a lot of in-depth information on what actually happens in this configuration. Imagine a configuration like this:

        |--------------|
        | example.com  |
        |              |
        | dedicated IM |
        |--------------|
                |
                |
        |-------------------|
        | child.example.com |
        |                   |
        | IM on a GC        |
        |-------------------|

    Here child has two DCs that are both global catalogs, meaning that the Infrastructure Master role is on a GC, and example has three DCs with the Infrastructure Master role on a DC that is not a GC. I understand that it's usually best to just make everything a GC and not have to worry about this sort of thing, but assuming that's not the case - what is the exact error behavior that can be expected from a setup like this, and which domain(s) would this behavior manifest in? The child or the parent?

    Read the article

  • Naming methods that do the same thing but return different types

    - by Konstantin Ð.
    Let's assume that I'm extending a graphical file chooser class (JFileChooser). This class has methods which display the file chooser dialog and return a status signature in the form of an int: APPROVE_OPTION if the user selects a file and hits Open/Save, CANCEL_OPTION if the user hits Cancel, and ERROR_OPTION if something goes wrong. These methods are called showDialog(). I find this cumbersome, so I decide to make another method that returns a File object: in the case of APPROVE_OPTION, it returns the file selected by the user; otherwise, it returns null. This is where I run into a problem: would it be okay for me to keep the showDialog() name, even though methods with that name - and a different return type - already exist? To top it off, my method takes an additional parameter: a File denoting the directory in which the file chooser should start. My question to you: Is it okay to give a method the same name as a superclass method if they return different types? Or would that be confusing to API users? (If so, what other name could I use?) Alternatively, should I keep the name and change the return type so it matches that of the other methods?
    public int showDialog(Component parent, String approveButtonText) // Superclass method
    public File showDialog(Component parent, File location) // My method

    Read the article

  • OBJECT_Name parameters and dbid

    - by steveh99999
    If you've been using SQL Server for a long time, you may have been used to using the OBJECT_NAME system function in the past - it is especially useful for converting table IDs into table names when querying sysobjects and sysindexes. However, if you're an old-school DBA, did you know that since SQL 2005 Service Pack 2 it accepts a second parameter, database_id? For example, this can be used to summarize some useful information from sys.dm_exec_query_stats. When reviewing SQL Server performance, it can be useful to look at the most heavily used stored procedures rather than inefficient but less frequently used procedures. Here's a query to summarize performance data on the most heavily used stored procedures across all databases on a server:
    SELECT TOP 20
        DENSE_RANK() OVER (ORDER BY SUM(execution_count) DESC) AS rank,
        OBJECT_NAME(qt.objectid, qt.dbid) AS 'proc name',
        (CASE WHEN qt.dbid = 32767 THEN 'mssqlresource' ELSE DB_NAME(qt.dbid) END) AS 'Database',
        OBJECT_SCHEMA_NAME(qt.objectid, qt.dbid) AS 'schema',
        SUM(execution_count) AS 'TotalExecutions',
        SUM(total_worker_time) AS 'TotalCPUTimeMS',
        SUM(total_elapsed_time) AS 'TotalRunTimeMS',
        SUM(total_logical_reads) AS 'TotalLogicalReads',
        SUM(total_logical_writes) AS 'TotalLogicalWrites',
        MIN(creation_time) AS 'earliestPlan',
        MAX(last_execution_time) AS 'lastExecutionTime'
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
    WHERE OBJECT_NAME(qt.objectid, qt.dbid) IS NOT NULL
    GROUP BY OBJECT_NAME(qt.objectid, qt.dbid), qt.dbid, OBJECT_SCHEMA_NAME(qt.objectid, qt.dbid)

    Read the article

  • Alert for Forms customers running Oracle Forms 10g

    - by Grant Ronald
    Doesn’t time fly!  While you might have been happily running your Forms 10g applications for about 5 years or so now, the end of premier support is creeping up and you need to start planning for a move to Oracle Forms 11g. The premier support end date is December 31st 2011, and this is documented in Note 1290974.1, available from MOS. So how much of an impact is this going to be?  Maybe not as much as you think.  While Forms 11g is based on WebLogic Server (WLS), 10g was based on OC4J.  That in itself doesn’t impact Forms much.  In most cases it will simply be a recompile of your Forms source files and a redeploy on WLS 11g. The Forms Builder is the same in 11g as in 10g, although it is currently not available as a separate download from the main middleware bundle.  You can also look at the WLS Basic license option, which means you shouldn’t have to shell out on upgrading to a WLS Suite license option. So what’s the proof that this is a relatively straightforward upgrade?  Well, we’ve had a big uptake of Forms 11g already (which has itself been out for over 2 years).  Read about BT Expedite’s upgrade, where “The upgrade of Forms from Oracle Forms 10g to Oracle Forms 11g was relatively simple and on the whole, was just a recompilation”.  Or AMEC, where “This has been one of the easiest Forms conversions we’ve ever done and it was a simple recompile in all cases”. So if you are on 10g (or even earlier versions) I’d strongly suggest starting your planning for an upgrade to 11g now. As always, if you have any questions about this you can post on the OTN forums.

    Read the article

  • How to disable Apache http compression (mod_deflate) when SSL stream is compressed

    - by Mohammad Ali
    I found that Google Chrome supports SSL compression and Firefox should support it soon. I'm trying to configure Apache to disable HTTP compression when SSL-level compression is used, to avoid the CPU overhead of compressing twice, with the configuration option: SetEnvIf SSL_COMPRESS_METHOD DEFLATE no-gzip While the custom log (using %{SSL_COMPRESS_METHOD}x) shows that the SSL-layer compression method is DEFLATE, the above option did not work and the HTTP response content is still being compressed by Apache. I had to use the option: BrowserMatchNoCase ".Chrome." no-gzip I would prefer a more general method, in case other browsers support SSL compression or someone has a version of Chrome that does not support it.

    Read the article

  • Manage Sending 2010 Documents to the Web with Office Upload Center

    - by Mysticgeek
    One of the main new features being touted in Office 2010 is the ability to upload documents to the Web for sharing and collaboration. Today we look at using the Office Upload Center to help manage your uploaded documents. Microsoft Office Upload Center: When you upload an Office 2010 document to the web, a handy tool for managing it is the Office Upload Center. It’s a way to see what is being uploaded or what might have failed to reach the servers, and it lets you know if a document failed to upload for some reason. In this case it looks like incorrect credentials were entered when signing into Windows Live. Click the Resolve button to get a list of actions you can take to get things corrected. You can access the Upload Center from the icon which appears in the System Tray when uploading documents. Right-click the icon to control notifications, pause uploads, and access its settings. In the Settings section you can choose how Upload Center displays notifications, select the number of days to keep files in the cache, and delete currently cached files. If you find yourself uploading several documents to the web during the day, the Office Upload Center is a nice feature for managing them.

    Read the article

  • Recent resources on Entity Framework 4

    - by Eric Nelson
    I just posted on the bits you need to install to explore all the features of Entity Framework 4 with the Visual Studio 2010 RC. I’ve also had a quick look (March 12th 2010) to see what new resources are out there on EF4. They appear a little thin on the ground – but there are some gems. The following all caught my attention:
    Julie Lerman has published 2 how-to videos on EF4 on pluralsight.com. You need to create a free guest pass to watch them.
    Getting Started with Entity Framework 4.0 – a session given at Cairo CodeCamp 2010. This includes ppt and demos.
    Entity Framework 4 providers – read through the comments.
    What’s new with Entity Framework in Visual Studio 2010 RC.
    Extending the design surface of EF4 using the Extension Starter Kit.
    Persistence Ignorance and EF4 on geekSpeak on Channel 9 (poor audio IMHO – I gave up).
    First of a series of posts on EF4.
    How to stop your DBA having a heart attack with EF4, from Simon Sabin in the UK. This includes ppt and demos.
    And the biggy: you no longer have to depend on SQL Profiler to keep an eye on the generated SQL. There is now a commercial profiler for Entity Framework. I have yet to try it – but I listened to a .NET Rocks podcast which made it sound great. It is “hidden” in a session on DSLs in Boo –> Oren Eini on creating DSLs in Boo. This is a much richer experience than you would get from SQL Profiler – matching the SQL to the .NET code. And finally, a momentous #fail to … drum roll … the Visual Studio 2010 and .NET Framework 4 Training Kit Feb release. This just contains one ppt on EF4 – and it is not even a good one. Real shame. P.S. I will update the 101 EF4 Resources with the above … but post-DevWeek, in case I find some more goodies. Related Links: 101 EF4 Resources

    Read the article

  • windows 7 64bit rc with SSD saving problem

    - by Tariq
    Hi all, I have this problem where, in any application, 95% of the time when I try to save, it pops up a "Save As" dialogue. And if I try to select the original file I get an error popup: "The operation could not be completed". I have Windows 7 RC 64-bit with an X25-M SSD. I have upgraded the firmware for the SSD just in case, and still the problem persists. Has anyone come across this before or know the cause? Or is there some indication I should be looking for in the event logs? Thanks

    Read the article

  • Ubuntu: Take actions when system temperature gets too high

    - by Josh
    One of the CPU fans on my Compaq Presario laptop running Ubuntu 9.10 seems to have bitten the dust. The fan is deep within the case and I intend to replace the laptop in the next 6 months, so it's not worth replacing the fan. I have the laptop on a cooling pad and most of the time the system is fine, with CPU temps around 90°-110°F. Occasionally, however, I'm seeing random lockups which I believe are due to the system overheating. How can I configure the system to: lower the CPU speed when the temperature reaches a certain level (e.g. 110°F), and shut down the system when the temperature reaches a critical level? (And what would that be? 130°?)
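
    A minimal sketch of the sort of watchdog being asked for, polling a sysfs thermal zone and acting at two thresholds (the 110°F/130°F figures from the question converted to roughly 43°C/54°C). The sensor path, the cpufreq-set and shutdown commands, and the thresholds themselves are assumptions to adapt, not a known-good recipe for this particular laptop.

        import subprocess
        import time

        ZONE = "/sys/class/thermal/thermal_zone0/temp"   # assumed sensor; reports millidegrees Celsius
        THROTTLE_C = 43    # roughly 110 F
        SHUTDOWN_C = 54    # roughly 130 F

        def read_temp_c() -> float:
            with open(ZONE) as fh:
                return int(fh.read().strip()) / 1000.0

        while True:
            temp = read_temp_c()
            if temp >= SHUTDOWN_C:
                subprocess.call(["shutdown", "-h", "now"])      # last resort: power off cleanly
                break
            elif temp >= THROTTLE_C:
                # Drop to the power-saving governor (needs the cpufrequtils package and root).
                subprocess.call(["cpufreq-set", "-r", "-g", "powersave"])
            time.sleep(30)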

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer of the potential hazards and how to avoid them. You can be hit by a badly performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics used in determining the best way of executing a SQL query might be (and they most certainly are complex), the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles.
    A simplified example. We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing, which isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without the OPTION (RECOMPILE) hint, it is doing well.
    What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Optimizer with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion.
    What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query optimizer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables, so how could it do any better?
Well, it behaves as if it were doing a recompile. An explicit recompile adds no value at all (we just go up to 45,000 rows since we know the bigger picture now). Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the Table Variable and the temporary table. The table variable is faster up to 8,000 rows, and then there isn't much in it up to 100,000 rows. Past the 8,000-row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that OPTION (RECOMPILE) hint.
Count Dracula and the Horror Join. These tables of integers provide a rather unreal example, so let’s try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable Heap and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’, i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey.
If both tables contain a Primary Key on the columns we join on, and both are Table Variables, it took 33 ms.
If one table contains a Primary Key, and the other is a heap, and both are Table Variables, it took 46 ms.
If both Table Variables use a unique constraint, then the query takes 36 ms.
If neither table contains a Primary Key and both are Table Variables, it took 116,383 ms. Yes, nearly two minutes!!
If both tables contain a Primary Key, one is a Table Variable and the other is a temporary table, it took 113 ms.
If one table contains a Primary Key, and both are Temporary Tables, it took 56 ms.
If both tables are temporary tables and both have primary keys, it took 46 ms.
Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use OPTION (RECOMPILE)? If you take the rogue query and add the hint, then suddenly the query drops its time down to 76 ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans. So where have we got to? Without drilling down into the minutiae of the execution plans, we can begin to create a hypothesis.
If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use OPTION (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble - well, slow queries - unless you add the hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context).
Kevin’s Skew. In describing the table size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100,000 rows in which the key column was one number (1). To this were added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2, it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows, it adopted exactly the same execution plan as for the temporary table, and then all was well. Undeniably, real data does occasionally cause problems for the performance of joins on Table Variables due to extreme skew in the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, the plans were identical across a huge range.
Conclusions. Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in TempDB when they are no longer required; they require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘heaps’, unindexed. Beware, however, since, as the number of rows increases, joins on Table Variable heaps can easily become saddled with very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Table Variables require some awareness by the developer of the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring or fine-tuning of your code, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables.
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.

    Read the article

  • Joining an Active Directory domain using netdom

    - by Cheezo
    I have a simple script to join an AD domain and rename the computer. When I execute these commands directly at the CLI, it works fine. When I execute the same via a batch file, I get an error saying "The network path was not found". I am running as Administrator with full privileges. I have googled around the Microsoft forums, but my case is unique because it works from the CLI and not from the batch file:
    netdom join %%computername%% /domain:OPSCODEDEMO.COM /userd:Administrator /passwordd:xxx
    netdom renamecomputer %%computername%% /NewName:%hostname% /Force
    The environment is Windows 2k8 R2 SP1 running on Ninefold Cloud (XenServer).

    Read the article

  • Tips for debugging Samba performance?

    - by j-g-faustus
    Samba gives me 24 MB/s read and 44 MB/s write, while ftp gives 97 and 112 MB/s under the same circumstances. The documentation says that "Generally, you should find that Samba performs similarly to ftp at raw transfer speed." In my case it clearly doesn't. Where can I find tips on how to debug Samba performance? Or, alternatively, tips for replacing Samba with something else? (I can't use ftp, unfortunately, as I need something that can be used with rsync/rsnapshot.) More details: Both computers are running Ubuntu 10.10 (I'm using Samba because I have a Mac as well). The Samba share is on the local home network, mounted as:
    $ mount
    ...
    //server.local/share/ on /mnt/share type cifs (rw,mand)
    Samba performance was tested by copying (cp) a single file of ~4 GB to and from the share, using time for timing and calculating the transfer speed by hand. The ftp performance figures are the numbers reported by the ftp client for get/put of the same file. iperf gives a network speed of ~900 Mbit/s. bonnie++ gives disk speeds of 200 MB/s on both sides, for block reads as well as block writes. I tried changing the parameters suggested in the performance tuning HOWTO (read/write raw, read size, socket options); most of them made little to no difference. (The one that made a difference caused the write speed to drop by 50%.)
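
    When chasing a gap like this, it can help to time the raw sequential read path through the mount yourself with a controlled block size and compare the result against the same file read locally on the server; if throughput changes a lot between runs with different block sizes, the request size used over the wire (the rsize/wsize cifs mount options) becomes a reasonable suspect. A minimal sketch; the file path on the share is a placeholder.

        import time

        def read_throughput(path: str, block_size: int = 1024 * 1024) -> float:
            """Sequentially read a file and return the observed throughput in MB/s."""
            total = 0
            start = time.time()
            with open(path, "rb", buffering=0) as fh:
                while True:
                    chunk = fh.read(block_size)
                    if not chunk:
                        break
                    total += len(chunk)
            return total / (1024 * 1024) / (time.time() - start)

        # Placeholder path: a large test file on the CIFS mount. Run against a file that is
        # not already in the client's page cache (or drop caches between runs) to get a fair number.
        print("cifs read: %.1f MB/s" % read_throughput("/mnt/share/testfile.bin"))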

    Read the article

  • creating proper vpn tunnel, when both LANs have the same addressing

    - by meta
    I was following this tutorial, http://wiki.debian.org/OpenVPN#TLS-enabled_VPN, and this one, http://users.telenet.be/mydotcom/howto/linux/openvpn.htm, to create an OpenVPN connection to my remote LAN. But both examples assume that the two LANs have different address ranges (i.e. 192.168.10.0/24 and 192.168.20.0/24; check out this image: i.stack.imgur.com/2eUSm.png). Unfortunately, in my case both the local and the remote LAN use 192.168.1.0/24. I am able to connect directly to the OpenVPN server (I can ping it and log in with ssh), but I can't see other devices on the remote LAN (let alone access them via a browser, which was the point in the first place). Could the addressing issue be the reason for that? If not, how do I define routes so that I can ping other devices on the remote LAN?

    Read the article
