Search Results

Search found 1341 results on 54 pages for 'rob goodwin'.

Page 12/54 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • simple collision detection

    - by Rob
    Imagine 2 squares sitting side by side, both level with the ground: http://img19.imageshack.us/img19/8085/sqaures2.jpg A simple way to detect if one is hitting the other is to compare the location of each side. They are touching if ALL of the following are NOT true: The right square's left side is to the right of the left square's right side. The right square's right side is to the left of the left square's left side. The right square's bottom side is above the left square's top side. The right square's top side is below the left square's bottom side. If any of those are true, the squares are not touching. If all of those are false, the squares are touching. But consider a case like this, where one square is at a 45 degree angle: http://img189.imageshack.us/img189/4236/squaresb.jpg Is there an equally simple way to determine if those squares are touching?
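
    The four comparisons above translate almost directly into code. A minimal sketch (assuming each square is described by its left/right/bottom/top edge coordinates, with y increasing upwards):

        def aabb_touching(a, b):
            """Axis-aligned overlap test; a and b are dicts with 'left', 'right', 'bottom', 'top' edges."""
            separated = (
                b["left"] > a["right"] or   # right square entirely to the right of the left one
                b["right"] < a["left"] or   # right square entirely to the left of the left one
                b["bottom"] > a["top"] or   # right square entirely above the left one
                b["top"] < a["bottom"]      # right square entirely below the left one
            )
            return not separated

        # Two unit squares sharing an edge count as touching
        left_sq = {"left": 0, "right": 1, "bottom": 0, "top": 1}
        right_sq = {"left": 1, "right": 2, "bottom": 0, "top": 1}
        print(aabb_touching(left_sq, right_sq))  # True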

    Read the article

  • When will microsoft release IE9? [closed]

    - by Rob McKinnon
    I was one of those people who tried the IE9 beta early on, and it was terribly buggy. It still functions horribly. IMO any Windows release after 5 (2k, NT, XP) absolutely sux the life out of my resources compared to RPM Linux (openSUSE), until at least Service Pack 2. MS is trying to push HTML5/CSS3 and they cannot pass the Acid 3 test. I am wondering when IE9 will become functional. I am a big supporter of MS applications. I have a great amount of adoration for IIS7 because it has support for CGI/PHP. Is IE9 going to be released before 2012?

    Read the article

  • Script/tool to import series of snapshots, each being a new edition, into GIT, populating source tree?

    - by Rob
    I've developed code locally and taken a fairly regular snapshot whenever I reach a significant point in development, e.g. a working build. So I have a long-ish list of about 40 folders, each folder being a snapshot, in ascending YYYYMMDD date order, e.g.: 20100523 20100614 20100721 20100722 20100809 20100901 20101001 20101003 20101104 20101119 20101203 20101218 20110102 I'm looking for a script to import each of these snapshots into GIT. The end result should be that the latest code is the same as the last snapshot, and the other editions are accessible and numbered as before. Some other requirements: the latest edition should not be cumulative of the previous snapshots, i.e., files that appeared in older snapshots but which don't appear in later ones (e.g. due to refactoring etc.) should not appear in the latest edition of the code. Meanwhile, there should be continuity between files that do persist between snapshots: I would like GIT to know that there are previous editions of these files and not treat them as brand new files within each edition. Some background about my aim: I need to formally revision control this work rather than keep local private snapshot copies. I plan to release this work as open source, so version control would be highly recommended. I am evaluating some of the current popular version control systems (Subversion and GIT), BUT I definitely need a working solution in GIT as well as Subversion. I'm not looking to be persuaded to use one particular tool; I need a solution for each tool I am considering. (I have posted an answer separately for each tool so separate camps of folks who have expertise in GIT and Subversion will be able to give focused answers on one or the other.) The same but separate question for Subversion: Script/tool to import series of snapshots, each being a new revision, into Subversion, populating source tree?
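
    A minimal sketch of the import loop (assuming the snapshot folders sit under a snapshots/ directory named YYYYMMDD, that git is on the PATH, and that "imported-repo" is a placeholder for the target repository):

        import os, shutil, subprocess

        SNAPSHOTS = "snapshots"      # hypothetical directory holding 20100523, 20100614, ...
        REPO = "imported-repo"       # placeholder target repository

        subprocess.check_call(["git", "init", REPO])

        for snap in sorted(os.listdir(SNAPSHOTS)):   # YYYYMMDD names sort chronologically
            src = os.path.join(SNAPSHOTS, snap)
            # Wipe the working tree (except .git) so files dropped between snapshots disappear
            for entry in os.listdir(REPO):
                if entry == ".git":
                    continue
                path = os.path.join(REPO, entry)
                shutil.rmtree(path) if os.path.isdir(path) else os.remove(path)
            # Copy this snapshot's contents into the working tree
            for entry in os.listdir(src):
                s, d = os.path.join(src, entry), os.path.join(REPO, entry)
                shutil.copytree(s, d) if os.path.isdir(s) else shutil.copy2(s, d)
            # Stage everything, including deletions, and commit with the snapshot date
            subprocess.check_call(["git", "add", "-A"], cwd=REPO)
            date = "%s-%s-%s 12:00:00" % (snap[:4], snap[4:6], snap[6:8])
            subprocess.check_call(
                ["git", "commit", "--allow-empty", "-m", "Snapshot %s" % snap, "--date", date],
                cwd=REPO, env=dict(os.environ, GIT_COMMITTER_DATE=date))

    Because the working tree is cleared before each copy, git add -A records deletions as well as additions, so the final commit matches the last snapshot exactly, while files that persist between snapshots keep a continuous history under their unchanged paths.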

    Read the article

  • Augmenting functionality of subclasses without code duplication in C++

    - by Rob W
    I have to add common functionality to some classes that share the same superclass, preferably without bloating the superclass. The simplified inheritance chain looks like this: Element -> HTMLElement -> HTMLAnchorElement Element -> SVGElement -> SVGAElement The doSomething() method on Element is a no-op by default, but there are some subclasses that need an actual implementation that requires some extra overridden methods and instance members. I cannot put a full implementation of doSomething() in Element because 1) it is only relevant for some of the subclasses, 2) its implementation has a performance impact and 3) it depends on a method that could be overridden by a class in the inheritance chain between the superclass and a subclass, e.g. SVGElement in my example. Especially because of the third point, I wanted to solve the problem using a template class, as follows (it is a kind of decorator for classes): struct Element { virtual void doSomething() {} }; // T should be an instance of Element template<class T> struct AugmentedElement : public T { // doSomething is expensive and uses T virtual void doSomething() override {} // Used by doSomething virtual bool shouldDoSomething() = 0; }; class SVGElement : public Element { /* ... */ }; class SVGAElement : public AugmentedElement<SVGElement> { // some non-trivial check bool shouldDoSomething() { /* ... */ return true; } }; // Similarly for HTMLAElement and others I looked around (in the existing (huge) codebase and on the internet), but didn't find any similar code snippets, let alone an evaluation of the effectiveness and pitfalls of this approach. Is my design the right way to go, or is there a better way to add common functionality to some subclasses of a given superclass?

    Read the article

  • IPv6 tunnels - any easy way to turn them on and off?

    - by Rob Hoare
    I've set up a tunnelbroker.net (Hurricane Electric) IPv6 tunnel from my laptop running 12.04. Works fine, and allows me to test the dual-stack configuration on my remote webservers etc. until native IPv6 is available on my ISP. However, there are times when I don't want the tunnel. For example, if I'm accessing something that requires an IPv4 address in my own country rather than the Tunnelbroker tunnel endpoint, or if I'm away from the local IPv4 tunnel endpoint, or if I simply want to test without IPv6. Is there a simple way to disable and then re-enable the IPv6 tunnel, without rebooting? For context, here's what's in my /etc/network/interfaces (NNN replaces numbers): auto he-ipv6 iface he-ipv6 inet6 v4tunnel endpoint 216.218.NNN.NNN address 2001:470:NNN:NNN::2 netmask 64 up ip -6 route add default dev he-ipv6 down ip -6 route del default dev he-ipv6 Is there a network manager application (GUI or command line) to selectively enable/disable parts of /etc/network/interfaces, or IPv6 in general? I found that even after commenting that out (and reloading networking) it's tough to get the IPv6 to go away. A "tunnel on/off" button in networking would be great, like using a VPN.
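
    Since the tunnel is an ordinary stanza in /etc/network/interfaces, one low-tech option is a thin wrapper around ifdown/ifup for just that interface; a sketch (assuming the he-ipv6 name from the config above, and that it is run with sudo):

        import subprocess, sys

        IFACE = "he-ipv6"   # the stanza name from /etc/network/interfaces

        def tunnel(action):
            """Bring the tunnel stanza up or down by shelling out to ifup/ifdown."""
            cmd = "ifup" if action == "up" else "ifdown"
            subprocess.check_call([cmd, IFACE])

        if __name__ == "__main__":
            tunnel(sys.argv[1] if len(sys.argv) > 1 else "down")

    Usage would be along the lines of "sudo python toggle-tunnel.py down" and "sudo python toggle-tunnel.py up"; whether this fully clears the default IPv6 route depends on the up/down lines in the stanza, so it is a starting point rather than a guaranteed fix.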

    Read the article

  • Broken links in content reports when tracking subdomains with Google Analytics

    - by Rob Sobers
    I have a tracking code that I use on my main site and my blog, which is on a subdomain: www.example.com blog.example.com I have a single profile in Google Analytics. I use advanced segments to look at traffic to the main site vs. traffic to the blog. Problem 1: When I'm browsing my content reports under Standard Reporting, the "Page" column doesn't show the top-level or sub-domain, so I can't differentiate www.example.com/index.html from blog.example.com/index.html easily. According to the docs, this filter is supposed to make GA prepend the hostname to the page URL in your content reports, but it doesn't seem to work. Problem 2: When I click on the little "Open in new window" icon next to a given page in a content report line, it always assumes the page lives on www.example.com, so I get 404s when the page is actually on blog.example.com. Is there a good solution for these subdomain tracking problems?

    Read the article

  • Fetching Latitude and Longitude Co-ordinates for Addresses using PowerShell

    - by Rob Farley
    Regular readers of my blog (at sqlblog.com – please let me know if you’re reading this elsewhere) may be aware that I’ve been doing more and more with spatial data recently. With the now-available SQL Server 2008 R2 Reporting Services including maps, it’s a topic that interests many people. Interestingly though, although many people have plenty of addresses in their various databases (whether they be CRM systems, HR systems or whatever), my experience shows that many people do not store the latitude...(read more)

    Read the article

  • PowerShell a constant in a changing world

    - by Rob Addis
    I've been programming for about 20 years now; some of my friends have been at it for over 30. I have read many, many manuals and yes, it's not my favourite pastime. So 10 years ago I made a promise to myself to try and only learn about products which have long lifetimes. I immediately gave up programming GUIs and concentrated on back end development, as I decided that these products (Oracle, MQ Series, SQL Server, BizTalk and later WCF, WF) have longer lifetimes and smaller incremental changes than front end products. 10 years ago I had no idea how good a decision that would turn out to be. There have been so many different Microsoft products for the front end in that time; multiple versions of Windows Forms, FrontPage, HTML, JavaScript, ASP.NET, Silverlight, SharePoint, WPF and now, hopefully a stayer, Metro. I remember being at a Microsoft conference in 2006 when Martin Fowler told a crowd of developers (I'm paraphrasing) "If you don't like change then you're in the wrong business!". Well, I've been in the business for 20 years and yes, I'm a little resistant to change. I like my investment in reading manuals and getting certified to be time well spent! Over the last 2 years I have been writing A LOT of PowerShell script, and I think there is a good chance this product will still be around and be used for new development in 10 years; learning it is a good investment.

    Read the article

  • Using RegEx's in Multi-Channel Funnels in Google Analytics

    - by Rob H
    For some reason, I can't get my multi-channel funnel, which uses RegExes in the path steps, to function -- it keeps coming back with no data. There are a few variables which may be holding things up, but I can't figure out the origin of the problem, nor a solution. Here's the situation: the funnel is tracking conversions, defined as when a user completes 4 steps to sign up. Steps are not "required". The default URL is set to https://example.com. There is a 302 redirect set up on our site that leads from http://example.com to https://example.com. Within the funnel, steps switch from non-secure pages (unless the browser is set to secure browsing) to secure pages once the user moves from the landing page to the second page of the sign-up process (an account placeholder has been created). The URL at that point contains a publisher-number variable within (but not at the end of) the URL. My RegExes are all properly written, as tested on rubular.com.
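
    One way to narrow down where the funnel breaks is to replay each step's pattern against the exact paths Analytics records (the request URI only, with no protocol or hostname, and with the publisher number still in place). A sketch with made-up patterns and URLs standing in for the real ones (Python's regex flavour is close enough to test simple patterns like these, though the two engines are not identical):

        import re

        # Hypothetical funnel-step patterns and recorded page paths -- substitute your own
        steps = [
            r"^/signup/?$",
            r"^/signup/account/\d+/details$",   # publisher number inside the path
            r"^/signup/account/\d+/confirm$",
            r"^/signup/complete$",
        ]
        recorded_paths = [
            "/signup",
            "/signup/account/12345/details",
            "/signup/account/12345/confirm",
            "/signup/complete",
        ]

        for pattern, path in zip(steps, recorded_paths):
            ok = re.search(pattern, path) is not None
            print("%-35s %-35s %s" % (pattern, path, "match" if ok else "NO MATCH"))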

    Read the article

  • Cleaning a dataset of song data - what sort of problem is this?

    - by Rob Lourens
    I have a set of data about songs. Each entry is a line of text which includes the artist name, song title, and some extra text. Some entries are only "extra text". My goal is to resolve as many of these as possible to songs on Spotify using their web API. My strategy so far has been to search for the entry via the API - if there are no results, apply a transformation such as "remove all text between ( )" and search again. I have a list of heuristics and I've had reasonable success with this but as the code gets more and more convoluted I keep thinking there must be a more generic and consistent way. I don't know where to look - any suggestions for what to try, topics to study, buzzwords to google?
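
    A more uniform way to express those heuristics is an ordered pipeline of named transforms, applied one at a time until a lookup succeeds; a minimal sketch, with search_spotify standing in for the real web API call:

        import re

        def search_spotify(query):
            """Stand-in for the actual Spotify search call; return a match or None."""
            return None

        # Ordered transforms -- each strips a little more noise from the raw entry
        TRANSFORMS = [
            ("as-is",             lambda s: s),
            ("drop (...) groups", lambda s: re.sub(r"\([^)]*\)", " ", s)),
            ("drop [...] groups", lambda s: re.sub(r"\[[^\]]*\]", " ", s)),
            ("drop feat./ft.",    lambda s: re.sub(r"\b(feat|ft)\.?\s.*$", " ", s, flags=re.I)),
            ("strip punctuation", lambda s: re.sub(r"[^\w\s]", " ", s)),
        ]

        def resolve(entry):
            """Try each transform in order; return (transform_name, result) or None."""
            for name, fn in TRANSFORMS:
                query = " ".join(fn(entry).split())   # also collapse whitespace
                result = search_spotify(query)
                if result:
                    return name, result
            return None

    Keeping the transforms as data rather than nested if-statements makes it easy to log which heuristic resolved each entry, and to reorder or prune the ones that rarely help.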

    Read the article

  • What is an effective way to convert a shared memory-mapped system to another data access model?

    - by Rob Jones
    I have a code base that is designed around shared memory. Each process that needs to access the memory maps it into its own address space. The data structures in the shared memory are directly accessed, that is, there is no API. For example: Assume the following: typedef struct { int x; int y; struct { int a; int b; } z; } myStruct; myStruct s; Then a process might access this structure as: myStruct *s = mapGlobalMem(); And use it as: int tmpX = s->x; The majority of the information in the global structure is configuration information that is set once and read many times. I would like to store this information in a database and develop an API to access the database. The problem is, these references are sprinkled throughout the code. I need a way to parse the code and identify global structure references that will need to be refactored. I've looked into using ANTLR to create a parser that will identify references to a small set of structures and enter them into a custom symbol table. I could then use this symbol table to identify which source files need to be refactored. It looks like a promising approach. What other approaches are there? Of course, I'm looking for a programmatic approach. There are far too many source files to examine each one visually. This is all ordinary ANSI C. Nothing else.
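
    If writing a grammar in ANTLR feels heavy, another option worth evaluating is libclang's Python bindings, which expose a real C AST without building a parser; a rough sketch (assuming the libclang bindings are installed, and filtering only by member name, which is coarse but a useful first pass):

        import clang.cindex
        from clang.cindex import CursorKind

        FIELDS = {"x", "y", "z", "a", "b"}   # field names of the shared-memory struct

        def find_member_refs(source_file, clang_args=()):
            """Print every member access whose name matches one of the struct's fields."""
            index = clang.cindex.Index.create()
            tu = index.parse(source_file, args=list(clang_args))
            for node in tu.cursor.walk_preorder():
                if node.kind == CursorKind.MEMBER_REF_EXPR and node.spelling in FIELDS:
                    loc = node.location
                    name = loc.file.name if loc.file else "<unknown>"
                    print("%s:%d:%d  ->%s" % (name, loc.line, loc.column, node.spelling))

        find_member_refs("some_module.c", ["-Iinclude"])   # hypothetical file and include path

    Tightening the filter to check the type of the base expression, rather than just the member name, would cut the false positives down to genuine references to the shared structure.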

    Read the article

  • Is nesting types considered bad practice?

    - by Rob Z
    As noted by the title, is nesting types (e.g. enumerated types or structures in a class) considered bad practice or not? When you run Code Analysis in Visual Studio it returns the following message, which implies it is: Warning 34 CA1034 : Microsoft.Design : Do not nest type 'ClassName.StructueName'. Alternatively, change its accessibility so that it is not externally visible. However, when I follow the recommendation of the Code Analysis I find that there tend to be a lot of structures and enumerated types floating around in the application that might only apply to a single class or would only be used with that class. As such, would it be appropriate to nest the types in that case, or is there a better way of doing it?

    Read the article

  • Is There A Security Risk With Users That Are Also Groups?

    - by Rob P.
    I know a little about users and groups; in the past I might have had a group like 'DBAS' or 'ADMINS' and I'd add individual users to each group... But I was surprised to learn I could add users to other users - as if they were groups. For example if my /etc/group contained the following: user1:x:12501: user2:x:12502:user1 admin:x:123:user2,jim,bob Since user2 is a member of the admin group, and user1 is a member of user2 - is user1 effectively an admin? If the admin group is in the sudoers file, can user1 use it as well? I've tried to simulate this and I haven't been able to do so as user1...but I'm not sure it's impossible. EDIT: SORRY - updated error in question.
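
    Rather than reasoning about /etc/group by eye, it can help to ask the system what a given account actually resolves to; a small sketch using only the standard library (user1 is the hypothetical account from the question):

        import grp, os, pwd

        def effective_groups(username):
            """Return the set of group names the system resolves for this user."""
            primary_gid = pwd.getpwnam(username).pw_gid
            gids = os.getgrouplist(username, primary_gid)   # primary plus supplementary groups
            return {grp.getgrgid(g).gr_name for g in gids}

        groups = effective_groups("user1")
        print(groups)
        print("admin" in groups)   # does user1 actually end up in 'admin'?

    On a stock Linux name-service setup the member list in /etc/group is flat, so membership does not normally cascade through one user's group into further groups, but the check above reports what your particular system really does.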

    Read the article

  • Is it time to add IPv6 access to my websites?

    - by Rob Hoare
    I have several dedicated servers and VPS servers, and some of those are at companies that have provided me with native IPv6 blocks (in addition to the IPv4 IP addresses). Does it currently make sense to point an AAAA record to an IPv6 address on my server, in addition to the A record pointing to the IPv4 address? This would be for (for example) the www subdomain. (the networking and web server software would be set up on the server to respond appropriately). A while ago I read that a small percentage of users (1 in a thousand?) would have slow or no access if a subdomain had both A and AAAA records because their networking software asked for one and got the other. Is that still the case, will adding an AAAA record inconvenience some users, or is the percentage already smaller and falling? In other words, is now the time to get around to adding native IPv6 support for a busy website aimed at the general public, or is it still too early?
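
    As a quick sanity check before and after publishing the AAAA record, the standard library can confirm which address families a hostname currently resolves to (replace www.example.com with the real hostname):

        import socket

        def address_families(hostname):
            """Return the set of address families ('IPv4'/'IPv6') the name resolves to."""
            families = set()
            for family, label in ((socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")):
                try:
                    socket.getaddrinfo(hostname, 80, family)
                    families.add(label)
                except socket.gaierror:
                    pass
            return families

        print(address_families("www.example.com"))   # ideally {'IPv4', 'IPv6'} once the AAAA is live

    Note that the result reflects whichever resolver the machine running the check uses, so it is a smoke test rather than a measurement of what every visitor will see.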

    Read the article

  • The new Google Analytics - what new useful features have you found?

    - by Rob
    If you don't know already, a new version of Google Analytics has just come out. On first view it doesn't seem like much of an improvement on the previous version. There's lots of linking to Google's social stats, but I'm yet to see the value of that. Also, it doesn't seem to make the best use of the important data; it's tending to push referral sites and keywords to the back and bring the less important data to the front. Is that a sign of things to come??? One feature I did find interesting was the visitors flow, as it shows the visitor's path through your site. What new features have you found useful/interesting?

    Read the article

  • Texture displays on Android emulator but not on device

    - by Rob
    I have written a simple UI which takes an image (256x256) and maps it to a rectangle. This works perfectly on the emulator however on the phone the texture does not show, I see only a white rectangle. This is my code: public void onSurfaceCreated(GL10 gl, EGLConfig config) { byteBuffer = ByteBuffer.allocateDirect(shape.length * 4); byteBuffer.order(ByteOrder.nativeOrder()); vertexBuffer = byteBuffer.asFloatBuffer(); vertexBuffer.put(cardshape); vertexBuffer.position(0); byteBuffer = ByteBuffer.allocateDirect(shape.length * 4); byteBuffer.order(ByteOrder.nativeOrder()); textureBuffer = byteBuffer.asFloatBuffer(); textureBuffer.put(textureshape); textureBuffer.position(0); // Set the background color to black ( rgba ). gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f); // Enable Smooth Shading, default not really needed. gl.glShadeModel(GL10.GL_SMOOTH); // Depth buffer setup. gl.glClearDepthf(1.0f); // Enables depth testing. gl.glEnable(GL10.GL_DEPTH_TEST); // The type of depth testing to do. gl.glDepthFunc(GL10.GL_LEQUAL); // Really nice perspective calculations. gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST); gl.glEnable(GL10.GL_TEXTURE_2D); loadGLTexture(gl); } public void onDrawFrame(GL10 gl) { gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); gl.glDisable(GL10.GL_DEPTH_TEST); gl.glMatrixMode(GL10.GL_PROJECTION); // Select Projection gl.glPushMatrix(); // Push The Matrix gl.glLoadIdentity(); // Reset The Matrix gl.glOrthof(0f, 480f, 0f, 800f, -1f, 1f); gl.glMatrixMode(GL10.GL_MODELVIEW); // Select Modelview Matrix gl.glPushMatrix(); // Push The Matrix gl.glLoadIdentity(); // Reset The Matrix gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glLoadIdentity(); gl.glTranslatef(card.x, card.y, 0.0f); gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]); //activates texture to be used now gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer); gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer); gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY); } public void onSurfaceChanged(GL10 gl, int width, int height) { // Sets the current view port to the new size. 
gl.glViewport(0, 0, width, height); // Select the projection matrix gl.glMatrixMode(GL10.GL_PROJECTION); // Reset the projection matrix gl.glLoadIdentity(); // Calculate the aspect ratio of the window GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100.0f); // Select the modelview matrix gl.glMatrixMode(GL10.GL_MODELVIEW); // Reset the modelview matrix gl.glLoadIdentity(); } public int[] texture = new int[1]; public void loadGLTexture(GL10 gl) { // loading texture Bitmap bitmap; bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.image); // generate one texture pointer gl.glGenTextures(0, texture, 0); //adds texture id to texture array // ...and bind it to our array gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]); //activates texture to be used now // create nearest filtered texture gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); // Use Android GLUtils to specify a two-dimensional texture image from our bitmap GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); // Clean up bitmap.recycle(); } As per many other similar issues and resolutions on the web, I have tried setting the minSdkVersion to 3, loading the bitmap via an input stream bitmap = BitmapFactory.decodeStream(is), setting BitmapFactory.Options.inScaled to false, putting the images in the nodpi folder and putting them in the raw folder, all of which didn't help. I'm not really sure what else to try.

    Read the article

  • EJIE usage of Oracle WebLogic Server and Oracle Coherence

    - by rob.misek
    Watch Mike Lehmann, Senior Director of Product Management from Oracle and Oscar Guadilla, Senior Architect from EJIE, Basque Government's IT Company, discuss EJIE's implementation of Oracle WebLogic Server and Oracle Coherence. Hear EJIE's history with Oracle WebLogic Server, how and why they are using it for its web application platform, common services, file services, and intranet and the benefits they are gleaning. In addition, hear how EJIE is using WebLogic JMS for document management common service integration in its Eco-government project. Watch from the beginning or jump to the details of their Coherence usage (10:15)

    Read the article

  • How to perform simple collision detection?

    - by Rob
    Imagine two squares sitting side by side, both level with the ground like so: A simple way to detect if one is hitting the other is to compare the location of each side. They are touching if all of the following are false: The right square's left side is to the right of the left square's right side. The right square's right side is to the left of the left square's left side. The right square's bottom side is above the left square's top side. The right square's top side is below the left square's bottom side. If any of those are true, the squares are not touching. But consider a case like this, where one square is at a 45 degree angle: Is there an equally simple way to determine if those squares are touching?
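
    For the rotated case there is a standard generalisation, the separating axis theorem: two convex shapes fail to overlap exactly when their projections onto some edge normal leave a gap. A compact sketch for convex polygons given as corner lists:

        def project(points, axis):
            """Project corners onto an axis; return (min, max) of the dot products."""
            dots = [x * axis[0] + y * axis[1] for x, y in points]
            return min(dots), max(dots)

        def polygons_touching(poly_a, poly_b):
            """Separating axis test for convex polygons (lists of (x, y) corners)."""
            for poly in (poly_a, poly_b):
                n = len(poly)
                for i in range(n):
                    x1, y1 = poly[i]
                    x2, y2 = poly[(i + 1) % n]
                    axis = (-(y2 - y1), x2 - x1)          # perpendicular to this edge
                    min_a, max_a = project(poly_a, axis)
                    min_b, max_b = project(poly_b, axis)
                    if max_a < min_b or max_b < min_a:    # a gap on this axis separates them
                        return False
            return True                                   # no separating axis found

        square = [(0, 0), (2, 0), (2, 2), (0, 2)]
        diamond = [(3, 1), (4, 2), (3, 3), (2, 2)]        # a square rotated 45 degrees
        print(polygons_touching(square, diamond))         # True: they meet at (2, 2)

    For axis-aligned squares this reduces to the four edge comparisons in the question, so it is a strict generalisation rather than a different technique.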

    Read the article

  • SQL Saturday #89 in Atlanta!

    - by Most Valuable Yak (Rob Volk)
    (Yeah yeah, technically it's in Alpharetta, but it's close enough.) Saturday…Saturday….Saturday…. September 17th.  TWO THOUSAND ELEVEN! OK, it's not a tractor pull, but it's even better:  FREE SQL SERVER TRAINING!  They have a bunch of great speakers lined up, and for some reason, me.  (Protip: be good friends with the program committee, have sufficient bribe funds, and if all else fails, lots of alcohol, drugs and a camera.  Ba-ZING!  You too can speak at SQL Saturday!) I will be presenting Revenge: The SQL! in a new and improved SQL Saturday themed presentation.  Actually, it's the same ol' presentation, I just updated the slide theme to match the new SQL Saturday website design.  (Yeah guys, thanks for changing that a month ago.  So much for coasting on the old format.) Of course, you have your choice of three other SQL Saturdays in other cities that day, but come on, you really want to go to this one. #sqlsat89 #sqlsaturday #sqlkilt #sqlpass

    Read the article

  • the application compiz has closed unexpectedly

    - by Rob
    I'm a newbie to Linux. I have recently installed the latest version of Ubuntu 12.10 desktop 32-bit on an HP Compaq d530 desktop PC. After a restart the login screen works fine, but once logged in, after 20-30 seconds I get a window prompt with the message "the application compiz has closed unexpectedly". I have sent the report, and after that I am stuck with just the wallpaper/screensaver. If anyone can help me out I would be grateful, thanks.

    Read the article

  • SQL Saturday #220 Atlanta May 2013!

    - by Most Valuable Yak (Rob Volk)
    If you love SQL Server training and are near the Atlanta area, or just love us so much you're willing to travel here, please come join us for: SQL SATURDAY #220! The main event is Saturday, May 18.  The event is free, with a $10.00 lunch fee.  The main page has more details here: http://www.sqlsaturday.com/220/eventhome.aspx We are also offering pre-conference sessions on Friday, May 17, by 5 world-renowned presenters: Denny Cherry: SQL Server Security Register! Site Twitter Adam Machanic: Surfing the Multicore Wave: Processors, Parallelism, and Performance Register! Site Twitter Stacia Misner: Languages of BI Register! Site Twitter Bill Pearson: Practical Self-Service BI with PowerPivot for Excel Register! Site Twitter Eddie Wuerch: The DBA Skills Upgrade Toolkit Register! Site Twitter         We have an early bird registration price of $119 until noon EST Friday, March 22.  After that the price goes to $149, a STEAL when you compare it to the PASS Summit price. :) Please click on the links to register and for more information.  You can also follow the hash tag #SQLSatATL on Twitter for more news about this event. Can't wait to see you all there!

    Read the article

  • Summit reflections

    - by Rob Farley
    So far, my three PASS Summit experiences have been notably different to each other. My first, I wasn’t on the board and I gave two regular sessions and a Lightning Talk in which I told jokes. My second, I was a board advisor, and I delivered a precon, a spotlight and a Lightning Talk in which I sang. My third (last week), I was a full board director, and I didn’t present at all. Let’s not talk about next year. I’m not sure there are many options left. This year, I noticed that a lot more people recognised me and said hello. I guess that’s potentially because of the singing last year, but could also be because board elections can bring a fair bit of attention, and because of the effort I’ve put in through things like 24HOP... Yeah, ok. It’d be the singing. My approach was very different though. I was watching things through different eyes. I looked for the things that seemed to be working and the things that didn’t. I had staff there again, and was curious to know how their things were working out. I knew a lot more about what was going on behind the scenes to make various things happen, and although very little about the Summit was actually my responsibility (based on not having that portfolio), my perspective had moved considerably. Before the Summit started, Board Members had been given notebooks – an idea Tom (who heads up PASS’ marketing) had come up with after being inspired by seeing Bill walk around with a notebook. The plan was to take notes about feedback we got from people. It was a good thing, and the notebook forms a nice pair with the SQLBits one I got a couple of years ago when I last spoke there. I think one of the biggest impacts of this was that during the first keynote, Bill told everyone present about the notebooks. This set a tone of “we’re listening”, and a number of people were definitely keen to tell us things that would cause us to pull out our notebooks. PASSTV was a new thing this year. Justin, the host, featured on the couch and talked a lot of people about a lot of things, including me (he talked to me about a lot of things, I don’t think he talked to a lot people about me). Reaching people through online methods is something which interests me a lot – it has huge potential, and I love the idea of being able to broadcast to people who are unable to attend in person. I’m keen to see how this medium can be developed over time. People who know me will know that I’m a keen advocate of certification – I've been SQL certified since version 6.5, and have even been involved in creating exams. However, I don’t believe in studying for exams. I think training is worthwhile for learning new skills, but the goal should be on learning those skills, not on passing an exam. Exams should be for proving that the skills are there, not a goal in themselves. The PASS Summit is an excellent place to take exams though, and with an attitude of professional development throughout the event, why not? So I did. I wasn’t expecting to take one, but I was persuaded and took the MCM Knowledge Exam. I hadn’t even looked at the syllabus, but tried it anyway. I was very tired, and even fell asleep at one point during it. I’ll find out my result at some point in the future – the Prometric site just says “Tested” at the moment. As I said, it wasn’t something I was expecting to do, but it was good to have something unexpected during the week. Of course it was good to catch up with old friends and make new ones. 
I feel like every time I’m in the US I see things develop a bit more, with more and more people knowing who I am, who my staff are, and recognising the LobsterPot brand. I missed being a presenter, but I definitely enjoyed seeing many friends on the list of presenters. I won’t try to list them, because there are so many these days that people might feel sad if I don’t mention them. For those that I managed to see, I was pleased to see that the majority of them have lifted their presentation skills since I last saw them, and I happily told them as much. One person who I will mention was Paul White, who travelled from New Zealand to his first PASS Summit. He gave two sessions (a regular session and a half-day), packed large rooms of people, and had everyone buzzing with enthusiasm. I spoke to him after the event, and he told me that his expectations were blown away. Paul isn’t normally a fan of crowds, and the thought of 4000 people would have been scary. But he told me he had no idea that people would welcome him so well, be so friendly and so down to earth. He’s seen the significance of the SQL Server community, and says he’ll be back. It’ll be good to see him there. Will you be there too?

    Read the article

  • SQL 2014 does data the way developers want

    - by Rob Farley
    A post I’ve been meaning to write for a while, good that it fits with this month’s T-SQL Tuesday, hosted by Joey D’Antoni (@jdanton). Ever since I got into databases, I’ve been a fan. I studied Pure Maths at university (as well as Computer Science), and am very comfortable with Set Theory, which undergirds relational database concepts. But I’ve also spent a long time as a developer, and appreciate that databases don’t exactly fit within the stuff I learned in my first year of uni, particularly the “Algorithms and Data Structures” subject, in which we studied concepts like linked lists. Writing in languages like C, we used pointers to quickly move around data, without a database in sight. Of course, if we had a power failure all this data was lost, as it was only persisted in RAM. Perhaps it’s why I’m a fan of database internals, of indexes, latches, execution plans, and so on – the developer in me wants to be reassured that we’re getting to the data as efficiently as possible. Back when SQL Server 2005 was approaching, one of the big stories was around CLR. Many were saying that T-SQL stored procedures would be a thing of the past because we now had CLR, and that it was obviously going to be much faster than using the abstracted T-SQL. Around the same time, we were seeing technologies like Linq-to-SQL produce poor T-SQL equivalents, and developers had had a gutful. They wanted to move away from T-SQL, having lost trust in it. I was never one of those developers, because I’d looked under the covers and knew that despite being abstracted, T-SQL was still a good way of getting to data. It worked for me, appealing to both my Set Theory side and my Developer side. CLR hasn’t exactly become the default option for stored procedures, although there are plenty of situations where it can be useful for getting faster performance. SQL Server 2014 is different though, through Hekaton – its In-Memory OLTP environment. When you create a table using Hekaton (that is, a memory-optimized one), the table you create is the kind of thing you’d’ve made as a developer. It creates code in C leveraging structs and pointers and arrays, which it compiles into fast code. When you insert data into it, it creates a new instance of a struct in memory, and adds it to an array. When the insert is committed, a small write is made to the transaction log to make sure it’s durable, but none of the locking and latching behaviour that typifies transactional systems is needed. Indexes are done using hashes and using Bw-trees (which avoid locking through the use of pointers) and by handling each update as a delete-and-insert. This is data the way that developers do it when they’re coding for performance – the way I was taught at university before I learned about databases. Being done in C, it compiles to very quick code, and although these tables don’t support every feature that regular SQL tables do, this is still an excellent direction that has been taken. @rob_farley

    Read the article

  • Is it considered poor programming to do this with xna components?

    - by Rob
    I created my own Menu System that is event driven. In order to have a loading screen and multithreaded loading to work, I devised this sort of implementation: //Let's check if the game is done loading. if (_game != null) { _gameLoaded = _game.DoneLoading; } //This means the game is loading still, //therefore the loading screen should be active. if (!_gameLoaded && _gameActive) { _gameScreenList[2].UpdateMenu(); } //The loading screen was selected. if (_gameScreenList[2].CurrentState == GameScreen.State.Shown && !_gameActive) { Components.Add(_game = new ParadoxGame(this)); _game.Initialize(); //Initializes the Game so that the loading can begin. _gameActive = true; } In the XNA Game Component that contains the actual game, in the LoadContent method I simply created a new Thread that calls another method ThreadLoad that has all the actual loading. I also have a boolean variable called DoneLoading in the XNA Game Component that is set to true at the end of the ThreadLoad. I am wondering if this is a poor implementation.
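
    The underlying pattern (start loading on a worker thread, poll a done flag from the update loop, switch screens when it flips) is language-agnostic; a tiny illustration of the same idea outside XNA, with the sleep standing in for real content loading (in C# the flag would also want to be made thread-safe, e.g. declared volatile):

        import threading, time

        class Game:
            def __init__(self):
                self.done_loading = False
                threading.Thread(target=self._load, daemon=True).start()  # background load

            def _load(self):
                time.sleep(2)             # stand-in for loading textures, audio, etc.
                self.done_loading = True  # flag polled by the update loop

            def update(self):
                print("loading screen" if not self.done_loading else "gameplay")

        game = Game()
        for _ in range(5):
            game.update()
            time.sleep(0.6)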

    Read the article

  • Using Google Webmaster & Analytics, what data to look at to improve website performance?

    - by Rob
    Using data from Google Analytics and Webmaster Tools, what data should I be looking at to improve my website's performance? I want to improve the SEO, usability and just general performance of my website. EDIT: It's a portfolio website that we've done the initial SEO for; we've also optimised all images etc. and made the site as fast as possible. What kind of things should I be looking out for in the analytics and webmaster data to improve performance for both the SEO and each individual page?

    Read the article
