Search Results

Search found 8715 results on 349 pages for 'bad sectors'.


  • What shall I include in a 10 week web technologies course?

    - by Iain
    In September I will be teaching a university module on web technologies. This session will be available to 1st year (freshman) students who don't necessarily have any programming knowledge or know how the web works. In the 2nd semester I will be teaching Flash, which is my specialism, so I know exactly what I am going to teach, but in the 1st semester I will be teaching them web standards technologies - HTML, CSS, JS, jQuery, PHP and MySQL. Where I need advice is how to proportion the emphasis for each part, and which parts of each technology to cover. Another real issue I'm struggling with is how much of the bad old ways should I teach them? Do they need to know about bold as well as strong, etc. UPDATE: based, on your feedback I will only be teaching the latest version of everything - CSS3, HTML5 etc. I'm not sure exactly how long the semester will be but I'm guessing about 10-12 weeks. Each session is a 2 hour lab. Obviously there's only so much I can cover in that time and it will be up to the students to go a research this stuff properly on W3 schools etc. My ideas so far were: Lesson 0 - Course intro and overview of the current tech landscape. What is out there, what will we be learning, what won't we. What is a web server, URL etc. Looking at different example websites and discussing how they work. Lesson 1 - HTML basics (head, body, title, img, table, a, lists, h1, strong etc) Lesson 2 - CSS for styling and layout - fonts, webfonts, float etc Lesson 3 - Intro to programming JS (variables, loops, conditionals, functions) Lesson 4 - more JS programming fundamentals, DOM manipulation Lesson 5 - jQuery - making things fly about and look cool Lesson 6 - XML and Ajax Lesson 7 - PHP basics - syntax, server-side principles Lesson 8 - PHP and MySQL - forms, logins, saving user info Lesson 9 - don't know Lesson 10 - don't know Please let me know if you think this is the right order, what have I missed, how to use any spare sessions etc. Thanks :) UPDATE BASED ON RESPONSES: Thanks for all your responses - some great stuff. To be absolutely clear, this is not a computer science course, it is a practical module on a creative technology course. The emphasis definitely has to be on making cool things work rather than understanding how the backbone of the internet works. That can come later, if the students are interested. At the end of the module I would like the students to be able to produce a web page or pages that does something cool, using some or all of the technologies I cover. Many of these topics are of course far beyond the scope of a 2 hour session, however I do not have the option of reducing the syllabus, I will just have to explain what the technology does and encourage the student to research it in their own time.

    Read the article

  • How do we provide valid time estimates during Sprint Planning without doing "too much" design?

    - by Michael Edenfield
    My team is getting up to speed with Scrum, but most of us are more familiar with non-agile or "pseudo-"agile methodologies. The part that is the biggest hurdle for us is running an efficient Sprint Planning meeting where we break our backlog items into tasks, and estimate hours. (I'm using the terminology from the VS2010 Scrum Template; apologies if I use the wrong word somewhere.) When we try to figure out how long a task is going to take, we often fall into the trap of designing the feature at the code level -- table layout, interfaces, etc -- in order to figure out how long that's going to take. I'm pretty sure this is not the appropriate place to be doing that kind of design. We should be scheduling tasks for these design meetings during the sprint. However, we are having trouble figuring out how else to come up with meaningful estimates for the tasks. Are there any practical habits/techniques/etc. for making a judgement call about how long a feature is going to take, without knowing how you plan to implement it? If our time estimates are going to change significantly once the design has been completed, how can we properly budget our Sprint backlog ahead of time? EDIT: Just to clarify, since some of the comments/answers are very valid but I think addressing the wrong question. We know that what we're doing is not right, and that we should be building time into the sprint for this design. Conceptually all of the developers understand that. We also also bringing in a team member with Scrum experience to keep us on track if we start going off into the weeds. The problem is that, without going through this design process, we are finding it difficult to provide concrete time estimates for anything. We are constantly saying things like "well if we design it this way it might take 8 hours but if we end up having to do this other way instead that will take about 32 but it might not be as bad once we start trying to write it...". I also assume that this process will get better once we have some historical velocity to work from, but many of the technologies and architectural patterns we are using are new to us. But if potentially-wildly-wrong estimates are just a natural part of adapting this process then we will just need to recondition ourselves to accept that :)

    Read the article

  • Lenovo Thinkpad X1 Carbon support

    - by Robottinosino
    I am considering selling my Mac to get money towards a Lenovo Thinkpad X1 because what I really want to do is to be running an Ubuntu system all the time. Is this machine completely supported in Ubuntu, with no tiny little feature missing just because I am "going Linux"? Optional user story section, skip to the question below if you don't have time: I have a friend who bought a "works on Ubuntu" system a year ago and has hated the fact ever since: battery lasts less than if he boots in Windows (which he despises) and he ascribes that to "no good OS/harware integration and support for advanced chipset power management features", odd behaviour on suspend/resume/hibernate (says: "when it works 90% of the time and the other 10% it makes you lose your work is as good as broken - 90% is the same as 0% he says), some occasional graphics card glitches he can perfectly well live with and has almost grown affectionate to, and finally, and that is what would make him undo his choice if he could, bad "input device drivers". He says: trackpoint and trackpad just "feel different", "so much better" on Windows and that was impossible to know from the website brochure. That story makes me very doubtful... but I want to abandon this "walled garden" of prison that is my Mac and go Ubuntu all the way, no doubt about that! My dilemma at this time is just: "I don't want to live with those eternal frustrations for sure"! Here's a directly answerable phrasing of my question: Is the Lenovo Thinkpad X1 supported on Ubuntu? Yes/no, which version? Which hardware features are not supported? Provide a list Optionally: sort the list in descending order of frustration from your experience Optionally: mention if there are acceptable workarounds to the "out-of-the-box" condition described in the earlier points and whether this ameliorates frustration at least to "tolerable" levels Comment: the Ubuntu hardware certification page is so not-for-end-users it's unreal. Whoa. What would make it end-user friendly is: Link to "buy here and you'll be just fine, this is the right configuration for you, it'll work as long as you press BUY on that page and don't browse further" Remove mentions of may and might not work. Just tell it straight: press buy here and you will get a working system with the exception of A, B, C (so that I can decide whether the philosophical "freedom pleasure" I get from escaping an Apple world is enough to off-balance the loss, for instance, of Bluetooth capabilities (something that I of course use on my Mac) but "could" lose to use free (as in freedom) software The certification page fails to dispel doubts in me as an end-user. I don't feel "eased into Ubuntu", I feel "partially informed".

    Read the article

  • How to Remove Extensions From, and Force the Trailing Slash at the End of URLs?

    - by Kronbernkzion
    Example of current file structure: example.com/foo.php example.com/bar.html example.com/directory/ example.com/directory/foo.php example.com/directory/bar.html example.com/cgi-bin/directory/foo.cgi I would like to remove HTML, PHP and CGI extensions from, and then force the trailing slash at the end of URLs. So, it could look like this: example.com/foo/ example.com/bar/ example.com/directory/ example.com/directory/foo/ example.com/directory/bar/ example.com/cgi-bin/directory/foo/ I am very frustrated because I've searched for 17 hours straight for solution and visited more than a few hundred pages on various blogs and forums. I'm not joking. So I think I've done my research. Here is the code that sits in my .htaccess file right now: RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME}\.html -f RewriteRule ^(([^/]+/)*[^./]+)/$ $1.html RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_URI} !(\.[a-zA-Z0-9]|/)$ RewriteRule (.*)$ /$1/ [R=301,L] As you can see, this code only removes .html (and I'm not very happy with it because I think it could be done a lot simpler). I can remove the extension from PHP files when I rename them to .html through .htaccess, but that's not what I want. I want to remove it straight. This is the first thing I don't know how to do. The second thing is actually very annoying. My .htaccess file with code above, adds .html/ to every string entered after example.com/directory/foo/. So if I enter example.com/directory/foo/bar (obviously /bar doesn't exist since foo is a file), instead of just displaying message that page is not found, it converts it to example.com/directory/foo/bar.html/, then searches for a file for a few seconds and then displays the not found message. This, of course, is bad behavior. So, once again, I need the code in .htaccess to do the following things: Remove .html extension Remove .php extension Remove .cgi extension Force the trailing slash at the end of URLs Requests should behave correctly (no adding trailing slashes or extensions to strings if file or directory doesn't exist on server) Code should be as simple as possible I would very much appreciate any help. And to first person that gives me the solution, I'll send two $50 iTunes Store gift cards for US store. If this offends anyone, I am truly sorry and I apologize. Thanks in advance.
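
    A rough, untested sketch of the kind of .htaccess ruleset being asked for (it assumes the target files live under the document root; adjust paths as needed). First force the trailing slash, then internally map the clean URL back onto whichever real file exists:

        RewriteEngine On

        # Redirect /foo to /foo/ only when no real file or directory is requested
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !/$
        RewriteRule ^(.+)$ /$1/ [R=301,L]

        # Internally rewrite /foo/ to foo.php, foo.html or foo.cgi, whichever exists
        RewriteCond %{DOCUMENT_ROOT}/$1.php -f
        RewriteRule ^(.+?)/$ $1.php [L]
        RewriteCond %{DOCUMENT_ROOT}/$1.html -f
        RewriteRule ^(.+?)/$ $1.html [L]
        RewriteCond %{DOCUMENT_ROOT}/$1.cgi -f
        RewriteRule ^(.+?)/$ $1.cgi [L]

    Because each internal rewrite is guarded by a file-existence check, a request such as example.com/directory/foo/bar should fall through to an ordinary 404 instead of being rewritten to bar.html/.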

    Read the article

  • Why do "Joke" programming languages exist? [closed]

    - by ThePlan
    First of all please be aware this post contains some abusive language but I hope it will not bother anyone. I apologize for the bad language but that's what the name is. As I've been doing documentation on existing programming languages attempting to make a complete list of them I stumbled across terrible programming languages, which were clearly not made for actual use and implementation due to their insane difficulty. Languages such as Brainfu*k and LOLCODE or Whitespace are fool languages because they have no real use. For example, a "Hello world" program written in BrainFu*k. Taken from Wikipedia: The following program prints "Hello World!" and a newline to the screen: +++++ +++++ initialize counter (cell #0) to 10 [ use loop to set the next four cells to 70/100/30/10 > +++++ ++ add 7 to cell #1 > +++++ +++++ add 10 to cell #2 > +++ add 3 to cell #3 > + add 1 to cell #4 <<<< - decrement counter (cell #0) ] > ++ . print 'H' > + . print 'e' +++++ ++ . print 'l' . print 'l' +++ . print 'o' > ++ . print ' ' << +++++ +++++ +++++ . print 'W' > . print 'o' +++ . print 'r' ----- - . print 'l' ----- --- . print 'd' > + . print '!' > . print '\n' or another example taken from LOLCODE language: HAI CAN HAS STDIO? PLZ OPEN FILE "LOLCATS.TXT"? AWSUM THX VISIBLE FILE O NOES INVISIBLE "ERROR!" KTHXBYE These languages are very difficult to learn/read/work with. My question is - Why do they exist? What is the purpose of them? Also, is there an official "name" for these type of languages?

    Read the article

  • Support ARMv7 instruction set in Windows Embedded Compact applications

    - by Valter Minute
    One of the most interesting new features of Windows Embedded Compact 7 is support for the ARMv5, ARMv6 and ARMv7 instruction sets instead of the ARMv4 “generic” support provided by the previous releases. This means that code built for Windows Embedded Compact 7 can leverage features (like the FPU unit for ARMv6 and v7) and instructions of the recent ARM cores and improve performance. Those improvements are noticeable in graphics, floating-point calculation and data processing. The ARMv7 instruction set is supported by the latest Cortex-A8, A9 and A15 processor families. Those processors are currently used in tablets, smartphones and in-car navigation systems, and provide a great amount of processing power for a low amount of electric power, making them very interesting for portable devices but also for any kind of device that requires a rich user interface, processing power and connectivity while keeping power consumption low. The bad news is that the compiler provided with Visual Studio 2008 does not provide support for ARMv7, building native applications using just the ARMv4 instruction set. Porting a Visual Studio “Smart Device” native C/C++ project to Platform Builder is not easy and you’ll lack many of the features that the VS2008 application development environment provides. You’ll also need access to the BSP and OSDesign configuration for your device to be able to build and debug your application inside Platform Builder, and this may prevent independent software vendors from using the new compiler to improve their applications’ performance. Adeneo Embedded now provides a whitepaper and a Visual Studio plug-in that allow usage of the new ARMv7-enabled compiler to build applications inside Visual Studio 2008. I worked on the whitepaper and the tools, with the help of my colleagues, and now the results can be downloaded from Adeneo Embedded’s website: http://www.adeneo-embedded.com/OS-Technologies/Windows-Embedded (click on the “WEC7 ARMv7 Whitepaper” tab to access the download links; free registration required). A very basic benchmark showed a very good performance improvement in integer and floating-point operations. Obviously your mileage may vary and we can’t promise the same amount of improvement on any application, but with a small effort on your side (even smaller if you use the plug-in) you can try it on your own application. ARMv7 support is provided using Platform Builder’s compiler, and the VS2008 application debugger is not able to debug ARMv7 code, so you may need to put in place some workarounds like keeping ARMv4 code for debugging etc.

    Read the article

  • From Co-op to full-time: help with salary negotiation [closed]

    - by Peter
    Hey, I'm a co-op student who worked at a particular medium-size printing company for 8 months. I had a good time: it was lax, sometimes insufficiently challenging, but nonetheless I learned a whole lot. I stuck with them for another 5 months (including this month) at the same rate I was paid then, doing testing work, tool development, taking care of emergencies when the lead developers were away, and other smaller projects, and now bigger projects and problem handling (bad printer output etc.). I know their website inside out (ecommerce), and I know their printing software inside out, and have made many changes to them both without a hitch. I have also done a lot of refactoring of the existing code base and, as far as I'm concerned, I believe I am the only one to do that sort of restructuring even though there is constant talk about it. I guess the unit testing paid off and lets me see the value in modularity, even if only a tad more. Nevertheless I have faith in my skill, and the restructuring I did turned out better than I had imagined. Now the problem is that I finish school next month and so I asked for a full-time spot the month after. They have been expanding and hired a new guy a few months after my co-op spot started, and just now they hired a new guy to deal with the CRM application. The lead developer who wrote all of the software left 5 months ago, so it was up to all of us to learn what he had done over 4 years (including db, networking). So now I'm afraid that if I assert myself for a salary similar to the other guys', which I believe I am certainly on par with, I would be seen as ungrateful. It's hard to flip a switch and say "hey, double my pay", even though I'm working with their bread and butter (printers), writing new features and refactoring the whole application for extensibility. I love it regardless of pay. I also feel maybe I'm replaceable, although nobody knows the website better than myself and the lead web dev (not by a long shot), and nobody knows the printer software/drivers better than myself. I just thought they would have brought up a raise earlier on, and now it feels like they don't value my work. I'm also tired of worrying about it. I think my question is: well, what do I do next?

    Read the article

  • Getting a virus is *very* annoying

    - by bconlon
    I spent most of yesterday removing an annoying virus from my PC. I feel slightly foolish for getting one in the first place, but after so many years I guess I was always going to eventually succumb. I was also a little surprised at the failure of various tools at removing it. The virus would redirect the browser to websites including ‘licosearch’, ‘hugosearch’ and ‘facebook’, and the disk would be thrashing away infecting dlls in some way. I had the full up to date version of McAfee installed. This identified that there was an issue in some dlls on the system and was able to ‘fix’ them. But they kept getting re-infected. So I installed Microsoft Security Essentials and this too was able to identify and ‘fix’ the infected dlls. The system scans take forever and I really expected better results. I also tried Malwarebytes, Hitman Pro, AVG and Sophos to no avail. Eventually I thought I’d investigate myself. It turned out that on reboot, the virus would start 3 instances of Firefox.exe which I’m guessing would do bad things including infecting as many dlls on the system as possible. I removed Firefox and the virus cleverly then launched 3 instances of Chrome! So I uninstalled Chrome and yes, it then started to launch 3 instances of iexplore.exe. If I’m honest, by this stage I was just seeing if it would be able to use any of the browsers! As it was starting these on reboot, I looked in my User Startup folder and there was a <randomly named>.exe and several log files. I deleted these and rebooted. When I looked they had been recreated. So I then looked in the registry Run and RunOnce entries: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run. Sure enough there were entries to run a file in C:\Program Files\<random name folder>\<random name file>.exe. I deleted this and rebooted and it was fixed. I also looked in the event log and found a warning that Winlogon had failed to start the file C:\Program Files\<random name folder>\<random name file>.exe So I also checked HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon and this entry had also been changed. Finally I ran a full system scan to clean up any infected dlls. I hope it’s gone for good!  #
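
    For anyone doing the same hunt, the startup locations mentioned above can be dumped from an elevated command prompt; the value names below are just the usual suspects for this kind of hijack, not necessarily the exact ones this infection used:

        reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run"
        reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce"
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Shell
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Userinit

    Shell should normally read explorer.exe and Userinit should point at C:\Windows\system32\userinit.exe, so anything extra appended to those values is worth a closer look.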

    Read the article

  • C Minishell Command Expansion Printing Gibberish

    - by Optimus_Pwn
    I'm writing a unix minishell in C, and am at the point where I'm adding command expansion. What I mean by this is that I can nest commands in other commands, for example: $> echo hello $(echo world! ... $(echo and stuff)) hello world! ... and stuff I think I have it working mostly, however it isn't marking the end of the expanded string correctly, for example if I do: $> echo a $(echo b $(echo c)) a b c $> echo d $(echo e) d e c See it prints the c, even though I didn't ask it to. Here is my code: msh.c - http://pastebin.com/sd6DZYwB expand.c - http://pastebin.com/uLqvFGPw I have a more code, but there's a lot of it, and these are the parts that I'm having trouble with at the moment. I'll try to tell you the basic way I'm doing this. Main is in msh.c, here it gets a line of input from either the commandline or a shellfile, and then calls processline (char *line, int outFD, int waitFlag), where line is the line we just got, outFD is the file descriptor of the output file, and waitFlag tells us whether or not we should wait if we fork. When we call this from main we do it like this: processline (buffer, 1, 1); In processline, we allocate a new line: char expanded_line[EXPANDEDLEN]; We then call expand, in expand.c: expand(line, expanded_line, EXPANDEDLEN); In expand, we copy the characters literally from line to expanded_line until we find a $(, which then calls: static int expCmdOutput(char *orig, char *new, int *oldl_ind, int *newl_ind) orig is line, and new is expanded line. oldl_ind and newl_ind are the current positions in the line and expanded line, respectively. Then we pipe, and recursively call processline, passing it the nested command(for example, if we had "echo a $(echo b)", we would pass processline "echo b"). This is where I get confused, each time expand is called, is it allocating a new chunk of memory EXPANDEDLEN long? If so, this is bad because I'll run out of stack room really quickly(in the case of a hugely nested commandline input). In expand I insert a null character at the end of the expanded string, so why is it printing past it? If you guys need any more code, or explanations, just ask. Secondly, I put the code in pastebin because there's a ton of it, and in my experience people don't like it when I fill up several pages with code. Thanks.
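
    Hard to say without running it, but the leftover "c" symptom usually means the expansion buffer is reused without being re-terminated at the length actually read this time. A sketch of the defensive pattern in C (the variable names are guesses based on the expCmdOutput signature above, not the real code):

        /* after the child has written the nested command's output into the pipe */
        ssize_t n = read(pipefd[0], &new[*newl_ind], EXPANDEDLEN - *newl_ind - 1);
        if (n < 0)
            n = 0;                 /* treat a failed read as empty output */
        *newl_ind += n;            /* a real version would loop until read() returns 0 */
        new[*newl_ind] = '\0';     /* terminate at THIS call's length so bytes left
                                      over from a previous expansion are never printed */

    And yes, char expanded_line[EXPANDEDLEN]; allocates a fresh EXPANDEDLEN-byte array on the stack for every recursive processline call, so deeply nested commands will eventually exhaust the stack; malloc/free (or a checked recursion-depth limit) avoids that.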

    Read the article

  • MOSSLover Lives On&hellip;

    - by MOSSLover
    A while back, maybe 6 months, I got some bad news about 2010.  Microsoft was removing Office from the MOSS equivalent of 2010, so basically my alias would be obsolete the second 2010 caught on in the community.  I thought about it for some time.  I had some discussions with friends in the community.  I even noticed that the MOSSMan changed his Twitter ID.  I started my blog around a WSS 3.0 project when I worked for LRS in their St. Louis office in February/March 2007.  So I think it’s fitting to keep the name, because my community involvement centers around 2007.  My first ever speaking ordeal was at the Kansas City Office Geeks meeting in November of 2007 on Disaster Recovery, where about three people attended.  The first user group meeting I ever attended was around the month of June 2007 at the KC .Net User Group, about two weeks after my braces were installed.  It’s definitely fitting to say that 2007 paved the way for everything that happened in the past 2 to 2 1/2 years.  If anyone asks what MOSSLover means, I added a description on Twitter and I also added my name.  I added my name for other reasons, because I’m sick of people thinking I am the guy in the photo.  Also, I’d like people to recognize me for who I am.  Everyone should expect less of the hat in the upcoming year and more of my hair.  I’ve taken a vow to wear the hat less and less this year.  I am sick of buying hats, plus I want to move forward to gain more self-confidence.  The hat does not really help.  I will still wear a t-shirt and jeans in most of my presentations.  That is who I am and it will not change any time soon.  If you expect to see me in a skirt, good luck with that, as it won’t be happening unless I am forced at gunpoint.  I hope you guys have a good weekend.  Later all… Technorati Tags: MOSSLover, Cardinal's Hat, Becky Isserman

    Read the article

  • Link instead of Attaching

    - by Daniel Moth
    With email storage not being an issue in many companies (I think I currently have 25GB of storage on my email account, I don’t even think about storage), this encourages bad behaviors such as liberally attaching office documents to emails instead of sharing a link to the document in SharePoint or SkyDrive or some file share etc. Attaching a file admittedly has its usage scenarios too, but it should not be the default. I thought I'd list the reasons why sharing a link can be better than attaching files directly. In no particular order: Better Review. It allows multiple recipients to review the file and their comments are aggregated into a single document. The alternative is everyone having to detach the document, add their comments, then send back to you, and then you have to collate. Wirth the alternative, you also potentially miss out on recipients reading comments from other recipients. Always up to date. The attachment becomes a fork instead of an always up to date document. For example, you send the email on Thursday, I only open it on Tuesday: between those days you could have made updates that now I am missing because you decided to share a link instead of an attachment. Better bookmarking. When I need to find that document you shared, you are forcing me to search through my email (I may not even be running outlook), instead of opening the link which I have bookmarked in my browser or my collection of links in my OneNote or from the recent/pinned links of the office app on my task bar, etc. Can control access. If someone accidentally or naively forwards your link to someone outside your group/org who you’d prefer not to have access to it, the location of the document can be protected with specific access control. Can add more recipients. If someone adds people to the email thread in outlook, your attachment doesn't get re-attached - instead, the person added is left without the attachment unless someone remembers to re-attach it. If it was a link, they are immediately caught up without further actions. Enable Discovery. If you put it on a share, I may be able to discover other cool stuff that lives alongside that document. Save on storage. So this doesn't apply to me given my opening statement, but if in your company you do have such limitations, attaching files eats up storage on all recipients accounts and will also get "lost" when those people archive email (and lose completely at some point if they follow the company retention policy). Like I said, attachments do have their place, but they should be an explicit choice for explicit reasons rather than the default. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Running a program on boot without login, using the screen

    - by configurator
    Preface: I have a server running on an old laptop. The screen is always on with a login prompt, but because its keyboard is in pretty bad shape, I use it exclusively via ssh. The screen is in a good position, though; I want to use it to display a clock and some stats about what my server is doing. I have scripts to display all those things, but I want to always show them on the monitor screen. My question is, how do I get my script (called HUD) to run on /dev/tty1, instead of the login prompt. Hopefully, it should be possible to accept keyboard input as well as display its output, so that it can use the keyboard to show more info where needed in a future version. I'd also like tty2 etc. to remain active as login screens, in face I actually do need to login locally. For a start, I tried creating a script that I can run from ssh to start the HUD. It goes something like this: ( flock -n 9 watch --interval 0.2 --precise --color --notitle --exec /path/to/script & disown ) 9> /var/lock/hud > /dev/tty1 2> /dev/tty1 < /dev/tty1 (I had to use & disown instead of nohup because nohup recognized the tty and redirects output to nohup.out instead.) This sort-of works. However, it has a few issues: It doesn't steal the terminal's keyboard input, so you can't ctrl+c to get out of it (nor change the script to actually use the keyboard input), and if you press enter it show it and scrolls the display, never refreshing it correctly afterwards. Oddly, if I disconnect the ssh session which created it, it stops working and shows a message: exec: No such file or directory. If I reconnect to ssh, it resumes functioning properly. It feels hackish. Is there a better way to do this? How?
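
    One approach that sidesteps the redirection problems is to give the script a real controlling terminal on tty1 instead of borrowing the login prompt's. A minimal sketch using openvt (from the kbd package; run as root, and the script path is obviously a placeholder):

        # stop getty from respawning on tty1 first (Upstart-era Ubuntu):
        #   sudo stop tty1
        sudo openvt -c 1 -f -- \
            flock -n /var/lock/hud \
            watch --interval 0.2 --precise --color --notitle --exec /path/to/script

    Because the command then owns tty1 as its controlling terminal, it should receive keyboard input (including Ctrl+C) on that console, and it should keep running after the ssh session that launched it disconnects.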

    Read the article

  • PASS Summit for SQL Starters

    - by Davide Mauri
    I’ve received a bunch of emails from PASS Summit “First Timers” who are also somehow new to SQL Server (by “somehow” I mean people with less than 6 months' experience but with some basic knowledge of the SQL Server engine) or are catching up from SQL Server 2000. The common question regards the sessions one should not miss in order to have a broad view of the entire SQL Server platform and some insight into specific areas of SQL Server. Given that I’m on (semi-)vacation and that I have more free time (not true, I have to prepare slides & demos for several conferences, PASS Summit - Building the Agile Data Warehouse with SQL Server 2012 - and PASS 24H - Agile Data Warehousing with SQL Server 2012 - among them… but let’s pretend it’s true), I’ve decided to make a post to answer this common question. Of course this is my personal point of view, and given that the number and quality of sessions that will be delivered at PASS Summit is so high that it is very difficult to make a choice, feel free to jump into the discussion and leave your feedback or – even better – answer with another post. I’m sure it will be very helpful to all the SQL Server beginners out there. I’ve limited myself to choosing 6 sessions at maximum for each track. Why 6? Because it’s the maximum number of sessions you can follow in one day, and given that all the sessions will be on the Summit DVD, they are the answer to the following question: “If I have one day to spend in training, which sessions should I watch?”. Of course a Summit is not like a course, so a lot of very basic concepts of well-established technologies won’t be found here. Analysis Services, Integration Services and MDX are not part of the Summit this time (at least the basic parts of them). Enough with that, let’s start with the session list ideal for a good overview of the whole SQL Server platform: Geospatial Data Types in SQL Server 2012; Inside Unstructured Data: SQL Server 2012 FileTable and Semantic Search; XQuery and XML in SQL Server: Common Problems and Best Practice Solutions; Microsoft's Big Play for Big Data; Dashboards: When to Choose Which MSBI Tool; Microsoft BI End-User Tools 360°. For what concerns Database Development, I recommend the following sessions: Understanding Transaction Isolation Levels; What to Look for in Execution Plans; Improve Query Performance by Fixing Bad Parameter Sniffing; A Window into Your Data: Using SQL Window Functions; Practical Uses and Optimization of New T-SQL Features in SQL Server 2012; Taking MERGE Beyond the Basics. For Business Intelligence Information Delivery: Analyzing SSAS Data with Excel; Building Compelling Power View Reports; Managed Self-Service BI; PowerPivot 101; SharePoint for Business Intelligence; The Best Microsoft BI Tools You've Never Heard Of. And for Business Intelligence Architecture & Development: BI Power Hour; Building a Tabular Model Database; Enterprise Information Management: Bringing Together SSIS, DQS, and MDS; SSIS Design Patterns; Storing Columnstore Indexes; Hadoop and Its Ecosystem Components in Action. Beside the listed sessions, First Timers should also take a look at the page PASS set up for them: http://www.sqlpass.org/summit/2012/Connect/FirstTimers.aspx See you at PASS Summit!

    Read the article

  • Software Architecture Analysis Method (SAAM)

    Software Architecture Analysis Method (SAAM) is a methodology used to determine how specific application quality attributes were achieved and how possible changes in the future will affect quality attributes, based on hypothetical case studies. Common quality attributes that can be evaluated with this methodology include modifiability, robustness, portability, and extensibility. Quality Attribute: Application Modifiability. The Modifiability quality attribute refers to how easy changing the system in the future will be. This to me is a very open-ended attribute, because a business could decide to transform a Point of Sale (POS) system into a Lead Tracking system overnight. (Yes, this did actually happen to me.) In order for SAAM to be properly applied to checking this attribute, specific hypothetical case studies need to be created and reviewed for the modifiability attribute, due to the fact that various scenarios would return various results based on the amount of change. In the case of the POS, changing out a payment gateway or adding an additional payment method would have scored very high in comparison to changing the system over to a lead management system. I personally would evaluate this quality attribute based on the S.O.L.I.D principles of software design. I have found from my experience that the use of S.O.L.I.D in software design allows for the adoption of changes within a system. Quality Attribute: Application Robustness. The Robustness quality attribute refers to how an application handles the unexpected. The unexpected can be defined as, but is not limited to, anything not anticipated in the originating design of the system. For example: bad data, limited or no network connectivity, invalid permissions, or any unexpected application exceptions. I would personally evaluate this quality attribute based on how the system handles the exceptions. Robustness Considerations: Did the system stop or did it handle the unexpected error? Did the system log the unexpected error for future debugging? What message did the user receive about the error? Quality Attribute: Application Portability. The Portability quality attribute refers to the ease of porting an application to run on a new operating system or device. For example, it is much easier to alter an ASP.net website to be accessible by a PC, Mac, iPhone, Android phone, mini PC, or tablet, in comparison to a desktop application written in VB.net, because a lot more work would be involved to get the desktop app to the point where it would be viable to port the application over to the various environments and devices. I would personally evaluate this quality attribute based on each new environment that the hypothetical case study identifies. I would pay particular attention to the following items. Portability Considerations: hardware dependencies; operating system dependencies; data source dependencies; network dependencies and availability. Quality Attribute: Application Extensibility. The Extensibility quality attribute refers to the ease of adding new features to an existing application without impacting existing functionality. I would personally evaluate this quality attribute based on the following. Extensibility Considerations: hard-coded variables versus configurable variables; application documentation (external documents and codebase documentation); the use of S.O.L.I.D design principles.

    Read the article

  • How to make sprint planning fun

    - by Jacob Spire
    Not only are our sprint planning meetings not fun, they're downright dreadful. The meetings are tedious, and boring, and take forever (a day, but it feels like a lot longer). The developers complain about it, and dread upcoming plannings. Our routine is pretty standard (user story inserted into sprint backlog by priority story is taken apart to tasks tasks are estimated in hours repeat), and I can't figure out what we're doing wrong. How can we make the meetings more enjoyable? ... Some more details, in response to requests for more information: Why are the backlog items not inserted and prioritized before sprint kickoff? User stories are indeed prioritized; we have no idea how long they'll take until we break them down into tasks! From the (excellent) answers here, I see that maybe we shouldn't estimate tasks at all, only the user stories. The reason we estimate tasks (and not stories) is because we've been getting story-estimates terribly wrong -- but I guess that's the subject for an altogether different question. Why are developers complaining? Meetings are long. Meetings are monotonous. Story after story, task after task, struggling (yes, struggling) to estimate how long it will take and what it involves. Estimating tasks makes user-story-estimation seem pointless. The longer the meeting, the less focus in the room. The less focused colleagues are, the longer the meeting takes. A recursive hate-spiral develops. We've considered splitting the meeting into two days in order to keep people focused, but the developers wouldn't hear of it. One day of planning is bad enough; now we'll have two?! Part of our problem is that we go into very small detail (in order to get more accurate estimations). But when we estimate roughly, we go way off the mark! To sum up the question: What are we doing wrong? What additional ways are there to make the meeting generally more enjoyable?

    Read the article

  • Designing Snake AI

    - by Ronald
    I'm new to this gamedev stackechange but have used the math and cs sites before. So, I'm in a competition to create AI for a snake that will compete with 3 other snakes in 5 minute rounds where the rules are much like the traditional Nokia snake game except that there are 4 snakes, the board is 50x50 and there are a number of small obstacles on the field. Like the Nokia game, your snake grows when you get to the fruit and if you crash into yourself, another snake or the wall you die. The game runs with a 50ms delay between moves and the server sends the new game state every 50ms which the code must analyze and what not and output the next move. The winner is the snake who had the longest length at any point in the game. Tie breakers are decided by kills. So far what I have done is implemented an A* graph search from each snake to determine if my snake is the closest to the apple and if it is, it goes for the apple. Otherwise, I made a neat little algorithm to determine the emptiest area of the board, which my snake goes for, to anticipate the next apple. Other than this I have some small survivability checks to ensure my snake isn't walking into a trap that it can't get out and if it does get stuck, I have something to give it a better chance of getting out. ... Anyway, I've tested my snake on a test server and it does quite well. Generally, my strategy of only going for the apple when its a sure thing and finding space when its not makes it grow faster than any other snakes (some snakes do a similar thing but often just go to the middle or a corner) sometimes it wins these trial games but is more often than not beaten by the same snake who seems to have the edge on survivability(my snake grows quicker but then dies somehow and this other snake just plods slowly along and wins on consistency. So I was wondering about any ideas anyone has to try and improve my snake. Or maybe ideas at a new approach to take. My functions and classes are good so changes that might seem drastic shouldn't be too bad. I encourage all ideas. Any thoughts ??
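
    One concrete survivability idea, sketched in Python since no language was mentioned: before committing to a move, flood-fill from the would-be head position and refuse moves whose reachable free area is smaller than your current length, since those pockets are exactly where the snake tends to die.

        from collections import deque

        def reachable_area(free, start, size=50):
            """Count free cells reachable from `start`.

            `free` is a hypothetical size x size grid of booleans where True
            means the cell holds no wall, obstacle or snake body.
            """
            seen = {start}
            queue = deque([start])
            count = 0
            while queue:
                x, y = queue.popleft()
                count += 1
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nx < size and 0 <= ny < size and free[nx][ny] and (nx, ny) not in seen:
                        seen.add((nx, ny))
                        queue.append((nx, ny))
            return count

    At 50x50 this is cheap enough to run for every candidate move each tick, and it tends to counter the "grows fast but walks into a dead end" failure mode described above.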

    Read the article

  • kubuntu: wireless manager can't find any networks

    - by timuçin
    I just installed Kubuntu 12.10 and, as I knew my wireless driver would not be recognized, I installed it manually through here. I suppose I shall mention that I also had to do this. The driver worked just fine until I restarted my system. Since then I can't even start over. My wireless card is a Realtek 8723. The only lead I have is this piece of log: 11/14/12 07:35:21 AM timucin-W150ER NetworkManager[989] <info> wpa_supplicant started 11/14/12 07:35:21 AM timucin-W150ER NetworkManager[989] <info> (wlan0) supports 4 scan SSIDs 11/14/12 07:35:21 AM timucin-W150ER NetworkManager[989] <info> (wlan0): supplicant interface state: starting -> ready 11/14/12 07:35:21 AM timucin-W150ER NetworkManager[989] <info> (wlan0): device state change: unavailable -> disconnected (reason 'supplicant-available') [20 30 42] 11/14/12 07:35:21 AM timucin-W150ER NetworkManager[989] <warn> Trying to remove a non-existant call id. 11/14/12 07:35:21 AM timucin-W150ER NetworkManager[989] <info> (wlan0): supplicant interface state: ready -> inactive 11/14/12 07:35:21 AM timucin-W150ER NetworkManager[989] <info> (wlan0) supports 4 scan SSIDs I feel really bad that I have been trying so many things and even while I am trying them I get a problem at every step. I am really frustrated. You can see the time in the logs; I didn't get up early. Just needed to share my feelings. Even the ethernet cable I have to use is short, for God's sake; I have had to sit in this massively uncomfortable posture for hours. edit: I discovered something interesting. When I boot my Windows I see that my wireless manager says "disconnected" now. However, when I start Ubuntu after Windows and modprobe rtl8723e, wireless works just fine. Then again, when I restart Ubuntu I am back where I started. So now I have to start Windows, restart my machine and do the modprobe thing to see wireless networks.
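
    If the card really does come up after a manual modprobe, one thing worth trying (assuming the module name genuinely is rtl8723e) is simply making that load permanent so it happens on every boot, and checking that the manual driver install didn't blacklist it:

        echo rtl8723e | sudo tee -a /etc/modules
        grep -ri rtl8723 /etc/modprobe.d/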

    Read the article

  • e2fsck extremly slow, although enough memory exists

    - by kaefert
    I've got this external USB-Disk: kaefert@blechmobil:~$ lsusb -s 2:3 Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC As can be seen in this dmesg output, there are some problems that prevents that disk from beeing mounted: kaefert@blechmobil:~$ dmesg | grep sdb [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.501649] sdb: sdb1 [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519) [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted! So I went and fired up my favorite partition manager - gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)) e2fsck -f -y -v /dev/sdb1 Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bugreport https://bugzilla.gnome.org/show_bug.cgi?id=467925 ) I started this whole thing on Sunday (2012-11-04_2200) evening, so about 48 hours ago, this is what htop says about it now (2012-11-06-1900): PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1 Now I found a few posts on the internet that discuss e2fsck running slow, for example: http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that its a good idea to see if the disk is just that slow because maybe its damaged, and I think these outputs tell me that this is not the case in my case: kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec kaefert@blechmobil:~$ sudo hdparm /dev/sdb /dev/sdb: multcount = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 364801/255/63, sectors = 5860533160, start = 0 However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this: kaefert@blechmobil:~$ iostat -x Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 14,24 47,81 14,63 0,95 0,00 22,37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99 sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04 Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this: kaefert@blechmobil:~$ sudo strace -p3704 lseek(4, 41026998272, SEEK_SET) = 41026998272 write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096 lseek(4, 48404766720, SEEK_SET) = 48404766720 read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096 lseek(4, 41027002368, SEEK_SET) = 41027002368 write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096 lseek(4, 
48404770816, SEEK_SET) = 48404770816 read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096 lseek(4, 41027006464, SEEK_SET) = 41027006464 write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096 lseek(4, 48404774912, SEEK_SET) = 48404774912 read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096 ^CProcess 3704 detached around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot.. And finally, my question: Will this process ever finish? If those numbers from fseek (48404774912) represent bytes, that would be something like 45 gigabytes, with this beeing a 3 terrabyte disk, which would give me 134 days to go, if the speed stays constant, and he scans the disk like this completly and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it to this disk, so I would prefer to getting this disk up and running again, without formatting it anew. I don't think that the hardware is damaged since the disk is only a few months and since I can't see any I/O errors in the dmesg output. UPDATE: I just looked at the strace output again (2012-11-06_2300), now it looks like this: lseek(4, 1419860611072, SEEK_SET) = 1419860611072 read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096 lseek(4, 43018145792, SEEK_SET) = 43018145792 write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096 lseek(4, 1419860615168, SEEK_SET) = 1419860615168 read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096 lseek(4, 43018149888, SEEK_SET) = 43018149888 write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096 lseek(4, 1419860619264, SEEK_SET) = 1419860619264 read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096 lseek(4, 43018153984, SEEK_SET) = 43018153984 write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096 So this number of the lseeks before the reads, like 1419860619264 are already a lot bigger, standing for 1.29 terabytes if the numbers are bytes, so it doesn't seem to be a linear progress on a big scale, maybe there are only some areas that need work, that have big gaps in between them. (times are in CET)
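
    Not an answer to the 134-day arithmetic, but rather than inferring progress from strace, e2fsck can report it directly; one option is to restart the check with a visible progress bar (which does mean starting the pass over):

        sudo e2fsck -f -y -v -C 0 /dev/sdb1    # -C 0 prints a completion bar on stdout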

    Read the article

  • Website File and Folder Structure

    - by Drummss
    I am having a problem learning how proper website structure should be. And by that I mean how to code the pages and how folder structure should be. Currently I am navigating around my website using GET variables in PHP when you want go to another page, so I am always loading the index.php file. And I would load the page I wanted like so: $page = "error"; if(isset($_GET["page"]) AND file_exists("pages/".$_GET["page"].".php")) { $page = $_GET["page"]; } elseif(!isset($_GET["page"])) { $page = "home"; } And this: <div id="page"> <?php include("pages/".$page.".php"); ?> </div> The reason I am doing this is because it makes making new pages a lot easier as I don't have to link style sheets and javascript in every file like so: <head> <title> Website Name </title> <link href='http://fonts.googleapis.com/css?family=Lato:300,400' rel='stylesheet' type='text/css'> <link rel="stylesheet" href="css/style.css" type="text/css"> <link rel="shortcut icon" href="favicon.png"/> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js" type="text/javascript"></script> <script src="//ajax.googleapis.com/ajax/libs/jqueryui/1.10.4/jquery-ui.min.js" type="text/javascript"></script> </head> There is a lot of problems doing it this way as URLs don't look normal ("?page=apanel/app") if I am trying to access a page from inside a folder inside the pages folder to prevent clutter. Obviously this is not the only way to go about it and I can't find the proper way to do it because I'm sure websites don't link style sheets in every file as that would be very bad if you changed a file name or needed to add another file you would have to change it in every file. If anyone could tell me how it is actually done or point me towards a tutorial that would be great. Thanks for your time :)
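
    For what it's worth, the usual way to keep the single-entry-point convenience while getting clean URLs is a small front controller with a whitelist, plus a rewrite rule that funnels every request through it (for example RewriteCond %{REQUEST_FILENAME} !-f followed by RewriteRule ^ index.php [L]). A rough PHP sketch; the file names are made up:

        <?php
        // index.php - hypothetical front controller
        $routes = array(
            ''      => 'pages/home.php',      // example.com/
            'about' => 'pages/about.php',     // example.com/about
            'panel' => 'pages/apanel/app.php' // example.com/panel
        );

        $path = trim(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH), '/');
        $page = isset($routes[$path]) ? $routes[$path] : 'pages/error.php';

        include 'partials/header.php';        // one place for <head>, CSS and JS
        include $page;
        include 'partials/footer.php';

    The whitelist array replaces the file_exists/$_GET checks and makes it impossible to include anything that wasn't explicitly listed.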

    Read the article

  • How to refactor a method which breaks "The law of Demeter" principle?

    - by dreza
    I often find myself breaking this principle (not intentionally, just through bad design). However, recently I've seen a bit of code where I'm not sure of the best approach. I have a number of classes. For simplicity I've taken out the bulk of the classes' methods etc. public class Paddock { public SoilType Soil { get; private set; } // a whole bunch of other properties around paddock information } public class SoilType { public SoilDrainageType Drainage { get; private set; } // a whole bunch of other properties around soil types } public class SoilDrainageType { // a whole bunch of public properties that expose soil drainage values public double GetProportionOfDrainage(SoilType soil, double blockRatio) { // This method does a number of calculations using public properties // exposed off SoilType as well as the blockRatio value in some conditions } } In the code I have seen, in a number of places, calls like so: paddock.Soil.Drainage.GetProportionOfDrainage(paddock.Soil, paddock.GetBlockRatio()); or, within the block object itself, in places it's Soil.Drainage.GetProportionOfDrainage(this.Soil, this.GetBlockRatio()); Upon reading, this seems to break "The Law of Demeter" in that I'm chaining together these properties to access the method I want. So my thought in order to adjust this was to create public methods on SoilType and Paddock that contain wrappers, i.e. on Paddock it would be public class Paddock { public double GetProportionOfDrainage() { return Soil.GetProportionOfDrainage(this.GetBlockRatio()); } } and on SoilType it would be public class SoilType { public double GetProportionOfDrainage(double blockRatio) { return Drainage.GetProportionOfDrainage(this, blockRatio); } } so now calls where it is used would simply be // used outside of paddock class where we can access instances of Paddock paddock.GetProportionOfDrainage(); or this.GetProportionOfDrainage(); // if used within Paddock class This seemed like a nice alternative. However, now I have a concern over how I would enforce this usage and stop anyone else from writing code such as paddock.Soil.Drainage.GetProportionOfDrainage(paddock.Soil, paddock.GetBlockRatio()); rather than just paddock.GetProportionOfDrainage(); I need the properties to remain public at this stage as they are too ingrained in usage throughout the code block. However, I don't really want a mixture of accessing the method on DrainageType directly, as that seems to defeat the purpose altogether. What would be the appropriate design approach in this situation? I can provide more information as required to better help in answers. Are my thoughts on refactoring this even appropriate, or is it best to leave it as is and use the property chaining to access the method as and when required?

    Read the article

  • Is it appropriate to try to control the order of finalization?

    - by Strilanc
    I'm writing a class which is roughly analogous to a CancellationToken, except it has a third state for "never going to be cancelled". At the moment I'm trying to decide what to do if the 'source' of the token is garbage collected without ever being set. It seems that, intuitively, the source should transition the associated token to the 'never cancelled' state when it is about to be collected. However, this could trigger callbacks who were only kept alive by their linkage from the token. That means what those callbacks reference might now in the process of finalization. Calling them would be bad. In order to "fix" this, I wrote this class: public sealed class GCRoot { private static readonly GCRoot MainRoot = new GCRoot(); private GCRoot _next; private GCRoot _prev; private object _value; private GCRoot() { this._next = this._prev = this; } private GCRoot(GCRoot prev, object value) { this._value = value; this._prev = prev; this._next = prev._next; _prev._next = this; _next._prev = this; } public static GCRoot Root(object value) { return new GCRoot(MainRoot, value); } public void Unroot() { lock (MainRoot) { _next._prev = _prev; _prev._next = _next; this._next = this._prev = this; } } } intending to use it like this: Source() { ... _root = GCRoot.Root(callbacks); } void TransitionToNeverCancelled() { _root.Unlink(); ... } ~Source() { TransitionToNeverCancelled(); } but now I'm troubled. This seems to open the possibility for memory leaks, without actually fixing all cases of sources in limbo. Like, if a source is closed over in one of its own callbacks, then it is rooted by the callback root and so can never be collected. Presumably I should just let my sources be collected without a peep. Or maybe not? Is it ever appropriate to try to control the order of finalization, or is it a giant warning sign?

    Read the article

  • Deploying, but without those pesky test files!

    - by Chris Skardon
    Silverlight testing is great, we all know that (don’t we??), we’re expected to do it as part of the development process, but once we’ve got an awesome application written and we come to deploy it, we don’t want the test files going out with it… You might be like me, have the files in a Web project – let’s face it, that’s how we’re pushed into doing it… So let’s stick with it! Now. I’m deploying via the wonders of the Web Deployment shizzle, but this also applies to the classic ‘installer’ project as well.. Baaaasically, we’re going to use the ‘Debug’ / ‘Release’ configurations to include given files. ?? OK, you know in the top of your visual studio editor, you (usually) have a drop down which predominantly reads ‘Debug’? Those are ‘configurations’. Mostly we don’t bother changing it, primarily due to laziness, but also the fact that we generally don’t see ‘Release’ as actually doing anything other than making it harder to find problems :) Well today my friends we’re going to change that bad boy… The next few steps are just helping you set up a new ‘Debug’ configuration, but you can just switch to the ‘Release’ configuration and skip to the end… First let’s go to the Configuration Manager. There are multiple ways, through the ‘Build’ menu (at the bottom), or via the drop down which currently has ‘Debug’ in it :) Got it? Select ‘New’ from the ‘Active solution configuration’ drop down: Create a new configuration, kind of like the picture below shows (or for those graphically challenged – Name: DebugWithNoTests, and Copy settings from: ‘Debug’, ensuring the ‘Create new project configurations’ checkbox is checked). Press OK. VS will do some shizzle, and in the Configuration manager, you will see pretty much exactly what you did before, only with ‘Debug’ replaced with ‘DebugWithNoTests’. Turn off the build options for the test projects. We won’t need them.. IF you skipped down from the top, this is where you’ll be wanting to stop!!! Close and now we’re one notepad step away from achieving our goals. Yes, I said notepad. You can’t do what we’re going to do in VS. (Pity). Go to the folder where your web project is, and right click on the ‘.csproj’ file. Now open it with notepad. Head on down to the ‘<Content Include’ bits, they’ll look like this: <ItemGroup> <Content Include="ClientBin\Tests.xap" /> ... </ItemGroup> Take this and modify each of the files you don’t want deployed and change to: <Content Include="ClientBin\Tests.xap" Condition="'$(Configuration)' == 'Debug'" /> Once you’ve got that sorted publish your project, once with the Debug configuration selected, and another with any other configuration (‘Release’, ‘DebugWithNoTests’ etc).. No files! Huzzah!

    Read the article

  • What OpenGL version(s) to learn and/or use?

    - by zuko
    So, I'm new to OpenGL... I have general knowledge of game programming but little practical experience. I've been looking into various articles and books and trying to dive into OpenGL, but I've found the various versions and the old vs new ways of doing things confusing.

    I guess my first question is whether anyone knows some figures about the percentage of gamers that can run each version of OpenGL. What's the market share like? 2.x, 3.x, 4.x... I looked into the requirements for Half-Life 2, since I know Valve updated it with OpenGL to run on Mac and I know they usually try to hit a very wide user base, and they say a minimum of a GeForce 8 series. I looked at the 8800 GT on Nvidia's website and it listed support for OpenGL 2.1, which, maybe I'm wrong, sounds ancient to me since there's already 4.x. Then I looked up a driver for the 8800 GT and it says it supports 4.2! A bit of a discrepancy there, lol. I've also read things like XP only supports up to a certain version, or OS X only supports 3.2, or all kinds of other things. Overall, I'm just confused as to how much support there is for various versions and which version to learn/use.

    I'm also looking for learning resources. My search results thus far have pointed me to the OpenGL SuperBible. The 4th edition has great reviews on Amazon, but it teaches 2.1. The 5th edition teaches 3.3, and there are a couple of things in the reviews that mention the 4th edition is better and that the 5th edition doesn't properly teach the new features, or something? Basically, even within learning material I'm seeing discrepancies, and I just don't even know where to start. From what I understand, 3.x started a whole new way of doing things, and I've read from various articles and reviews that you want to "stay away from deprecated features like glBegin(), glEnd()", yet a lot of books and tutorials I've seen use that method. I've seen people saying that, basically, the new way of doing stuff is more complicated, yet the old way is bad.

    Just a side note: personally, I know I still have a lot to learn beforehand, but I'm interested in tessellation, so I guess that factors into it as well because, as far as I understand, that's only in 4.x? (Just btw, my desktop supports 4.2.)

    Read the article

  • Why is Java the lingua franca at so many institutions?

    - by Billy ONeal
    EDIT: This question at first seems to be bashing Java, and I guess at this point it is a bit. However, the bigger point I am trying to make is why any one single language is chosen as the one end-all-be-all solution to all problems. Java happens to be the one that's used, so that's the one I had to beat on here, but I'm not intentionally ripping Java a new one :)

    I don't like Java in most academic settings. I'm not saying the language itself is bad -- it has several extremely desirable aspects, most importantly the ability to run without recompilation on most any platform. Nothing wrong with using the language for Your Next App ^TM. (Not something I would personally do, but that's more because I have less experience with it, rather than its design being poor.)

    I think it is a waste that high-level CS courses are taught using Java as a language. Too many of my co-students cannot program worth a damn, because they don't know how to work in a non-garbage-collected world. They don't fundamentally understand the machines they are programming for. When someone can work outside of a garbage-collected world, they can work inside of one, but not vice versa. GC is a tool, not a crutch. But the way it is used to teach computer science students is as a crutch.

    Computer science should not teach an entire suite of courses tailored to a single language. Students leave with the idea that all good design is idiomatic Java design, and that Object Oriented Design is the ONE TRUE WAY THAT IS THE ONLY WAY THINGS CAN BE DONE. Other languages, at least one of them not being a garbage-collected language, should be used in teaching, in order to give the graduate a better understanding of the machines. It is an embarrassment that somebody with a PhD in CS from a respected institution cannot program their way out of a paper bag. What's worse is that when I talk to those CS professors who actually do understand how things operate, they share feelings like this: that we're doing a disservice to our students by doing everything in Java. (Note that the above would be the same if I replaced Java with any other language; generally, using a single language is the problem, not Java itself.)

    In total, I feel I can no longer respect any kind of degree at all when I can't see those around me able to program their way out of fizzbuzz problems. Why/how did it get to be this way?

    Read the article

  • Should I continue my self-taught coding practice or learn how to do coding professionally?

    - by G1i1ch
    Lately I've been getting professional work, hanging out with other programmers, and making friends in the industry. The only thing is I'm 100% self-taught. It's caused my style to deviate extremely from the style of those that are properly trained. It's the techniques and organization of my code that's different. It's a mixture of several things I do.

    I tend to blend several programming paradigms together, like functional and OO. I lean to the functional side more than OO, but I see the use of OO when something would make more sense as an abstract entity, like a game object. Next, I also go the simple route when doing something; in contrast, it seems like sometimes the code I see from professional programmers is complicated for the sake of it! I use lots of closures. And lastly, I'm not the best commenter. I find it easier just to read through my code than to read the comments, and in most cases I just end up reading the code even if there are comments. Plus I've been told that, because of how simply I write my code, it's very easy to read.

    I hear professionally trained programmers go on and on about things like unit tests -- something I've never used before, so I haven't even the faintest idea of what they are or how they work. Their code also has lots and lots of underscores "_", which aren't really to my taste. Most of the techniques I use are straight from me, or from a few books I've read. I don't know anything about MVC; I've heard a lot about it, though, with things like backbone.js. I think it's a way to organize an application, but it just confuses me because by now I've made my own organizational structures.

    It's a bit of a pain. I can't use template applications at all when learning something new, like with Ubuntu's Quickly. I have trouble understanding code that I can tell is from someone trained. Complete OO programming really leaves a bad taste in my mouth, yet that seems to be what EVERYONE else is strictly using. It's left me not that confident in the look of my code, and wondering whether I'll cause sparks when joining a company or maybe contributing to open source projects. In fact, I'm rather scared of the fact that people will eventually be checking out my code.

    Is this just something normal any programmer goes through, or should I really look to change up my techniques?
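    For reference, a unit test is just a small piece of code that checks one piece of your own code in isolation and fails loudly if the expectation doesn't hold. A minimal, hypothetical sketch in C# using the xUnit framework (the Calculator class is made up for the example):

        using Xunit;

        // Hypothetical class under test, made up for the example.
        public static class Calculator
        {
            public static int Add(int a, int b)
            {
                return a + b;
            }
        }

        public class CalculatorTests
        {
            [Fact]
            public void Add_ReturnsSumOfItsArguments()
            {
                // Act
                int result = Calculator.Add(2, 3);

                // Assert: the test fails if this doesn't hold.
                Assert.Equal(5, result);
            }
        }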

    Read the article

< Previous Page | 240 241 242 243 244 245 246 247 248 249 250 251  | Next Page >