Search Results

Search found 908 results on 37 pages for 'the worst shady'.

Page 27/37

  • The Java Community Process: What's Broken and How to Fix It

    - by Tori Wieldt
    In a panel discussion today at TheServerSide Java Symposium, Patrick Curran, Head of the Java Community Process, James Gosling, and Reza Rahman, member of the Java EE 6 and EJB 3.1 expert groups, discussed the state of the JCP. Moderated by Cameron McKenzie, Editor of TheServerSide.com, they discussed what's wrong with the JCP and ways to fix it.

    What's wrong with the JCP? Reza Rahman was quite supportive of the JCP. "I work as a consultant, and it's much better than getting a decision made at a large company," Reza commented. He gave the JCP "five stars" and explained that, as an individual, he was able to have an impact on things that mattered to him. Cameron asked, "Now, all these JCP problems came after Oracle acquired Sun, right?" The crowd had a good laugh, and the panel all agreed that many of the JCP's problems existed under Sun. How is the JCP handled differently under Oracle than under Sun? "Pretty similar," said James. Oracle "tends more towards practicality," said Reza. "I'm glad to see things moving again; we've got several new JSRs filed," Patrick commented.

    How to fix it? They all agreed greater transparency is a top issue. Without it, people assume sinister behavior whether it's there or not. Patrick said that spec leads are currently "encouraged" to be transparent, and the JCP office is planning to submit JSRs to change the JCP process so that transparency is mandated, both for mailing lists and for issue tracking. Shining a light on problems is the best way to fix them. Reza said the biggest problem is lack of participation from the community. If more people are involved, a lot of the problems go away. "Developers are too nonchalant. They should realize what happens in the JCP has a direct impact on their career, and they need to get involved," Reza commented.

    Get involved! During Q&A, someone asked how a developer could get involved. They answered: pick a JSR you are interested in and follow it. To start, you could read an article about the JSR and comment on the article (expert group members do read the comments). Or read the spec, discuss it with others, and post a blog about it. Read the expert group proceedings. Join the JCP (free for individuals). Open source projects have code that you can download and play with; download it and provide feedback. Patrick mentioned that the JCP really wants more participation. "One way we are working on it is that we are encouraging JUGs to join the JCP as a group, and that makes all members of the JUG JCP members," Patrick said. They commented that most spec leads are desperate for feedback. "And please get involved BEFORE the spec is finalized!" James declared. Someone from the audience said it's hard to put valuable time into something before it's baked. Patrick explained that Proposed Final Draft (PFD) is the point in the JCP process when the spec is mature enough to review but not yet finalized. The panel agreed the worst thing that could happen is for most people in the Java community to just complain about the JCP without getting involved. Developer Sumit Goyal, a conference attendee, thought it was a healthy discussion. "I got insights into how JSRs are worked on and finalized," he said.

    Key links:
    • The Java Community Process website: http://jcp.org/en/home/index
    • Article: A Conversation with JCP Chair Patrick Curran
    • Oracle Technology Network: http://www.oracle.com/technetwork/java/index.html
    • TheServerSide Java Symposium: http://javasymposium.techtarget.com/

    Read the article

  • Using Live Data in Database Development Work

    - by Phil Factor
    Guest editorial for the Simple-Talk newsletter, in which Phil Factor reacts with some exasperation on coming across a report that a majority of companies are still using financial and personal data for both developing and testing database applications.

    If you routinely test your development work using real production data that contains personal or financial information, you are probably being irresponsible, and at worst risking a heavy financial penalty for your company. Surprisingly, over 80% of financial companies still do this. Plenty of data breaches and fraud have resulted from the use of real data for testing, and a data breach is a nightmare for any organisation that suffers one. The cost of each data breach averages out at around $7.2 million in the US (£1.9 million in the UK) in notification, escalation, credit monitoring, fines, litigation, legal costs, and lost business due to customer churn. 70% of data breaches originate from within the organisation. Real data can be exploited in a number of ways for malicious or criminal purposes. It isn't just the obvious use of items such as name and address, date of birth, social security number, and credit card and bank account numbers: data can be exploited in many subtle ways, so there are excellent reasons to give a high priority to the detection and prevention of data breaches. You'll never successfully guess all the ways that real data can be exploited maliciously, or the ease with which it can be accessed.

    It would be silly to argue that developers never need access to a copy of the database containing live data. Developers sometimes need to track a bug that can only be replicated on the data from the live database. However, it has to be done in a very restrictive harness. The law makes no distinction between development and production databases when a data breach occurs, so the data has to be held with all appropriate security measures in place. In Europe, the use of personal data for testing requires the explicit consent of the people whose data is being held. There are federal standards such as GLBA, PCI DSS and HIPAA, and most US states have privacy legislation. The task of ensuring compliance and tight security in such circumstances is an expensive and time-consuming overhead, and the developer is likely to face investigation if a data breach occurs, even if the company manages to stay in business.

    Ironically, the use of copies of live data isn't usually the most effective way to develop or test your database. Data is usually time-specific and isn't usually current by the time it is used for testing. Existing data doesn't help much for new functionality, and every time the data is refreshed from production, any test data is likely to be overwritten. Also, it is not always going to test all the 'edge' conditions that are likely to flush out bugs. You still have the task of simulating the dynamics of actual usage of the database, and here you have no alternative to creating 'spoofed' data. Because of the complexities of relational data, there once was no realistic alternative to developing and testing with live data. However, this is no longer the case: real data can be obfuscated, or test data can be created entirely from scratch. The latter used to be impractical, but there are now plenty of third-party tools to choose from. The process of obfuscation isn't risk-free: it must access the live data, and its success has to be carefully monitored.

    Database data security isn't an exciting topic to you or me, but to a hacker it can be an all-consuming obsession, especially if there is financial or political gain involved. This is not the sort of adversary one would wish for, and it is far better to accept, and work with, the security restrictions that exist for using live data in database development work, especially when the tools exist to create large, realistic database test data that can be better for several aspects of testing.
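    To make the "created entirely from scratch" option concrete, here is a minimal C# sketch of the idea. Everything in it is hypothetical - the Customers table, its columns, and the value ranges are invented for illustration, and the third-party tools mentioned above do far more (foreign keys, realistic value distributions, volume) - but it shows the essential point: every generated row is plausible, and none of it derives from live data.

        using System;
        using System.Collections.Generic;

        class SpoofedDataGenerator
        {
            static readonly string[] FirstNames = { "Alice", "Bob", "Carol", "Dave" };
            static readonly string[] LastNames = { "Smith", "Jones", "Patel", "Kim" };
            static readonly Random Rng = new Random(42); // fixed seed so test runs are repeatable

            // Build 'count' fake customer rows; no value here derives from live data.
            static IEnumerable<string> FakeCustomerRows(int count)
            {
                for (var i = 1; i <= count; i++)
                {
                    var first = FirstNames[Rng.Next(FirstNames.Length)];
                    var last = LastNames[Rng.Next(LastNames.Length)];
                    var dob = new DateTime(1950, 1, 1).AddDays(Rng.Next(18250)); // random date of birth
                    yield return String.Format(
                        "INSERT INTO Customers (FirstName, LastName, AccountNumber, DateOfBirth) " +
                        "VALUES ('{0}', '{1}', '{2}', '{3:yyyy-MM-dd}')",
                        first, last,
                        Rng.Next(10000000, 99999999), // fictitious account number, random digits
                        dob);
                }
            }

            static void Main()
            {
                foreach (var row in FakeCustomerRows(10))
                    Console.WriteLine(row);
            }
        }

    A fixed random seed makes the generated data set repeatable, which is exactly what you want when test assertions must produce the same answer on every build.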

    Read the article

  • The SmartAssembly Rearchitecture

    - by Simon Cooper
    You may have noticed that not a lot has happened to SmartAssembly in the past few months. However, the team has been very busy behind the scenes, working on an entirely new version: SmartAssembly 6.5.

    Over the past few releases of SmartAssembly, the team had come to the realisation that the current 'architecture' - grown organically over the years, way before Red Gate bought it, from a simple name obfuscator into a full-featured obfuscator and assembly instrumentation tool - was simply not up to the task. Not for what we wanted to do with it at the time, and not for what we have planned for the future. Not only was it not up to what we wanted it to do, but it was severely limiting our development capabilities: long-standing bugs in the root architecture that couldn't be fixed, some rather...interesting...design decisions, and convoluted logic that increased the complexity of any bugfix or new feature tenfold. So, we set out to fix this. Earlier this year, a new engine was written on which SmartAssembly would be based. Over the following few months, each feature was ported over to the new engine and extensively tested by our existing unit and integration tests. The engine was linked into the existing UI (no easy task, due to the tight coupling between the UI and the old engine), and existing Red Gate products were tested on the new SmartAssembly to ensure the new engine acted in the same way. The result is SmartAssembly 6.5.

    The risks of a rearchitecture: Are there risks to rearchitecting a product like SmartAssembly? Of course. There was a lot of undocumented behaviour in the old engine, and as part of the rearchitecture we had to find this behaviour, define it, and document it. In the process we found some behaviour of the old engine that simply did not make sense; hence the changes in pruning & obfuscation behaviour in the release notes. We had to find, document, and re-implement all the special edge cases. There was a chance that these special cases would not be found until near the end of the project, when everything was functionally complete and interacting together. By that stage, it would be hard to go back and change anything without a whole lot of extra work, delaying the release by months. We always knew this was a possibility; our initial estimate of the time required was '4 months, ± 4 months', and that included various mitigation strategies to reduce the likelihood of these issues being found right at the end. Fortunately, this worst case did not happen.

    The rearchitecture did produce some benefits, though. As well as numerous bug fixes that we could not make any other way, we've added logging that lets you find out exactly why a particular field or property wasn't pruned or obfuscated. There's a new command line interface, we've tested it with WP7.1 and Silverlight 5, and we've added a new option to error reporting that improves the performance of instrumented apps by ~10%, at the cost of inaccurate line numbers in reports.

    So, what differences will you see? Largely none: SmartAssembly 6.5 produces the same output as SmartAssembly 6.2. The performance of 6.5 will be much faster for some users, and generally the same as 6.2 for the rest. If you've encountered a bug with previous versions of SmartAssembly, I encourage you to try 6.5, as it has most likely been fixed in the rearchitecture. If you encounter a bug with 6.5, please do tell us; we'll be doing another release quite soon, so we'll aim to fix any issues caused by 6.5 in that release.

    Most importantly, the new architecture finally allows us to implement some Big Things with SmartAssembly that we've been planning for many months; these will fundamentally change how you build, release and monitor your application. Stay tuned for further updates!

    Read the article

  • Strange Happenings

    - by MOSSLover
    There are weeks we go about our lives thinking nothing is going to change, nothing will happen. Then there are other weeks when a billion things happen at once. Friday started off very weird for me. I flew into Atlanta and met some cool people for another SharePoint event, and had some good conversations. Then Saturday hit, and my virtual machine bombed in my presentation after the auto-updater ran. I ended up writing code on the board and describing everything in Notepad. As presentations go, it was the best and the worst presentation all wrapped into one. The next day I was in Baltimore and hung out with my aunt, which was relatively uneventful and great. Then Monday hit: half my presentations failed or succeeded, and when my screen froze I started describing the code. I was on top of my game until Monday night. On top of the world. I'm exhausted, I get into Raleigh, and one of the craziest stories of my life happens.

    My boss has been renting cars through Priceline, and this week I got a different company than in the other weeks. The company gives me a Ford Focus, and I plug the coordinates of where I want to go into my iPhone. I head out and then I get to the destination hotel (or so I thought). I go inside; it's the wrong hotel, and the right one is a few miles away. I walk outside, hop into the car, and it sounds like a gunshot. Nothing is starting... Am I doing something wrong? No, I'm not; the car is completely dead in the water. I call the rental car facility and they tell me to call roadside assistance, as they are closing for the night. Roadside says they can't give me a new car, but they can get me a jump; then I have to take it up with the facility. They send me a tow truck to give me a jump, and the guy can't jump the car. He tells me this vehicle was towed about an hour ago, and shows me a copy of the slip from when he towed it. We also notice the rental car company left one of their price-scanning guns in the vehicle. I call up roadside again, and now they are interested in getting me a car, because I need to be onsite tomorrow. They get the manager of the facility on the phone; he apologizes profusely and says he'll be there in 10 minutes. About 30 minutes pass, and he and another guy show up with a Ford Escape with leather interior. At this point I hand him the gun, tell him someone left it in the vehicle, and say that I'm not so happy with them. I ask them to comp my rental; they can't, due to Priceline, but if I call him again this week he can get me a voucher. It's about 2 am by the time I'm ready to get to the hotel, and I don't make it in the next morning until 10 am.

    I would say this was a crazy week: all forms of technology are trying to tell me something. What, I have no idea, but we'll see the outcome soon. I feel so weird; tons of change is about to happen. I don't know if it's good or bad. I think this week is some form of omen.

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-26

    - by Bob Rhubart
    Software Architecture for High Availability in the Cloud | Brian Jimerson
    Brian Jimerson looks at the paradigm shift from machine-based architectures to cloud-based architectures when designing for fault tolerance, and at how enterprise applications need to be engineered to ensure the highest level of availability in the cloud.

    SOA, Cloud & Service Technology Symposium 2012 London - Special Oracle Discount
    Registration is now open for one of the premier SOA, Cloud, and Service Technology events. Once again, the Oracle community is well represented in the session schedule. And now you can save on registration with a special Oracle discount code.

    Progress 4GL and DB to Oracle and cloud | Tom Laszewski
    "Getting from client/server-based 4GLs and databases, where the 4GL is tightly linked to the database, to Oracle and the cloud is not easy," says cloud migration expert Tom Laszewski. "The least risky and expensive option...is to use the Progress OpenEdge DataServer for Oracle."

    Embrace 'big data' now or fall behind the competition, analyst warns | TechTarget
    TechTarget's Mark Brunelli's story says, in essence, that Big Data is not your father's Business Intelligence.

    Calculating the Size (in Bytes and MB) of an Oracle Coherence Cache | Ricardo Ferreira
    Ferreira illustrates a programmatic way to use the Oracle Coherence API to calculate the total size of a specific cache that resides in the data grid.

    WebCenter Portal Tutorial Part 7: Integrating Discussions and Link service | Yannick Ongena
    The latest chapter in Oracle ACE Yannick Ongena's ongoing series.

    How to Setup JDeveloper workspace for ADF Fusion Applications to run Business Component Tester? | Jack Desai
    Helpful technical tips from yet another member of the Oracle Fusion Middleware Architecture Team.

    Big Data for the Enterprise; Software Architecture for High Availability in the Cloud; Why Cloud Computing is a Paradigm Shift - And Why It Isn't
    This week on the OTN Solution Architect Homepage, along with an updated events list and this week's list of selected community blog posts.

    Worst Practices for Big Data | Dain Hansen
    Dain Hansen shares some insight on what NOT to do if you want to capitalize on Big Data.

    Free Virtual Developer Day - Oracle Fusion Development | Grant Ronald
    "The online conference will include seminars, hands-on labs and live chats with our technical staff, including me!" says Grant Ronald. "And the best bit: it doesn't cost you a single penny. It's free and available right on your desktop."

    Penguin is Getting Ready for Oracle OpenWorld 2012 | Zeynep Koch
    Linux fan? Check out Zeynep Koch's post for a list of Linux-based sessions at Oracle OpenWorld 2012 in San Francisco.

    Amazon Web Services (AWS) Autoscaling | Frank Munz
    "Autoscaling on AWS can only be configured with lengthy commands from the command line, but not from the web-based AWS console," says Frank Munz. "Getting all the parameters right can be tricky." He demonstrates one easy example in this video.

    Oracle Fusion Applications Design Patterns Now Available For Developers | Ultan O'Broin
    "These Oracle Fusion Applications UX Design Patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight," says O'Broin.

    How Much Data Is Created Every Minute? [INFOGRAPHIC] | Mashable
    Explaining what the "Big" in Big Data really means -- and it's more than a little mind-boggling.

    Thought for the Day: "Real, though miniature, Turing Tests are happening all the time, every day, whenever a person puts up with stupid computer software." — Jaron Lanier (Source: SoftwareQuotes.com)

    Read the article

  • What Counts For A DBA: ESP

    - by Louis Davidson
    Now I don’t want to get religious here, and I’m not going to, but what I’m going to describe in this ‘What Counts for a DBA’ installment sometimes feels like magic. Often I will spend hours thinking about the solution to a design issue or coding problem, working diligently to come up with a solution, and then finally just give up with the feeling that I’m not even qualified to be a data entry clerk, much less a data architect. At this point I often take a walk (or sometimes a nap), and then it hits me: I realize that I have the answer just sitting in my brain, ready to implement. This phenomenon is not limited to walks either; it can happen almost any time after I stop obsessing about a problem. I call this phenomenon ESP (or Extra-Sensory Programming). Another term for this could be ‘sleeping on it’, and while the idiom tends to mean letting time pass to actively think about a problem, sleeping on a problem also lets you relax and let your brain do the work.

    I first noticed this back in my college days, when I would play video games for hours on end. We would get stuck deep in some dungeon, unable to find a way out, playing for days on end until we were beaten-down tired. Once we gave up and walked away, the solution would usually be there waiting for one of us when we came back to play the next day. Sometimes it would come in the form of a dream, and sometimes the problem would simply be easy to solve when we started to play again. While it worked great for video games, it never occurred when I studied English Literature for hours on end, or even when I worked the same sort of frustrating hours attempting to solve a homework problem in Calculus. I believe the difference was that I was passionate about the video game, and certainly far less so about homework where people used the word “thou” instead of “you”, or x to represent a number.

    This phenomenon occurs somewhat more often in my current work as a professional data programmer, because I am very passionate about SQL and love those aspects of my career choice. Every day that I get to draw a new data model to solve a customer issue, or write a complex SELECT statement to ferret out the answer to a complex data question, is a great day. I hope it is the same for any reader of this blog. But, unfortunately, while the day as a whole is great, a heck of a lot of noise is generated in work life. There are the typical project deadlines, along with the requisite project manager sitting on your shoulders, shouting slogans to try to make you go faster. Add in office politics and the occasional family issues that permeate the mind, and you lose the ability to think deeply about any problem, not to mention occasionally forgetting your own name. These office realities, coupled with a difficult SQL problem staring at you from your widescreen monitor, will slowly suck the life force out of your body, making it seem impossible to solve the problem.

    This is when the walk starts; or a nap. Maybe you hide from the madness under your desk, like George Costanza hides from Steinbrenner on Seinfeld. Forget about the problem. Free your mind from the insanity of the problem and your surroundings. Then let the training and education deep in your brain take over and see if it will passively do the rest for you. If you don’t end up with a solution, the worst-case scenario is that you got a bit of exercise or rest, and you won’t have heard the phrase “better is the enemy of good enough” even once…which certainly will do your brain some good. Once you stop whipping your brain for information, inspiration may just strike, and instead of a humdrum solution you may find one you hadn’t even considered, almost magically. So, my beloved manager, next time you have an urgent deadline and you come across me taking a nap, creep away quietly, because I’m working, doing some extra-sensory programming.

    Read the article

  • Modernizr Rocks HTML5

    - by Laila
    HTML5 is a moving target. At the moment, we don't know what will be in future versions. In most circumstances, this really matters to the developer. When you're using Adobe AIR, you can be reasonably sure what works, what is there, and what isn't, since you have a version of the browser built in. With Metro, you can assume that you're going to be using at least IE 10. If, however, you are using HTML5 in a web application, then you are going to rely heavily on feature detection. Feature detection is a collection of techniques that tell you, via JavaScript, whether the current browser has a given feature natively implemented or not. Feature detection isn't just there for the esoteric stuff such as geolocation, progress bars, <canvas> support, the new <input> types, audio, video, web workers or storage; it is required even for semantic markup, since old browsers make a pig's ear of rendering it. Feature detection can't rely on just reading the browser version and inferring from that what works. Instead, you must use JavaScript to check that an HTML5 feature is there before using it. The problem with relying on the user-agent is that it takes a lot of historical data to work out what version does what, and, anyway, the user-agent can be, and sometimes is, spoofed.

    The open-source library Modernizr is just about the most essential JavaScript library for anyone using HTML5, because it provides APIs to test for most of the CSS3 and HTML5 features before you use them, and it is intelligent enough to alter semantic markup into 'legacy' markup, using shims on page-load, for old browsers. It also allows you to check which video codecs are installed for playing video, and it provides media queries and conditional resource-loading (formerly YepNope.js). Generally, Modernizr gives you the choice of what to do about browsers that don't support the feature that you want. Often, the best choice is graceful degradation, but the resource-loading feature allows you to dynamically load JavaScript shims, called 'polyfills', to replace the standard API for missing or defective HTML5 functionality. As the Modernizr site says, 'Yes, not only can you use HTML5 today, but you can use it in the past, too!'

    The evolutionary progress of HTML5 requires a more defensive style of JavaScript programming, in which the programmer adopts a mindset of fearing the worst (IE 6) rather than assuming the best, whilst exploiting as many of the new HTML features as possible for the requirements of the site or HTML application. Why would anyone want the distraction of developing their own techniques to do this when Modernizr exists to do it for you?

    Laila

    Read the article

  • How to solve conflicts with another programmer..

    - by Tio
    Hi all. I've read this question, but I think it doesn't really apply in my case. I started to work at a new company about 2 months ago in the position of senior web developer. There was already one programmer there; my position is above his, but I'm not his boss and I don't tell him what to do. Since the day I started at the company, I managed to implement a kind of test server, which he refuses to use; a project management tool, which he also refuses to use; and I'm in the process of implementing version control using Mercurial (damn Mercurial, which is giving me so many headaches), which he is going to use.

    He is a nice guy, but just the other day we had a big discussion about "best practices" and "coding standards". For me, it's absolutely necessary to have these two things at the place I'm currently working; otherwise it's not going to work. This discussion basically revolved around using short tags and the echo shortcut, and how we shouldn't use them anymore (because I sometimes use short tags). This went on for about 15 minutes, until I finally dropped the subject because I had work to do. And of course he didn't budge even a millimeter: he's continuing to use short tags and the echo shortcut, and he doesn't even care what I think. When I mentioned that we are a team, he told me: "We are not a team; you work on your projects and I work on mine." Let's just say the switch in my brain flipped, I raised my voice, and I told him he was going nuts. This was the most improper way to deal with it, I know, but there are certain things that can't be said to me.

    The question here is: how do I deal with this? I want to implement more changes to our workflow, and I know it's going to be a pain, with him always complaining and saying things like this. Our boss is going to intervene in a few days; I talked to him today, and the other programmer sent him an email complaining on the day we had the discussion. Just to clarify: when I talk about implementing changes, I don't just show up at work the next day with a sheet of paper and say, "This is how we are doing things from now on, and there is no discussion." For example, when I was trying to implement the project management tool, I took the time to talk to everybody who was going to be involved in it, to see what they thought about it. Everyone was positive except him; he responded that it was just a means to control us even more.

    Does anyone have any idea how to deal with this? PS: Truth be told, I didn't get off to the best start with him. On my 4th day at the company, I found a really bad piece of code in our custom CMS, one of those things I would only expect to find in code produced by a programmer with only 1 month of training, and I talked to my boss about it. He showed up just as I saw that piece of code, saw my face and asked about it, so I told him. I know the worst thing you can say to a programmer is that their code is awful, but I've already learned that my code isn't the best in the world, so I take criticism in a completely different way now. Maybe he doesn't.

    Read the article

  • 12.04 making BCM4313 card work with aircrack-ng?

    - by Charles Forest
    I'm a real Linux noob; I just started using it this month, and until now I had no issues. Now I'm trying to set up aircrack-ng on my laptop, but it seems it's using the worst card possible (or almost). There is a ton of tutorials on this card (it seems to be hell to set up). I have tried some, but I ended up uninstalling my drivers, messing up my desktop, and ending with no more "X" to close my windows (I have no clue how I got there). I just reinstalled my Linux (it took me 2 hours to set everything up again), but now I'm a bit scared to try tutorials at random again. Right now it says the driver in use is wl, which is not the one I want (AFAIK it's not supported). I'm not sure what kind of information is needed, but here's what I think could be useful.

    lspci -knn

    00:00.0 Host bridge [0600]: Intel Corporation 2nd Generation Core Processor Family DRAM Controller [8086:0104] (rev 09)
        Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5]
        Kernel driver in use: agpgart-intel
    00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port [8086:0101] (rev 09)
        Kernel driver in use: pcieport
        Kernel modules: shpchp
    00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0116] (rev 09)
        Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5]
        Kernel driver in use: i915
        Kernel modules: i915
    00:16.0 Communication controller [0780]: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 [8086:1c3a] (rev 04)
        Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5]
        Kernel driver in use: mei
        Kernel modules: mei
    00:1a.0 USB controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 [8086:1c2d] (rev 04)
        Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5]
        Kernel driver in use: ehci_hcd
    00:1b.0 Audio device [0403]: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller [8086:1c20] (rev 04)
        Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5]
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd-hda-intel
    00:1c.0 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 [8086:1c10] (rev b4)
        Kernel driver in use: pcieport
        Kernel modules: shpchp
    00:1c.3 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 [8086:1c16] (rev b4)
        Kernel driver in use: pcieport
        Kernel modules: shpchp
    00:1c.4 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 [8086:1c18] (rev b4)
        Kernel driver in use: pcieport
        Kernel modules: shpchp
    00:1d.0 USB controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 [8086:1c26] (rev 04)
        Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5]
        Kernel driver in use: ehci_hcd
    00:1f.0 ISA bridge [0601]: Intel Corporation HM65 Express Chipset Family LPC Controller [8086:1c49] (rev 04)
        Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5]
        Kernel modules: iTCO_wdt
    00:1f.2 SATA controller [0106]: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller [8086:1c03] (rev 04)
        Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5]
        Kernel driver in use: ahci
    00:1f.3 SMBus [0c05]: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller [8086:1c22] (rev 04)
        Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5]
        Kernel modules: i2c-i801
    01:00.0 3D controller [0302]: NVIDIA Corporation GF108 [GeForce GT 540M] [10de:0df4] (rev a1)
        Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5]
        Kernel driver in use: nouveau
        Kernel modules: nouveau, nvidiafb

    WIRELESS CARD

    02:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller [14e4:4727] (rev 01)
        Subsystem: Wistron NeWeb Corp. Device [185f:051a]
        Kernel driver in use: wl
        Kernel modules: wl, bcma, brcmsmac

    REST

    03:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168] (rev 06)
        Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5]
        Kernel driver in use: r8169
        Kernel modules: r8169
    04:00.0 USB controller [0c03]: NEC Corporation uPD720200 USB 3.0 Host Controller [1033:0194] (rev 04)
        Subsystem: Samsung Electronics Co Ltd Device [144d:c0a5]
        Kernel driver in use: xhci_hcd

    Also, if I'm "screwed" with my hardware, just tell me.

    Read the article

  • The Future of Air Travel: Intelligence and Automation

    - by BobEvans
    Remember those white-knuckle flights through stormy weather, where unexpected plunges in altitude result in near-permanent relocations of major internal organs? Perhaps there’s a better way, according to a recent Wall Street Journal article: “Pilots of a Honeywell International Inc. test plane stayed on their initial flight path, relying on the company's latest onboard radar technology to steer through the worst of the weather. The specially outfitted Boeing 757 barely shuddered as it gingerly skirted some of the most ferocious storm cells over Fort Walton Beach and then climbed above the rest in zero visibility.” Or how about the multifaceted check-in process, which might not wreak havoc on liver location but nevertheless makes you wonder if you’ve been trapped in some sort of covert psychological-stress test? Another WSJ article, called “The Self-Service Airport,” says there’s reason for hope there as well: “Airlines are laying the groundwork for the next big step in the airport experience: a trip from the curb to the plane without interacting with a single airline employee. At the airport of the near future, ‘your first interaction could be with a flight attendant,’ said Ben Minicucci, chief operating officer of Alaska Airlines, a unit of Alaska Air Group Inc.”

    And in the topsy-turvy world of air travel, it’s not just the passengers who’ve been experiencing bumpy rides: the airlines themselves are grappling with a range of challenges - some beyond their control, some not - that make profitability increasingly elusive in spite of heavy demand for their services. A recent piece in The Economist illustrates one of the mega-challenges confronting the airline industry via a striking pair of contrasting and very large numbers: while the airlines pay $7 billion per year to third-party computerized reservation services, the airlines themselves earn a collective profit of only $3 billion per year. In that context, the anecdotes above point unmistakably to the future that airlines must pursue if they hope to manage some of the factors outside of their control (e.g., weather) as well as all of those within their control (operating expenses, end-to-end visibility, safety, load optimization, etc.): more intelligence, more automation, more interconnectedness, and more real-time awareness of every facet of their operations.

    Those moves will benefit both passengers and the air carriers, says the WSJ piece on the self-service airport: “Airlines say the advanced technology will quicken the airport experience for seasoned travelers - shaving a minute or two from the checked-baggage process alone - while freeing airline employees to focus on fliers with questions. ‘It's more about throughput with the resources you have than getting rid of humans,’ said Andrew O'Connor, director of airport solutions at Geneva-based airline IT provider SITA.”

    Oracle is attempting to help airlines gain control over these challenges by blending together a range of its technologies into a solution called the Oracle Airline Data Model, which suggests the following steps:

    • To retain and grow their customer base, airlines need to focus on the customer experience.
    • To personalize and differentiate the customer experience, airlines need to effectively manage their passenger data.
    • The Oracle Airline Data Model can help airlines jump-start their customer-experience initiatives by consolidating passenger data into a customer data hub that drives real-time business intelligence and strategic customer insight.
    • Oracle’s Airline Data Model brings together multiple types of data that can jump-start your data-warehousing project with rich out-of-the-box functionality.
    • Oracle’s Intelligent Warehouse for Airlines brings together the powerful capabilities of Oracle Exadata and the Oracle Airline Data Model to give you real-time strategic insights into passenger demand, revenues, sales channels and your flight network.

    The airline industry aside, the bullet points above offer a broad strategic outline for just about any industry, because the customer experience is becoming pre-eminent in each, and there is simply no way to deliver world-class customer experiences unless a company can capture, manage, and analyze all of the relevant data in real time. I’ll leave you with two thoughts from the WSJ article about the new in-flight radar system from Honeywell. First, studies show that a single episode of serious turbulence can rack up $150,000 in additional costs for an airline, so it certainly behooves the carriers to gain the intelligence to avoid turbulence as much as possible. And second, it’s back to that top-priority customer-experience thing and the value that ever-increasing levels of intelligence can deliver. As the article says: “In the cabin, reporters watched screens showing the most intense parts of the nearly 10-mile wide storm, which churned some 7,000 feet below, in vibrant red and other colors. The screens also were filled with tiny symbols depicting likely locations of lightning and hail, which can damage planes and wreak havoc on the nerves of white-knuckle flyers.”

    (Bob Evans is senior vice-president, communications, for Oracle.)

    Read the article

  • AI to move custom-shaped spaceships (shape affecting movement behaviour)

    - by kaoD
    I'm designing a networked, turn-based, 3D-6DOF space fleet combat strategy game which relies heavily on ship customization. Let me explain the game a bit, since you need to know a bit about it to set up the question. What I aim for is the ability to create your own fleet of ships with custom shapes and attached modules (propellers, tractor beams...) which would give advantages and disadvantages to each ship, so you have lots of different fleet distributions. E.g., a long ship with two propellers at the side would let the ship spin around that plane easily; bigger ships would move slowly unless you place lots of propellers at the back (therefore spending more "construction" points and energy when moving, and it will only move fast towards that direction). I plan to balance the whole game around this feature.

    The game would revolve around two phases: orders and combat. During the orders phase, you command the different ships. When all players finish the orders phase, the combat phase begins and the ship orders get resolved in real time for some time; then the action pauses and there's a new orders phase.

    The problem comes when I think about player input. To move a ship, you need to turn different propellers on or off if you want to steer, travel forward, brake, rotate in place... These propellers don't have to work at full power, so you can achieve more movement combinations with fewer propellers. I think this approach is a bit boring: the player doesn't want to fiddle with motors or anything, you just want to MOVE and KILL. The way I intend the player to give orders to these ships is by a destination and a rotation, and then the AI would calculate the correct propeller power to achieve that movement and rotation. Propulsion doesn't have to be the same throughout the entire turn calculation (after the orders have been given), so it would be cool if the ships reacted as they move, adjusting the power of the propellers dynamically for their needs, but that may be too hard to implement and it's not really needed for the game to work. In both cases, how would that AI decide which propellers to activate for the best (or at least not worst) trajectory to be achieved?

    I thought about some approaches:

    • Learning AI: The ship types would learn about their movement by trial and error, adjusting their behaviour with more use, and finally becoming "smart". I don't want to get THAT far into AI coding, and I think it could be frustrating for the player (even if you can let it learn without playing).

    • Pre-calculated timestep movement: Upon ship creation, ALL possible movements are calculated for each propeller configuration and power for a given delta-time. Memory intensive, ugly, bad.

    • Pre-calculated trajectories: The same as above, but not for each delta-time but for the whole trajectory, which would then be fitted as closely as possible. Requires a fixed propeller configuration for the whole combat phase, and is still memory intensive, ugly and bad.

    • Continuous brute forcing: The AI continuously checks ALL possible propeller configurations throughout the entire combat phase, precalculates a few timesteps and decides which is the best one based on that. Con: what's good now might not be that good later, and it's too CPU intensive, ugly, and bad too.

    • Single brute forcing: Same as above, but only brute forcing at the beginning of the simulation, so it needs a constant propeller configuration throughout the entire combat phase.

    • Continuous angle check: This is not a full movement method, but maybe a way to discard "stupid" propeller configurations. Given a propeller's current normal vector and the final one, you can approximate the power needed for the propeller based on the angle. You must do this continuously throughout the whole combat phase. I figured this one out recently, so I haven't put much thought into it. A priori, it has the "what's good now might not be that good later" drawback too, and it doesn't care about the other propellers, which may act together to make a better propelling configuration.

    I'm really stuck here. Any ideas?
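    As a rough illustration of the "single brute forcing" option above, here is a minimal C# sketch, since that option is the simplest to reason about. Everything in it is hypothetical (a 2D toy instead of the full 6DOF setup, four fixed thrusters, three discrete power levels): enumerate every power assignment, simulate a few fixed timesteps for each, and keep the assignment whose predicted end position lands closest to the ordered destination.

        using System;
        using System.Collections.Generic;

        // Hypothetical minimal 2D physics state; a real 6DOF sim would also
        // track orientation, angular velocity and torque.
        struct ShipState { public double X, Y, Vx, Vy; }

        class PropellerPlanner
        {
            // Each propeller pushes in a fixed direction; power (0..1) scales the thrust.
            static readonly double[][] Propellers =
            {
                new[] {  1.0,  0.0 },  // aft propeller, pushes forward
                new[] { -1.0,  0.0 },  // bow thruster, pushes backward
                new[] {  0.0,  1.0 },  // port thruster
                new[] {  0.0, -1.0 },  // starboard thruster
            };

            static readonly double[] PowerLevels = { 0.0, 0.5, 1.0 }; // discrete settings

            // Integrate `steps` fixed timesteps under one constant power assignment.
            static ShipState Simulate(ShipState s, double[] powers, int steps, double dt)
            {
                for (var t = 0; t < steps; t++)
                {
                    double ax = 0, ay = 0;
                    for (var p = 0; p < Propellers.Length; p++)
                    {
                        ax += Propellers[p][0] * powers[p];
                        ay += Propellers[p][1] * powers[p];
                    }
                    s.Vx += ax * dt; s.Vy += ay * dt;
                    s.X += s.Vx * dt; s.Y += s.Vy * dt;
                }
                return s; // struct copy: the caller's state is untouched
            }

            // Try every power assignment once; keep whichever ends closest to the target.
            static double[] PlanTowards(ShipState start, double destX, double destY)
            {
                double[] best = null;
                var bestErr = double.MaxValue;
                foreach (var powers in AllAssignments(Propellers.Length, new double[0]))
                {
                    var end = Simulate(start, powers, 10, 0.1);
                    var err = Math.Pow(end.X - destX, 2) + Math.Pow(end.Y - destY, 2);
                    if (err < bestErr) { bestErr = err; best = powers; }
                }
                return best;
            }

            // Recursively enumerate PowerLevels^n assignments (3^4 = 81 here).
            static IEnumerable<double[]> AllAssignments(int n, double[] prefix)
            {
                if (prefix.Length == n) { yield return prefix; yield break; }
                foreach (var level in PowerLevels)
                {
                    var next = new double[prefix.Length + 1];
                    prefix.CopyTo(next, 0);
                    next[prefix.Length] = level;
                    foreach (var a in AllAssignments(n, next)) yield return a;
                }
            }

            static void Main()
            {
                var plan = PlanTowards(new ShipState(), 5.0, 3.0);
                Console.WriteLine("Best power per propeller: " + String.Join(", ", plan));
            }
        }

    The cost grows as powerLevels^propellerCount, which is why the continuous variant looks so CPU-intensive; a pre-filter like the angle check described above could prune most assignments before they are ever simulated.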

    Read the article

  • How to deal with overly aggressive "Link Take Down Demands"?

    - by Eoin
    I've been receiving a large number of emails recently requesting that I clean link spam from my forum. Initially the emails were very polite and professional, and I was happy to remove the links. Recently the emails have gotten very abrasive. Here is a particularly rude example:

    From: [email protected]
    To: [email protected]

    Hi, This is the second time we are reaching out to you regarding your link to our site hxxp://www.company-two.com from hxxp://www.my-forum.com/some-topic-id. We really do need to remove this link. We have to report to Google any link we were unable to remove, and I wouldn't want to have to include your site in the list. Could you please remove our link from this page and any other page on your site? Thank You, Name Changed

    Behind the superficial pleasantries I feel there is some very real maliciousness. Note the email address: "DMCA Violations". I don't see how the DMCA is involved here, except as a phrase which tends to strike fear into many people. Also, the address doesn't match the company being linked to at all. How am I to trust they are truly operating on behalf of company-two when they don't even use one of its email addresses? My email is hidden by PrivacyPost; while that's a service with legitimate uses, I feel it's highly unprofessional for communications between two companies. Then there's the claim "This is the second time...": every email I've received has started like this, but a check of my spam filters has never revealed a first mail. Initially I gave them the benefit of the doubt; by now, though, it's clear this is a cheap ploy to start me off on the defensive. And finally, worst of all: the threat of reporting me to Google if I don't do everything they ask.

    I sent a polite reply asking for more information. I have no idea if the email address was even valid, but I never received any response. Much later I got this follow-up mail:

    From: [email protected]
    To: [email protected]

    Hi, This is the final time we are reaching out to you regarding your link to our site hxxp://www.company-two.com from hxxp://www.my-forum.com/some-topic-id. We will soon be reporting to Google any link we were unable to remove, and currently your site will have to be on the list. Could you please remove our link from this page and any other page on your site? I appreciate your urgent attention to this matter. Thank You, Name Changed

    This time the from address was more personal, though still not obviously connected to the spammed company. Let's be honest: I don't for one second believe that these companies were the victims of a third-party spammer, as they claim. The links in question were generated well over a year ago, and I firmly believe the companies were directly responsible for the spam links, a type of spam that has plagued my forum. Now they have the audacity to demand I spend my time cleaning up their mess, using threats to ensure they get their way. Have recent changes in Google's algorithms meant that all the cash they spent spamming the web has now turned into a liability? If so, I can see why these companies are all of a sudden running scared. Frankly, cleaning up my forum is a good thing, but the threats they are using sicken me.

    So my question here is specifically about the threats: Are they valid, and would such reports to Google destroy my page rankings? And is there a way I can report this abusive behaviour to Google?

    Read the article

  • Documentation and Test Assertions in Databases

    - by Phil Factor
    When I first worked with Sybase/SQL Server, we thought our databases were impressively large, but they were, by today’s standards, pathetically small. We had one script to build the whole database. Every script I ever read was richly annotated; it was more like reading a document. Every table had a comment block, and every line would be commented too. At the end of each routine (e.g. procedure) was a quick integration test, or series of test assertions, to check that nothing in the build was broken. We simply ran the build script, stored in the version control system, and it pulled everything together in a logical sequence that not only created the database objects but pulled in the static data. This worked fine at the scale we had. The advantage was that one could, by reading the source code, reach a rapid understanding of how the database worked and how one could interface with it. The problem was that only one developer at a time could work on the database. It was very easy for a developer to accidentally execute the entire build script rather than the selected section on which he or she was working, thereby cleansing the database of everyone else’s work-in-progress and data.

    It soon became the fashion to work at the object level, so that programmers could check out individual views, tables, functions, constraints and rules and work on them independently. It was then that I noticed the trend to generate the source for the VCS retrospectively from the development server. Tables were worst affected. You can, of course, add or delete a table’s columns and constraints retrospectively, which means that the existing source no longer represents the current object. If, after your development work, you generate the source from the live table, then you get no block or line comments, and the source script is sprinkled with silly square brackets and other confetti, thereby rendering it visually indigestible. Routines, too, were affected. In our system, every routine had a directly attached string of unit tests. A retro-generated routine has no unit tests or test assertions. Yes, one can still commit our test code to the VCS, but it’s a separate module, and teams end up running the whole suite of tests for every individual change, rather than just the tests for that routine, which doesn’t scale for database testing.

    With extended properties, one can get the best of both worlds, and even use them to put blame, praise or annotations into your VCS. It requires a lot of work, though, particularly the script to generate the table. The problem is that there are no conventional names beyond ‘MS_Description’ for the special use of extended properties. This makes it difficult to do splendid things such as ensuring the integrity of the build by running a suite of tests that are actually stored in extended properties within the database, and therefore in the VCS. We have lost the readability of database source code over the years, and largely jettisoned the use of test assertions as part of the database build. This is not unexpected, in view of the increasing complexity of the structure of databases and the number of programmers working on them. There must, surely, be a way of getting them back, but I sometimes wonder if I’m one of very few who miss them.
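    As a small illustration of how extended properties can be read back out for the VCS, here is a hedged C# sketch that lists every object-level extended property in a database - MS_Description, or any custom name one might reserve for test assertions. The connection string is a placeholder, and a real script would also handle column-level properties (minor_id) and schema qualification.

        using System;
        using System.Data.SqlClient;

        class ExtendedPropertyReader
        {
            static void Main()
            {
                // Placeholder connection string - point it at your own dev server.
                var connStr = "Server=.;Database=MyDb;Integrated Security=true";

                // sys.extended_properties holds MS_Description (and any custom
                // names you adopt, e.g. for test assertions) per object.
                const string sql = @"
                    SELECT o.name AS ObjectName, ep.name AS PropertyName,
                           CAST(ep.value AS nvarchar(max)) AS PropertyValue
                    FROM sys.extended_properties ep
                    JOIN sys.objects o ON ep.major_id = o.object_id
                    WHERE ep.class = 1"; // class 1 = object- or column-level properties

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            Console.WriteLine("{0}: {1} = {2}",
                                reader["ObjectName"], reader["PropertyName"],
                                reader["PropertyValue"]);
                }
            }
        }

    The output could be committed alongside the object source, restoring at least some of the annotation that retro-generated scripts lose.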

    Read the article

  • An observation on .NET loops – foreach, for, while, do-while

    It’s very common for .NET programmers to use the “foreach” loop for iterating through collections. The following is my observation from testing a simple scenario on loops: the “for” loop is 30% faster than “foreach”, and the “while” loop is 50% faster than “foreach”. “do-while” is a bit faster than “while”. Someone may feel it makes no difference if I’m iterating only 1000 times in a loop, and this test case covers only simple iteration. According to “data structure” concepts, best and worst cases depend entirely on the data we provide to the algorithm, so we cannot conclude that a “foreach” loop is bad. All I want to say is that we need to be a little cautious even when choosing loops. Example: you might choose quick sort when you want to sort many numbers, while bubble sort may be more effective than quick sort when you want to sort only a few.

    Take a simple scenario: a request to a simple web application fetches 10000 (10K) rows of data and iterates over them for some business logic. Suppose this application is being accessed by 1000 (1K) people simultaneously. In this simple scenario you end up with 10000000 (10 million, or 1 crore) iterations. Below is the test scenario: a simple console application iterating over 100 million records.

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var sw = new Stopwatch();
                    var numbers = GetSomeNumbers();

                    sw.Start();
                    foreach (var item in numbers) { }
                    sw.Stop();
                    Console.WriteLine(String.Format("\"foreach\" took {0} milliseconds", sw.ElapsedMilliseconds));

                    sw.Reset();
                    sw.Start();
                    for (int i = 0; i < numbers.Count; i++) { }
                    sw.Stop();
                    Console.WriteLine(String.Format("\"for\" loop took {0} milliseconds", sw.ElapsedMilliseconds));

                    sw.Reset();
                    sw.Start();
                    var it = 0;
                    while (it++ < numbers.Count) { }
                    sw.Stop();
                    Console.WriteLine(String.Format("\"while\" loop took {0} milliseconds", sw.ElapsedMilliseconds));

                    sw.Reset();
                    sw.Start();
                    var it2 = 0;
                    do { } while (it2++ < numbers.Count);
                    sw.Stop();
                    Console.WriteLine(String.Format("\"do-while\" loop took {0} milliseconds", sw.ElapsedMilliseconds));
                }

                #region Get me 10 crore (100 million) numbers
                private static List<int> GetSomeNumbers()
                {
                    var lstNumbers = new List<int>();
                    var count = 100000000;
                    for (var i = 1; i <= count; i++)
                    {
                        lstNumbers.Add(i);
                    }
                    return lstNumbers;
                }
                #endregion
            }
        }

    In the above example, I was just iterating through 100 million numbers. You can see the time taken to execute the various loops provided in .NET:

        Output
        "foreach" took 1108 milliseconds
        "for" loop took 727 milliseconds
        "while" loop took 596 milliseconds
        "do-while" loop took 594 milliseconds

        Press any key to continue . . .

    So I feel we need to be careful when choosing a looping strategy. Please comment your thoughts.

    Read the article

  • What Counts for a DBA: Skill

    - by drsql
    “Practice makes perfect,” right? Well, not exactly. The reality is that this saying is an untrustworthy aphorism. I discovered this in my “younger” days, when I was a passionate tennis player, practicing and playing 20+ hours a week. No matter what my passion level was, without some serious coaching (and perhaps a change in dietary habits), my skill level was never going to rise to a point where I could make any money at the sport beyond selling tennis balls at a sporting goods store. My game may have improved with all that practice, but I had too many bad practices to overcome. Practice by itself merely reinforces what we know and what we can figure out naturally. The truth is actually closer to the expression used by Vince Lombardi: “Perfect practice makes perfect.”

    So how do you become skilled as a DBA if practice alone isn’t sufficient? Hit the Internet and start searching for SQL training and you can find 100 different sites. There are also hundreds of blogs, magazines, books, and conferences, both onsite and virtual. But then how do you know who is good? Unfortunately, the worst guide can often be the experience level of the writer. Some of the best DBAs are frighteningly young, and some got their start back when databases were stored on stacks of paper with little holes in them.

    As a programmer, is it really so hard to understand normalization? Set-based theory? Query optimization? Indexing and performance tuning? The biggest barrier is often previous knowledge, particularly programming skills cultivated before you get started with SQL. In the world of technology, it is pretty rare for a fresh programmer to gravitate to database programming. Database programming is very unsexy work, because without a UI all you have are a bunch of text strings that you could never impress anyone with. Newbies spend most of their time building UIs or apps with procedural code in C# or VB, scoring obvious, interesting wins. Making matters worse, SQL programming requires mastery of a much different toolset than almost any mainstream programming skill. Instead of controlling everything yourself, most of the really difficult work is done by the internals of the engine (written by other, non-relational programmers…we just can’t get away from them).

    So is there a golden road to achieving a high skill level? Sadly, with tennis, I am pretty sure I’ll never discover it. With programming, however, it seems to boil down to practicing the appropriate techniques for whatever type of programming you are doing. Can a C# programmer build a great database? As long as they don’t treat SQL like C#, absolutely. The same goes for a DBA writing C# code. None of this stuff is rocket science, as long as you understand that different types of programming require different skill sets, and that you as a programmer must recognize the difference between one of the procedural languages and SQL and treat them differently. Skill comes from practicing doing things the right way and making “right” a habit.

    Read the article

  • Cannot get ATI Drivers installed

    - by bittoast67
    I am trying to install the Catalyst driver. The best I can get is a strange resolution problem, and Firefox acts all wonky. The worst I have gotten is low-graphics mode, in which case I just reinstall Ubuntu. I have an HP Pavilion dv7 laptop with a Radeon HD 3200. I plan to try again with a fresh install of Ubuntu 12.04.3, as I have heard it's the most compatible. This is what I have done:

    I have tried the easy way of going to Synaptic and installing the drivers that way: the fglrx package (not the fglrx-updates one). If memory serves, that boots me into low-graphics mode. So, fresh install of Ubuntu, and I tried again. I have done everything a couple of times from this site (http://wiki.cchtml.com/index.php/Ubuntu_Precise_Installation_Guide), following every instruction to a T. That gets me something, such as a lowered fan speed and a much cooler computer, but I also lose most of my resolution, and Displays says it's the best resolution I can get. I also have a very screwy Firefox. Using this method I can see AMD Catalyst Control Center in my Dash (two of them, really: one administrator and one not), but when I try to open it, it says no AMD driver detected. So again, Ubuntu reinstall. I have tried the GUI method with the legacy driver I got from AMD's site. It runs through smoothly, and at the very end, after I exit the installer, it gives me an error. I have also tried various other methods using the terminal, as well as various other drivers (the one from AMD's site and the one suggested in the above link for my graphics card), both to no avail. When I try the method in the link above, I get the super-low resolution and screwy Firefox. I type in fglrxinfo and get a BadRequest error. I have yet to type in fglrxinfo and get anything like what I am supposed to.

    UPDATE: I am now reinstalling Ubuntu 12.04. I tried the above-mentioned link - thank you very much! - just to see, on the previously failed driver attempt, what following the purge commands would do. To no avail: when typing fglrxinfo I still get the BadRequest thing. I will update again after a try with a true fresh install. Thanks again!!

    UPDATE: Alright everyone, still no go. I have done everything word for word in the provided tutorial. I have rebooted my computer, again to a broken resolution, and this is what I get when typing fglrxinfo:

        $ fglrxinfo
        X Error of failed request:  BadRequest (invalid request code or no such operation)
          Major opcode of failed request:  153 (GLX)
          Minor opcode of failed request:  19 (X_GLXQueryServerString)
          Serial number of failed request:  12
          Current serial number in output stream:  12

    I would like to add that when installing the file fglrx_8.970-0ubuntu1_amd64.deb I got this:

        Building initial module for 3.8.0-29-generic
        Error! Bad return status for module build on kernel: 3.8.0-29-generic (x86_64)
        Consult /var/lib/dkms/fglrx/8.970/build/make.log for more information.
        update-initramfs: deferring update (trigger activated)
        Processing triggers for ureadahead ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.8.0-29-generic
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place

    Any ideas? Anyone? I can't for the life of me figure out what I am doing wrong.

    Read the article

  • College Ratings via the Federal Government

    - by user9147039
    A few weeks back you might remember news about a higher-education rating system proposal from the Obama administration. As I've discussed previously, political and stakeholder pressures to improve outcomes and increase transparency are stronger than ever before. The executive-branch proposal is intended to make progress in this area. Quoting from the proposal itself: "The ratings will be based upon such measures as: Access, such as percentage of students receiving Pell grants; Affordability, such as average tuition, scholarships, and loan debt; and Outcomes, such as graduation and transfer rates, graduate earnings, and advanced degrees of college graduates."

    This is going to be quite complex, to say the least. Most notably, higher ed is not monolithic. From community and other 2-year colleges, to small private 4-year colleges, to professional schools, to large public research institutions…the many walks of higher-ed life are, well, many. Designing a ratings system that doesn't wind up with lots of unintended consequences and collateral damage will be difficult. At best, you would end up potentially tarnishing the reputation of institutions that were actually performing well against the metrics and outcome measures that make sense in their "context" of education. At worst, you could spend a lot of time and resources designing a system that loses credibility with its "customers".

    A lot of institutions I work with already have systems like the one described above in place. They are tracking completion rates, completion timeframes, transfers to other institutions, job placement, and salary information. As I talk to these institutions, several constants are worth noting:

    • Deciding which metrics to measure is complicated. While employment and salary data are relatively easy to track, qualitative measures are more difficult. How do you quantify the benefit to someone who studies in a field that may not compensate him or her as well as another, but that provides huge personal fulfillment and reward?

    • The data is available, but the systems to transform the data into actual information that can be used in meaningful ways are not. Too often in higher ed, information is siloed. As such, much of the data that needs to be part of a comprehensive system sits in multiple organizations, oftentimes outside the reach of core IT.

    • Politics and culture are big barriers. One of the areas that my team and I spend a lot of time talking about with higher-ed institutions all over the world is the imperative to optimize for student success. This, like the tracking of students' achievement after graduation, requires a level of organizational capacity that does not currently exist. The primary barrier is the culture of "data islands" in higher ed, and the need for leadership to drive out the divisions between departments, schools, colleges, etc., and institute academy-wide analytics and data-stewardship initiatives that will enable student success.

    • Data quality is a very big issue. So many disparate systems exist (some on-premise, some "in the cloud") that keep data about "persons" using different means to identify them. Establishing a single source of truth about an individual and his or her data is difficult without some type of data-quality policy and tools. Good tools actually exist but are seldom leveraged.

    Don't misunderstand - I think it's a great idea to drive additional transparency and accountability into the system of higher education, and not just at home but globally. Students and parents need access to key data to make informed, responsible choices. The tools exist not only to enable this kind of information to be shared, but to capture the very metrics stakeholders care most about, in a way that makes sense in the context of a given institution's "place" in the overall higher-ed panoply.

    Read the article

  • View all ntext column text in SQL Server Management Studio for SQL CE database

    - by Dave
    I often want to do a "quick check" of the value of a large text column in SQL Server Management Studio (SSMS). The maximum number of characters that SSMS will let you view in grid results mode is 65535. (It is even less in text results mode.) Sometimes I need to see something beyond that range. Using SQL Server 2005 databases, I often used the trick of converting the column to XML, because SSMS lets you view much larger amounts of text that way:

        SELECT CONVERT(xml, MyCol) FROM MyTable WHERE ...

    But now I am using SQL CE, and there is no xml data type. There is still a "Maximum Characters Retrieved XML" value under Options; I suppose this is useful when connecting to other data sources. I know I can get the full value by running a little console app or something, but is there a way within SSMS to see the entire ntext column value?

    [Edit] OK, this didn't get much attention the first time around (18 views?!). It's not a huge concern, but maybe I'm just obsessed with it. There has to be some good way around this, doesn't there? So a modest bounty is active. What I am willing to accept as answers, in order from best to worst:

    1. A solution that works as easily as the XML trick in SQL CE - that is, a single function (CONVERT, CAST, etc.) that does the job.
    2. A not-too-invasive way to hack SSMS into displaying more text in the results.
    3. An equivalent SQL query (perhaps something that creatively uses SUBSTRING and generates multiple ad-hoc columns?) to see the results - see the sketch below.

    The solution should work with nvarchar and ntext columns of any length in SQL CE from SSMS. Any ideas?
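    For what it's worth, option 3 might look something like the sketch below. The table, column, and key names are placeholders, and whether SQL CE's SUBSTRING accepts an ntext argument directly is exactly the open question here, so treat this as an untested sketch rather than a known-good answer:

        -- Untested sketch: split one long value into ad-hoc 4000-character columns
        SELECT SUBSTRING(MyCol, 1,    4000) AS Part1,
               SUBSTRING(MyCol, 4001, 4000) AS Part2,
               SUBSTRING(MyCol, 8001, 4000) AS Part3
        FROM MyTable
        WHERE MyId = 1;  -- hypothetical key for the row being inspected

    Extending the pattern to arbitrary lengths would mean adding more PartN columns (or generating them dynamically), which is clumsy - hence the bounty.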

    Read the article

  • Do SEO-friendly URLs really affect a page's ranking?

    - by Lee Harold
    SEO-friendly URLs are all the rage these days. But do they actually have a meaningful impact on a page's ranking in Google and other search engines? If so, why? If not, why not? (Note that I would absolutely agree that SEO-friendly URLs are nicer for human beings to use. My question is whether they actually make a difference to the ranking algorithms.)

    Update: As it turns out, the Google post that endorphine points to here has caused tremendous confusion in the SEO community. For a sampling of the discussion, see here, here, and here. Part of the problem is that the Google post addresses the worst case, where URL rewriting is done poorly and you'd be better off sticking with a dynamic URL than with a mangled static "SEO-friendly" URL. There's no question that dynamic URLs can be crawled by Google and can achieve high rankings.

    Maybe it would be easier to reframe the question more concretely: given two otherwise equivalent pages, which will rank higher for the search "do seo friendly urls really affect page ranking"?

    A) http://stackoverflow.com/questions/505793/do-seo-friendly-urls-really-affect-a-pages-ranking
    B) http://stackoverflow.com?question=505793 (a fake URL, for comparison only)

    Read the article

  • Interpret Objective C scripts at runtime on iPhone?

    - by Brad Parks
    Is there any way to load an Objective-C script at runtime and run it against the classes/methods/objects/functions in the current iPhone app? The reason I ask is that I've been playing around with iPhone Wax, a Lua interpreter that can be embedded in an iPhone app, and it works very nicely, in the sense that any object/method/function that's publicly available in your Objective-C code is automatically bridged and available in Lua. This allows you to rapidly prototype applications by making the core of your app Lua files that live in the user's documents directory. Just reload the app, and you can test out changes to your Lua files without needing to rebuild the app in Xcode - a big time saver! But with Apple's recent 3.1.3 SDK stuff, it got me thinking that the safest approach for this type of rapid prototyping would be if you could use Objective-C as the interpreted code... That way, worst-case scenario, you could just compile it into your app before release instead. I have heard that the Lua source can be compiled to byte code and linked in at build time, but I think the ultimate safe thing would be if the scripted source were in Objective-C, not Lua. This leads me to wondering (I've searched, but come up with nothing) whether there are any examples of how to embed an Objective-C interpreter in an iPhone app. This would allow you to rapidly prototype your app against the classes that are built into your binary, and, when you're about to deploy the app, instead of running the classes through the in-app interpreter, you compile them in instead.

    Read the article

  • Azure - Unable To Get SQL Azure Invitation Code!

    - by Goober
    Scenario: I'm trying to convert my Silverlight business application over to the cloud with the help of Azure. I have been following this link from Brad Abrams' blog. Both the links to Windows Azure and SQL Azure crash out in Google Chrome; they work in Internet Explorer, but it's literally one of the worst user experiences I've ever had.

    The problem: I'm asked to sign in to Microsoft Connect with my Live ID. I do so. I'm then asked to register!? - I do so. I'm then sent a verification email, which I verify. I'm then signed out!?!? When I sign back in, it repeats the process... Any ideas!?

    Edit/Update: Finally managed to get signed up and in to Connect. From there I was able to get hold of an invitation code for Windows Azure. Now I need an invitation code for SQL Azure. I cannot see ANYWHERE that advertises a way of getting this SQL Azure code; the only thing I have seen is some text saying that there "may be a delay" in receiving codes due to the volume of interest, which quite frankly I find hard to believe... It's been 3 days so far now. This officially sucks! If I have any more news I'll post back here.

    Read the article

  • Java RegEx API "Look-behind group does not have an obvious maximum length near index ..."

    - by Foo Inc
    Hello, I'm working on some SQL WHERE-clause parsing and designed a working regex to find a column outside string literals, using "Rad Software Regular Expression Designer", which uses the .NET API. To make sure the designed regex works with Java too, I tested it against the Java API (1.5 and 1.6). But guess what: it won't work. I got the message "Look-behind group does not have an obvious maximum length near index 28". The string that I'm trying to parse is:

        Column_1='test''the''stuff''all''day''long' AND Column_2='000' AND TheVeryColumnIWantToFind = 'Column_1=''test''''the''''stuff''''all''''day''''long'' AND Column_2=''000'' AND TheVeryColumnIWantToFind = '' TheVeryColumnIWantToFind = '' AND (Column_3 is null or Column_3 = ''Not interesting'') AND ''1'' = ''1''' AND (Column_3 is null or Column_3 = 'Still not interesting') AND '1' = '1'

    As you may have guessed, I tried to create some kind of worst case to ensure the regex won't fail on more complicated SQL WHERE clauses. The regex itself looks like this:

        (?i:(?<!=\s*'(?:[^']|(?:''))*)((?<=\s*)TheVeryColumnIWantToFind(?=(?:\s+|=))))

    I'm not sure whether there is a more elegant regex (there'll most likely be one), but that's not important right now, as this one does the trick. To explain the regex in a few words: if it finds the column I'm after, it does a negative look-behind to figure out whether the column name is used inside a string literal. If so, it won't match; if not, it'll match.

    Back to the question. As I mentioned, it won't work with Java. What will work and produce the same result? I found out that Java does not seem to support unbounded look-behinds, but I still couldn't get it to work. Isn't a look-behind always implicitly limited by the span from the search offset to the current search position - so something like "position - offset"?

    Read the article

  • Developer certificate vs purchased certificate for WCF

    - by RemotecUk
    I understand that if I want to use authentication in WCF then I need to install a certificate on my server, which WCF will use to encrypt data passing between my server and the client. For development purposes, I believe I can use the makecert.exe utility to make a development certificate. My questions:

    • What is the worst that can happen if I use this certificate in the production environment?
    • Why can't I use this certificate in the production environment?
    • What is the certificate actually going to do in this scenario?

    [Edit: Added another question] Finally: in a scenario where the website has a certificate installed to provide HTTPS support, can the same certificate be used for the WCF services as well?

    A note on my application: it's a NetTCP client-and-server service. The users will log in using the same username and password that they use for the website, which is passed in clear text. I would be happy to pass the username and password in clear text to WCF, but this isn't allowed by the framework and a certificate must be in place. However, I don't want to buy a certificate due to budget constraints! (Sorry for the possibly stupid question, but I really don't understand this and would welcome some help.)

    Read the article

  • Microsoft JScript runtime error: Sys.InvalidOperationException: Two components with the same id.

    - by Irwin
    I'm working in ASP.NET Dynamic Data. In one of my edit controls I wanted to allow the user to add records from a related table to the current page. (Literally: if you are on the orders page, you would be allowed to add a new customer to the system on this page as well, and then associate it with that order.) So I have a DetailsView set to insert mode, nested inside an UpdatePanel, which is shown by a ModalPopupExtender that is invoked when 'add new' is clicked. This doohickey works the first time I execute the process; that is, a customer is added (and I update the dropdown list as well). However, I realized it didn't work (properly) again until I refreshed the entire page. When I attached my debugger, my worst fears were realized (OK, not really), and an exception was being thrown:

        Microsoft JScript runtime error: Sys.InvalidOperationException: Two components with the same id.

    It seemed to be complaining about a CalendarExtender control that is part of the DetailsView. Any guidance on what's going on here would be great. Thanks.

    Read the article

< Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >