Search Results

Search found 2257 results on 91 pages for 'crazy jim'.


  • Thoughts on Thoughts on TDD

    Brian Harry wrote a post entitled Thoughts on TDD that I thought I was going to let lie, but I find that I need to write a response. I find myself in agreement with Brian on many points in the post, but I disagree with his conclusion. Not surprisingly, I agree with the things that he likes about TDD. Focusing on the usage rather than the implementation is really important, and that matters whether you use TDD or not. And YAGNI was a big theme in my Seven Deadly Sins of Programming series.

    Now, on to what he doesn't like. He says that he finds it inefficient to have tests that he has to change every time he refactors. Here is where we part company. If you are having to do a lot of test rewriting (say, more than a couple of minutes' work to get back to green) *often* when you are refactoring your code, I submit that either you are testing things that you don't need to test (internal implementation details rather than external behavior), your code perhaps isn't as decoupled as it could be, or maybe you need a visit to Refactorers Anonymous.

    I also like to refactor like crazy, but as we all know, the huge downside of refactoring is that we often break things. Important things. Subtle things. Which makes refactoring risky. *Unless* we have a set of tests that have great coverage. And TDD (or Example-based Design, which I prefer as a term) gives those to us. Now, I don't know what sort of coverage Brian gets with the unit tests that he writes, but I do know that for the majority of the developers I've worked with (and I count myself in that bucket) the coverage of unit tests written afterwards is considerably inferior to the coverage of unit tests that come from TDD.

    For me, it all comes down to the answer to the following question: how do you ensure that your code works now and will continue to work in the future? I'm willing to put up with a little inefficiency on the front side to get that benefit later. It's not the writing of the code that's the expensive part, it's everything else that comes after. I don't think that stepping through test cases in the debugger gets you what you want. You can verify what the current behavior is, sure, and do it fairly cheaply, but you don't help the guy in the future who doesn't know what conditions were important if he has to change your code.

    The second thing he doesn't like is backing into an architecture (go read his post to see what he means). I've certainly had to work with code like that before, and it's a nightmare: the code that nobody wants to touch. But that's not at all the kind of code that you get with TDD, because if you're doing it right you're following the "write a failing test, make it pass, refactor" approach. Now, you may miss some useful refactorings and generalizations this way, but if you do, you can refactor later because you have the tests that make it safe to do so, and your code tends to be easy to refactor because the same things that make code easy to write unit tests for make it easy to refactor.

    I also think Brian is missing an important point: we aren't all as smart as he is. I'm reminded a bit of the lesson of Intentional Programming, Charles Simonyi's paradigm for making programming easier. I played around with Intentional Programming when it was young, and came to the conclusion that it was a pretty good thing if you were as smart as Simonyi is, but it was pretty much a disaster if you were an average developer.
    In this case, TDD gives you a way to work your way into a good, flexible, and functional architecture when you don't have somebody of Brian's talents to help you out. And that's a good thing.
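    To make the "write a failing test, make it pass, refactor" loop concrete, here is a minimal C#/NUnit sketch. The OrderTotal class and its discount rule are invented purely for illustration; they are not from Brian's or the author's code.

        using NUnit.Framework;

        // Step 1 (red): write a failing test that pins down the external behavior we want.
        [TestFixture]
        public class OrderTotalTests
        {
            [Test]
            public void Total_AppliesTenPercentDiscount_ForOrdersOverOneHundred()
            {
                var order = new OrderTotal();
                Assert.AreEqual(108m, order.Total(120m));
            }
        }

        // Step 2 (green): write the simplest code that makes the test pass.
        public class OrderTotal
        {
            public decimal Total(decimal subtotal)
            {
                return subtotal > 100m ? subtotal * 0.9m : subtotal;
            }
        }

        // Step 3 (refactor): rename, extract, and reshape the internals as needed;
        // the test asserts only on the returned total, so it stays green throughout.

    Because the test exercises behavior rather than implementation details, refactoring the internals later should not require rewriting it.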


  • Why SQL Developer Rocks for the Advanced User Too

    - by thatjeffsmith
    While SQL Developer may be 'perfect for Oracle beginners,' that doesn't preclude advanced and intermediate users from getting their fair share of toys! I've been working with Oracle since the 7.3.4 days, and I think it's pretty safe to say that the WAY an 'old timer' uses a tool like SQL Developer is radically different than the 'beginner.' If you've been reluctant to use SQL Developer because it's a GUI, give me a few minutes to try to convince you it's worth a second (or third) look.

    1. Help when you want it, and only when you want it. One of the biggest gripes any user has with a piece of software is when said software can't get out of its own way. When you're typing in a word processor, sometimes you can do without the grammar and spelling checks, the offer to auto-complete your words, and all of the additional mark-up. This drives folks to programs like Notepad++ and vi. You can disable the code insight feature so you can type unmolested by SQL Developer's attempt to auto-complete your object names. Now, if you happen to come across a long or hard to spell object name, you can still invoke the feature on demand using Ctrl+Spacebar (Code Editor – Completion Insight – Enable Completion Auto-Popup; the keyword being Auto).

    2. Automatic File Tracking. SQL*Plus is nice. Vi is cool. Notepad++ has a lot of features I like. But not too many editors offer automatic logging of changes to your files without having to set up a source control system. I was doing some work on my login.sql. I'm not doing anything crazy, but seeing what I had done in previous iterations was helpful. Now imagine how nice it would be to have this available for your 1,000+ line scripts! Track your scripts as they change, no setup required!

    3. Extend the Functionality. Know SQL and XML? Wish SQL Developer did JUST a little bit more? Build your own extensions. You can have custom context menus and object pages in just a few minutes. This is an example of lazy developers writing code that writes code.

    4. Get Your Money's Worth. You've licensed Enterprise Edition. You got your Diagnostic and Tuning packs. Now start using them! Not everyone has access to Enterprise Manager, especially developers. But that doesn't mean they don't need help with troubleshooting and optimizing poorly performing SQL statements. ASH, AWR, Real-Time SQL Monitoring and the SQL Tuning Advisor are built into the Reports and Worksheet. Yes, you could make the package calls, but that's a whole lot of typing, and I'd rather just get to the results.

    5. Profile, Debug, & Unit Test PL/SQL. An Integrated Development Environment (IDE) built by the same folks that own the programming language (hello, Oracle PL/SQL!) should be complete. It should 'hug' the developer and empower them to churn out programs that work, run fast, and are easy to maintain. Write it, test it, debug it, and tune it. When you're running your programs and you just want to see the data that's returned, that shouldn't require any special settings or workarounds to make it happen either. Magic!

    And a whole lot more... I could go on and talk about the support for things like DataPump, RMAN, and DBMS_SCHEDULER, but you're experts and you're plenty busy. If you think SQL Developer is falling short somewhere, I want you to let us know about it.


  • 256 Worker Role 3D Rendering Demo is now a Lab on my Azure Course

    - by Alan Smith
    Ever since I came up with the crazy idea of creating an Azure application that would spin up 256 worker roles (please vote if you like it) to render a 3D animation created using the Kinect depth camera, I have been trying to think of something useful to do with it. I have also been busy working on developing training materials for a Windows Azure course that I will be delivering through a training partner in Stockholm, and for customers wanting to learn Windows Azure. I hit on the idea of combining the render demo and a course lab and creating a lab where the students would create and deploy their own mini render farms, which would participate in a single render job consisting of 2,000 frames. The architecture of the solution is shown below.

    As students would be creating and deploying their own applications, I thought it would be fun to introduce some competitiveness into the lab. In the 256 worker role demo I capture the rendering statistics for each role, so it was fairly simple to include the student's name in these statistics. This allowed the process monitor application to capture the number of frames each student had rendered and display a high-score table. When I demoed the application I deployed one instance that started rendering a frame every few minutes, and the challenge for the students was to deploy and scale their applications, and then overtake my single role instance by the end of the lab time. I had the process monitor running on the projector during the lab so the class could see the progress of their deployments, and how they were performing against my implementation and their classmates.

    When I tested the lab for the first time in Oslo last week it was a great success; the students were keen to be the first to build and deploy their solution and then watch the frames appear. As the students mostly had MSDN subscriptions they were able to scale to the full 20 worker role instances, and before long we had over 100 worker roles working on the animation. There were, however, a couple of issues caused by the competitive nature of the lab. The first student to scale the application to 20 instances would render the most frames and win; there was no way for others to catch up. Also, as they were competing against each other, there was no incentive to help others on the course get their application up and running.

    I have now re-written the lab to divide the students into teams that will compete to render the most frames. This means that if one developer on a team can deploy and scale quickly, the other teams still have a chance to catch up. It also means that if a student finishes quickly and puts their team in the lead they will have an incentive to help the other developers on their team get up and running. As I was using "Sharks with Lasers" for a lot of my demos, and reserved the sharkswithfreakinlasers namespaces for some of the Azure services (well, somebody had to do it), the students came up with some creative alternatives, like "Camels with Cannons" and "Honey Badgers with Homing Missiles". That gave me the idea for the teams having to choose a creative name involving animals and weapons. The team rendering architecture diagram is shown below.

    Render Challenge Rules

    In order to ensure fair play, a number of rules are imposed on the lab:

    · The class will be divided into teams; each team chooses a name.
    · The team name must consist of a ferocious animal combined with a hazardous weapon.
    · Teams can allocate as many worker roles as they can muster to the render job.
    · Frame processing statistics and rendered frames will be vigilantly monitored; any cheating, tampering, and other foul play will result in penalties.

    The screenshot below shows an example of the team render farm in action: Badgers with Bombs have taken a lead over Camels with Cannons, and both are leaving the Sharks with Lasers standing. If you are interested in attending a scheduled delivery of my Windows Azure or Windows Azure Service Bus courses, or would like on-site training, more details are here.
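    To make the high-score idea concrete, here is a minimal C# sketch of the kind of per-role statistics record the process monitor could aggregate into a team leaderboard. The type and property names are invented for illustration and are not taken from the actual demo code.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical statistics record reported by each worker role instance.
        public class FrameRenderStats
        {
            public string TeamName { get; set; }
            public string StudentName { get; set; }
            public string RoleInstanceId { get; set; }
            public int FramesRendered { get; set; }
            public TimeSpan TotalRenderTime { get; set; }
        }

        public static class HighScoreTable
        {
            // Aggregates per-instance statistics into a team leaderboard,
            // highest frame count first.
            public static IEnumerable<(string Team, int Frames)> ByTeam(IEnumerable<FrameRenderStats> stats)
            {
                return stats
                    .GroupBy(s => s.TeamName)
                    .Select(g => (Team: g.Key, Frames: g.Sum(s => s.FramesRendered)))
                    .OrderByDescending(t => t.Frames);
            }
        }

    Grouping by team rather than by student is what keeps the competition open: one fast deployer boosts a team total, but does not end the race.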


  • Thread safe double buffering

    - by kdavis8
    I am trying to implement a draw map method that will draw the tiled image across the surface of the component. I'm having an issue with this code. The double buffering does not seem to be working, because the sprite flickers like crazy. My source code:

        package myPackage;

        import java.awt.Color;
        import java.awt.Graphics;
        import java.awt.Graphics2D;
        import java.awt.Image;
        import java.awt.Toolkit;
        import java.awt.image.BufferStrategy;
        import java.awt.image.BufferedImage;
        import javax.swing.JFrame;

        public class GameView extends JFrame implements Runnable {

            public BufferedImage backbuffer;
            public Graphics2D g2d;
            public Image img;
            Thread gameloop;
            Scene scene;

            public GameView() {
                super("Game View");
                setSize(600, 600);
                setVisible(true);
                setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

                backbuffer = new BufferedImage(getWidth(), getHeight(), BufferedImage.TYPE_INT_RGB);
                g2d = backbuffer.createGraphics();

                Toolkit tk = Toolkit.getDefaultToolkit();
                img = tk.getImage(this.getClass().getResource("cage.png"));

                scene = new Scene(g2d, this);

                gameloop = new Thread(this);
                gameloop.start();
            }

            public static void main(String args[]) {
                new GameView();
            }

            public void paint(Graphics g) {
                g.drawImage(backbuffer, 0, 0, this);
                repaint();
            }

            @Override
            public void run() {
                // TODO Auto-generated method stub
                Thread t = Thread.currentThread();
                while (t == gameloop) {
                    scene.getScene("dirtmap");
                    g2d.drawImage(img, 80, 80, this);
                }
            }

            private void drawScene(String string) {
                // TODO Auto-generated method stub
                // g2d.setColor(Color.white);
                // g2d.fillRect(0, 0, getWidth(), getHeight());
                scene.getScene(string);
            }
        }

        package myPackage;

        import java.awt.Color;
        import java.awt.Component;
        import java.awt.Graphics;
        import java.awt.Graphics2D;
        import java.awt.Image;
        import java.awt.Toolkit;

        public class Scene {

            Graphics g2d;
            Component c;
            boolean loaded = false;

            public Scene(Graphics2D gr, Component co) {
                g2d = gr;
                c = co;
            }

            public void getScene(String mapName) {
                Toolkit tk = Toolkit.getDefaultToolkit();
                Image tile = tk.getImage(this.getClass().getResource("dirt.png"));
                // g2d.setColor(Color.red);
                for (int y = 0; y <= 18; y++) {
                    for (int x = 0; x <= 18; x += 1) {
                        g2d.drawImage(tile, x * 32, y * 32, c);
                    }
                }
                loaded = true;
            }
        }


  • Serial plans: Threshold / Parallel_degree_limit = 1

    - by jean-pierre.dijcks
    As a very short follow up on the previous post, here is some more on getting a serial plan and why that happens. Another reason to get a serial plan - besides auto DOP not being on, which we looked at in the earlier post - and often a more prevalent one, is that the plan simply does not take long enough to be considered for a parallel path. The resulting plan and note look like this (note that this is a serial plan!):

        explain plan for select count(1) from sales;

        SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());

        Plan hash value: 672559287

        --------------------------------------------------------------------------------------
        | Id  | Operation            | Name  | Rows  | Cost (%CPU)| Time     | Pstart| Pstop |
        --------------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT     |       |     1 |     5   (0)| 00:00:01 |       |       |
        |   1 |  SORT AGGREGATE      |       |     1 |            |          |       |       |
        |   2 |   PARTITION RANGE ALL|       |   960 |     5   (0)| 00:00:01 |     1 |    16 |
        |   3 |    TABLE ACCESS FULL | SALES |   960 |     5   (0)| 00:00:01 |     1 |    16 |
        --------------------------------------------------------------------------------------

        Note
        -----
           - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold

        14 rows selected.

    The parallel threshold refers to parallel_min_time_threshold, and since I did not change the default (10s), the plan is not being considered for a parallel degree computation and therefore stays with the serial execution. Now we go into the land of crazy: assume I do want this DOP=1 to happen. I could set the parameter in the init.ora, but to highlight it in this case I changed it on the session:

        alter session set parallel_degree_limit = 1;

    The result I get is:

        ERROR:
        ORA-02097: parameter cannot be modified because specified value is invalid
        ORA-00096: invalid value 1 for parameter parallel_degree_limit, must be from among CPU IO AUTO INTEGER>=2

    Which of course makes perfect sense...


  • Why are my Unity procedural animations jerky?

    - by Phoenix Perry
    I'm working in Unity and getting some crazy weird motion behavior. I have a plane and I'm moving it. It's ever so slightly getting about 1 pixel bigger and smaller. It looks like it's kind of getting squeezed sideways by a pixel. I'm moving a plane by cos and sin so it will spin on the x and z axes. If the planes are moving at Time.time, everything is fine. However, if I put in a slower speed multiplier, I get an amazingly weird jerk in my animation. I get it with or without the lerp. How do I fix it? I want it to move very slowly. Is there some sort of invisible grid in Unity? Some sort of minimum motion per frame? I put a visual sample of the behavior here. Here's the relevant code:

        public void spin()
        {
            for (int i = 0; i < numPlanes; i++)
            {
                GameObject g = planes[i] as GameObject;

                //alt method
                //currentRotation += speed * Time.deltaTime * 100;
                //rotation.eulerAngles = new Vector3(0, currentRotation, 0);
                //g.transform.position = rotation * rotationRadius;

                //sine method
                g.GetComponent<PlaneSetup>().pos.x = g.GetComponent<PlaneSetup>().radiusX * (Mathf.Cos((Time.time * speed) + g.GetComponent<PlaneSetup>().startAngle));
                g.GetComponent<PlaneSetup>().pos.z = g.GetComponent<PlaneSetup>().radius * Mathf.Sin((Time.time * speed) + g.GetComponent<PlaneSetup>().startAngle);
                g.GetComponent<PlaneSetup>().pos.y = g.GetComponent<Transform>().position.y;

                ////offset
                g.GetComponent<PlaneSetup>().pos.z += 20;

                g.GetComponent<PlaneSetup>().posLerp.x = Mathf.Lerp(g.transform.position.x, g.GetComponent<PlaneSetup>().pos.x, .5f);
                g.GetComponent<PlaneSetup>().posLerp.z = Mathf.Lerp(g.transform.position.z, g.GetComponent<PlaneSetup>().pos.z, .5f);
                g.GetComponent<PlaneSetup>().posLerp.y = g.GetComponent<Transform>().position.y;

                g.transform.position = g.GetComponent<PlaneSetup>().posLerp;
            }

            Invoke("spin", 0.0f);
        }

    The full code is on GitHub. There is literally nothing else going on. I've turned off all other game objects so it's only the 40 planes with a Texture2D shader. I removed it from Invoke and tried it in Update -- still happens. With a set frame rate or not, the same problem occurs. Tested it in FixedUpdate. Same issue. The script on the individual plane doesn't even have an Update function in it. The data on it could functionally live in a struct. I'm getting between 90 and 123 fps. Going to investigate and test further. I put this in an Invoke function to see if I could get around it just occurring in Update. There are no physics on these shapes. It's a straight procedural animation. Limited it to 1 plane - still happens. Thoughts? Removed the shader - still happening.


  • Strange Happenings

    - by MOSSLover
    There are weeks we go about our life thinking nothing is going to change, nothing will happen. Then there are other weeks a billion things happen at once. Friday started off very weird for me. I flew into Atlanta and I met some cool people for another SharePoint event. I had some good conversations. Saturday then hit me and my virtual machine bombed in my presentation after the auto updater ran. I was writing code on the board and describing everything in Notepad. I would say as presentations go it was the best and the worst presentation all wrapped into one. The next day I was in Baltimore and I hung out with my aunt, which was relatively uneventful and great. Then Monday hit and half my presentations failed or succeeded and my screen freezes, so I start describing the code. I was on top of my game until Monday night. On top of the world. I'm exhausted, I get into Raleigh, and one of the craziest stories of my life happens.

    So my boss has been renting cars through Priceline; this week I got a different company than the other weeks. The company gives me a Ford Focus and I plug in the coordinates on my iPhone for where I want to go. I head out and then I get to the destination hotel (or I thought I did). I go inside; it's the wrong hotel, the other one is a few miles away. I walk outside, hop into the car, and it sounds like a gunshot. Nothing is starting... Am I doing something wrong? No I'm not, the car is completely dead in the water. I call the rental car facility and they tell me to call roadside; they are closing for the night. Roadside says they can't give me a new car but they can get me a jump; then I have to take it up with the facility. They send me a tow truck to give me a jump. The guy can't jump the car. He tells me this vehicle was towed about an hour ago. He shows me a copy of a slip from when he towed it. We also notice the rental car company left one of their price scanning guns in the vehicle. I call up roadside and now they are interested in getting me a car because I need to be onsite tomorrow. They get the manager of the facility on the phone; he apologizes profusely and he says he'll be there in 10 minutes. About 30 minutes pass and him plus another dude show up with a Ford Escape with leather interior. At this point I hand him the gun, tell him someone left it in the vehicle, and tell him that I'm not so happy with them. I ask them to comp my rental; they can't due to Priceline, however if I call him again this week he can get me a voucher. It's about 2 am and I'm ready to get to the hotel; I don't make it in the next morning until 10 am.

    I would say this was a crazy week; all forms of technology are trying to tell me something. What, I have no idea, but we'll see the outcome soon. I feel so weird, tons of change is about to happen. I don't know if it's good or bad. I think this week is some form of omen.


  • Tuning Red Gate: #1 of Many

    - by Grant Fritchey
    Everyone runs into performance issues at some point. Same thing goes for Red Gate software. Some of our internal systems were running into some serious bottlenecks. It just so happens that we have this nice little SQL Server monitoring tool. What if I were to, oh, I don't know, use the monitoring tool to identify the bottlenecks, figure out the causes and then apply a fix (where possible) and then start the whole thing all over again? Just a crazy thought. OK, I was asked to. This is my first time looking through these servers, so here's how I'd go about using SQL Monitor to get a quick health check, sort of like checking the vitals on a patient.

    First time opening up our internal SQL Monitor instance and I was greeted with this: Oh my. Maybe I need to get our internal guys to read my blog. Anyway, I know that there are two servers where most of the load is. I'll drill down on the first. I'm selecting the server, not the instance, by clicking on the server name. That opens up the Global Overview page for the server. The information here is much more applicable to the "oh my gosh, I have a problem now" type of monitoring. But, looking at this, I am seeing something immediately. There are four (4) drives on the system. The C:\ has an average read time of 16.9ms, more than double the others. Is that a problem? Not sure, but it's something I'll look at. Its write time is higher too. I'll keep drilling down, first, to the unclosed alerts on the server.

    Now things get interesting. SQL Monitor has a number of different types of alerts, some related to error states, others to service status, and then some related to performance. Guess what I'm seeing a bunch of right here: Long running queries and long job durations. If you check the dates, they're all recent, within the last 24 hours. If they had just been old, uncleared alerts, I wouldn't be that concerned. But with all these, all performance related, and all in the last 24 hours, yeah, I'm concerned. At this point, I could just start responding to the alerts. If I click on one of the Long-running query alerts, I'll get all kinds of cool data that can help me determine why the query ran long. But, I'm not in a reactive mode here yet. I'm still gathering data, trying to understand how the server works. I have the information that we're generating a lot of performance alerts; let's sock that away for the moment.

    Instead, I'm going to back up and look at the Global Overview for the SQL Instance. It shows all the databases on the server and their status. Then it shows a number of basic metrics about the SQL Server instance, again for that "what's happening now" view of things. Then, down at the bottom, there is the Top 10 expensive queries list: This is great stuff. And no, not because I can see the top queries for the last 5 minutes, but because I can adjust that out to 3 days. Now I can see where some serious pain is occurring over the last few days. Databases have been blocked out to protect the guilty.

    That's it for the moment. I have enough knowledge of what's going on in the system that I can start to try to figure out why the system is running slowly. But, I want to look a little more at some historical data, to understand better how this server is behaving. More next time.


  • 13 Things From the Oracle Social Summit You Should Know

    - by Mike Stiles
    Oracle held its first annual Oracle Social Summit, "The School for the Socially Gifted," this past week in Las Vegas. If anyone came to the event uncertain as to why Oracle has such an interest in social, and what its plans for social are, they left with an entirely new vision of where social is headed, and why. For those unable to attend, I was able to keep my MacBook charged just long enough to capture some of the more pertinent takeaways.

    1. The social enterprise is inevitable. Social technology is disrupting the hierarchies of big companies. It's a revolution in corporate structures, just as it has been in various governments. It's not crazy to ask yourself if your CEO is the next Mubarak. (David Kirkpatrick, author of "The Facebook Effect" and founder of the Techonomy Conference)

    2. The social enterprise represents collaboration on steroids. It's tapping into the power of your people, as opposed to keeping them "in their place."

    3. 1 in every 7 humans on earth is an active Facebook user. 75% have posted a negative comment after a poor customer experience. The average user will inform 53 people of a bad experience.

    4. Checking social media is the 2nd biggest use of phones now. Reading posts from brands is 4th.

    5. 70% of marketers have little or no understanding of the social conversations happening around their brand.

    6. Advertising, when done well, is content we care about, preferably informed by those we trust.

    7. Acquiring low-quality fans through gimmicks, or focusing purely on fan acquisition, is a mistake. And relying purely on organic distribution is a mistake. (John Yi, Head of Marketing Partnerships – Facebook)

    8. Using all this newfound data and insight serves to positively affect the customer experience. It allows organizations to leverage the investments they've made in social up to now.

    9. Social is not a marketing utopia where everything is free. It's pay to play. The paid component is about driving attention.

    10. We are only in the infancy of ad-targeting opportunities in social. There's an evolution underway from interest-based targeting to action-based targeting.

    11. There's actually very little overlap of the people following you on different social platforms. Don't assume it's the same audience on each.

    12. People who can create content and who also have an understanding of what drives that content are growing increasingly valuable.

    13. Oracle Social's future is enterprise SRM, integrated across marketing, selling, service, HR and every other corner of the organization.

    And in case you thought those were the only gems to come out of the summit, you may want to keep an eye out for Tuesday's Social Spotlight, ever so aptly titled "13 More Things from the Oracle Social Summit You Should Know."


  • Weird SSIS Configuration Error

    - by Christopher House
    I ran into an interesting SSIS issue that I thought I'd share in hopes that it may save someone from bruising their head after repeatedly banging it on the desk like I did.  I was trying to setup what I believe is referred to as "indirect configuration" in SSIS.  This is where you store your configuration in some repository like a database or a file, then store the location of that repository in an environment variable and use that to configure the connection to your configuration repository.  In my specific situation, I was using a SQL database.  I had this all working, but for reasons I'll not bore you with, I had to move my SSIS development to a new VM last week.  When I got my new VM, I set about creating a new package.  I finished up development on the package and started setting up configuration.  I created an OLE DB connection that pointed to my configuration table then went through the configuration wizard to have the connection string for this connection set through my environment variable.  I then went through the wizard to set another property through a value stored in the configuration table.  When I got to the point where you select the connection, my connection wasn't in the list: As you can see in the screen capture above, the ConfigurationDb connection isn't in the list of available SQL connections in the configuration wizard.  Strange.  I canceled out of the wizard, went to the properties for ConfigurationDb, tested the connection and it was successful.  I went back to the wizard again and this time ConfigurationDb was there.  I completed the wizard then went to test my package.  Unfortunately the package wouldn't run, I got the following error: Unfortunately, googling for this error code didn't help much as none of the results appears related to package configuration.  I did notice that when I went back through the package configuration and tried to edit a previously saved config entry,  I was getting the following error: I checked the connection string I had stored in my environment variable and noticed that indeed, it did not have a provider name.  I didn't recall having included one on my previous VM, but I figured I'd include it just to see what happened.  That made no difference at all.  After a day and a half of trying to figure out what the problem was, I'm pleased to report that through extensive trial and error, I have resolved the error. As it turns out, the person who setup this new VM for me named the server SQLSERVER2008.  This meant my configuration connection string was: Initial Catalog=SSISConfigDb;Data Source=SQLSERVER2008;Integrated Security=SSPI; Just for the heck of it, I tried changing it to: Initial Catalog=SSISConfigDb;Data Source=(local);Integrated Security=SSPI; That did the trick!  As soon as I restarted BIDS, I was able to run the package with no errors at all.  Crazy.  So, the moral of the story is, don't name your server SQLSERVER2008 if you want SSIS configuration to work when using SQL as your config store.
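    When this kind of indirect configuration misbehaves, a quick sanity check is to dump exactly what the environment variable resolves to and whether a connection can actually be opened with that value. The snippet below is a generic C# console sketch, not part of the package above, and the variable name SSIS_CONFIG is just a placeholder for whatever your environment variable is actually called.

        using System;
        using System.Data.SqlClient;

        class ConfigCheck
        {
            static void Main()
            {
                // Placeholder name -- substitute the environment variable your packages use.
                string connectionString = Environment.GetEnvironmentVariable("SSIS_CONFIG");
                Console.WriteLine("Resolved connection string: " + connectionString);

                // Try to open a connection with exactly the value SSIS will see.
                // Note: if the stored string contains an OLE DB "Provider=" keyword,
                // use OleDbConnection instead, since SqlConnection will reject it.
                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                    Console.WriteLine("Connected to: " + connection.DataSource + " / " + connection.Database);
                }
            }
        }

    Running something like this outside of BIDS separates "the variable holds the wrong value" from "the designer is caching the old value," which is usually the first thing worth ruling out.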


  • Java EE @ No Fluff Just Stuff Tour

    - by reza_rahman
    If you work in the US and still don't know what the No Fluff Just Stuff (NFJS) Tour is, you are doing yourself a very serious disfavor. NFJS is by far the cheapest and most effective way to stay up to date through some world class speakers and talks. This is most certainly true for US enterprise Java developers in particular. Following the US cultural tradition of old-fashioned roadshows, NFJS is basically a set program of speakers and topics offered at major US cities year round. Many now famous world class technology speakers can trace their humble roots to NFJS. Via NFJS you basically get to have amazing training without paying for an expensive venue, lodging or travel. The events are usually on the weekends so you don't need to even skip work if you want (a great feature for consultants on tight budgets and deadlines). I am proud to share with you that I recently joined the NFJS troupe. My hope is that this will help solve the lingering problem of effectively spreading the Java EE message here in the US. For NFJS I hope my joining will help beef up perhaps much desired Java content. In any case, simply being accepted into this legendary program is an honor I could have perhaps only dreamed of a few years ago. I am very grateful to Jay Zimmerman for seeing the value in me and the Java EE content. The current speaker line-up consists of the likes of Neal Ford, Venkat Subramaniam, Nathaniel Schutta, Tim Berglund and many other great speakers. I actually had my tour debut on April 4-5 with the NFJS New York Software Symposium - basically a short train commute away from my home office. The show is traditionally one of the smaller ones and it was not that bad for a start. I look forward to doing a few more in the coming months (more on that a bit later). I had four talks back to back (really my most favorite four at the moment). The first one was a talk on JMS 2 - some of you might already know JMS is one of my most favored Java EE APIs. The slides for the talk are posted below: What’s New in Java Message Service 2 from Reza Rahman The next talk I delivered was my Cargo Tracker/Java EE + DDD talk. This talk basically overviews DDD and describes how DDD maps to Java EE using code examples/demos from the Cargo Tracker Java EE Blue Prints project. Applied Domain-Driven Design Blue Prints for Java EE from Reza Rahman The third talk I delivered was our flagship Java EE 7/8 talk. As you may know, currently the talk is basically about Java EE 7. I'll probably slowly evolve this talk to gradually transform it into a Java EE 8 talk as we move forward (I'll blog about that separately shortly). The following is the slide deck for the talk: JavaEE.Next(): Java EE 7, 8, and Beyond from Reza Rahman My last talk for the show was my JavaScript+Java EE 7 talk. This talk is basically about aligning EE 7 with the emerging JavaScript ecosystem (specifically AngularJS). The slide deck for the talk is here: JavaScript/HTML5 Rich Clients Using Java EE 7 from Reza Rahman Unsurprisingly this talk was well-attended. The demo application code is posted on GitHub. The code should be a helpful resource if this development model is something that interests you. Do let me know if you need help with it but the instructions should be fairly self-explanatory. My next NFJS show is the Central Ohio Software Symposium in Columbus on June 6-8 (sorry for the late notice - it's been a really crazy few weeks). Here's my tour schedule so far, I'll keep you up-to-date as the tour goes forward: June 6 - 8, Columbus Ohio. 
June 24 - 27, Denver Colorado (UberConf) - my most extensive agenda on the tour so far. July 18 - 20, Austin Texas. I hope you'll take this opportunity to get some updates on Java EE as well as the other awesome content on the tour?


  • Impressions on jQuery Mobile

    - by Jeff
    For the uninitiated, jQuery Mobile is a sweet little client framework that turns regular HTML into something more touch and mobile friendly. It results in a user interface that has bigger targets, rounded corners and simple skinning capability. When it was announced that ASP.NET MVC 4 would include support for a mobile-sensitive view engine, offering up alternate views for clients that fit the mobile profile, I was all over that. Combined with jQuery Mobile, it brought a chance to do some experimentation. I blitzed through the views in POP Forums and converted them all to mobile views. (For the curious, this first pass can be found here on CodePlex, while a more recent update that uses RC 2 of jQuery Mobile v1.1.0 is running on the demo site.) Initially, it was kind of a mixed bag. The jQuery demo site also acts as documentation, and it’s reasonably complete. I had no problem getting up a lot of basic views quickly, splitting out portions of some pages as subpages that they quickly load in. The default behavior in the older version was to slide the pages in, which looked a little weird when you were using a back button. They’ve since changed it so the default transition is a fade in/out. Because you’re dealing with Web pages, I don’t think anyone is really under the illusion that you’re not using a native app, so I don’t know that this matters. I’ve tested extensively on iPad and Windows Phone, and to be honest, I’ve encountered a lot of issues. On Windows Phone, there is some kind of inconsistency that prevents the proper respect for the viewport settings. The text background on text fields (for labeling) doesn’t work, either. On both platforms, certain in-DOM page navigation links work only half of the time. Is this an issue of user error? Probably, but that’s what’s frustrating about it. Most of what you accomplish with this framework involves decorating various elements with CSS classes. There isn’t any design-time safety to speak of to make sure that you’re doing it right. I think the issues can be overcome, but there are some trade-offs to consider. The first is download size. Yes, the scripts and CSS do get cached, but that first hit will cost nearly 40k for the mobile parts. That’s still a lot when you’re on some crappy AT&T EDGE network, or hotel Wi-Fi. Then you have to ask yourself, do you really want your app to look like it’s native to iOS? I’m not saying that’s a bad thing, because consistent UI is good, but you will end up feeling a whole lot of sameness, and maybe you don’t want that. I did some experimentation to try and Metro-ize the jQuery Mobile theme, and it’s kind of a mixed bag. It mostly works, but you get some weirdness on badges and with buttons that I’m not crazy about. It probably just means you need to keep tweaking. At this point, I’m a little torn about whether or not I’ll use it for POP Forums or one of the sites I’m working on. The benefits are pretty strong, but figuring out where I’m doing it wrong is proving a little time consuming.
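    For readers who have not seen the MVC 4 side of this, the mobile-sensitive view selection can also be extended with display modes. The C# sketch below shows the commonly documented DisplayModeProvider registration (typically called from Application_Start); the "Phone" suffix and the user-agent check are illustrative assumptions, not anything specific to POP Forums.

        using System;
        using System.Web.WebPages;

        public static class DisplayModeConfig
        {
            public static void Register()
            {
                // Requests whose user agent contains "Windows Phone" will prefer views
                // named *.Phone.cshtml, falling back to *.Mobile.cshtml and then *.cshtml.
                DisplayModeProvider.Instance.Modes.Insert(0, new DefaultDisplayMode("Phone")
                {
                    ContextCondition = context =>
                        context.GetOverriddenUserAgent() != null &&
                        context.GetOverriddenUserAgent().IndexOf("Windows Phone", StringComparison.OrdinalIgnoreCase) >= 0
                });
            }
        }

    Paired with jQuery Mobile (or a Metro-styled theme), this is what lets the same controller serve a lighter markup variant to the clients that fit the mobile profile.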


  • How Can I Start an Incognito/Private Browsing Window from a Shortcut?

    - by Jason Fitzpatrick
    Sometimes you just want to pop the browser open for a quick web search without reloading all your saved tabs; read on as we show a fellow reader how to make a quick private-browsing shortcut.

    Dear How-To Geek, I came up with a solution to my problem, but I need your help implementing it. I typically have a ton of tabs open in my web browser and, when I need to free up system resources when gaming or using a resource-intensive application, I shut down the web browser. The problem arises when I find myself needing to do a quick web search while the browser is shut down. I don't want to open it up, load all the tabs, and waste the resources in doing so all for a quick Google search. The perfect solution, it would seem, is to open up one of Chrome's Incognito windows: it loads separately, it won't open up all the old tabs, and it's perfect for a quick Google search. Is there a way to launch Chrome with a single Incognito window open without having to open the browser in the normal mode (and load the bazillion tabs I have sitting there)? Sincerely, Tab Crazy

    That's a rather clever workaround to your problem. Since you've already done the hard work of figuring out the solution you need, we're more than happy to help you across the finish line. The magic you seek is available via what are known as "command line options" which allow you to add additional parameters and switches onto a command. By appending to the command the Chrome shortcut uses, we can easily tell it to launch in Incognito mode. (And, for other readers following along at home, we can do the same thing with other browsers like Firefox.) First, let's look at Chrome's default shortcut. If you right click on it and select the properties menu, you'll see where the shortcut points:

        "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"

    If you run that shortcut, you'll open up normal browsing mode in Chrome and your saved tabs will all load. What we need to do is use the command line switches available for Chrome and tell it that we want it to launch an Incognito window instead. Doing so is as simple as appending the end of the "Target" box's command line entry with -incognito, like so:

        "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" -incognito

    We'd also recommend changing the icon so it's easy to tell the default Chrome shortcut apart from your new Incognito shortcut. When you're done, make sure to hit OK/Apply at the bottom to save the changes. You can recreate the same private-browsing-shortcut effect with other major web browsers too. Repeat the shortcut editing steps we highlighted above, but change out the -incognito with -private (for Firefox and Internet Explorer) and -newprivatetab (for Opera). With just a simple command line switch applied, you can now launch a lightweight single browser window for those quick web searches without having to stop your game and load up all your saved tabs.

    Have a pressing tech question? Email us at [email protected] and we'll do our best to answer it.


  • Managing game state / 'what to update' within an XNA game 'screen'

    - by codinghands
    Note - having read through other GDev questions suggested when writing this question I'm confident this isn't a dupe. Of course, it's 3am and I'm likely wrong, so please mod as such if so. I'm trying to figure out how best to manage state within my game screens - please bear with me though! At the moment I'm using a heavily modified version of the fantastic game state management example on the XNA site available here. This is working perfectly for my 'Screens' - 'IntroScreen' with some shiny logos, 'TitleScreen' and a 'MenuScreen' stacked on top for the title and menu, 'PlayScreen' for the actual gameplay, etc. Each screen has a bunch of sprites, and an 'Update' and 'Draw', managed by a 'ScreenManager'. In addition to the above, and as suggested as an answer to my other question here, most screens have a 'GameProcessQueue' class full of 'GameProcess'es which lets me do just about anything (animations, youbetcha!), in any order, in sequence or parallel.

    Why mention all this? When I talk about managing game state I'm thinking more of complex scenarios within a 'Screen'. 'TitleScreen', 'MenuScreen' and the like are all relatively simple. 'PlayScreen' less so. How do people manage the different 'states' within the screen (or whatever you call it) that 'does' gameplay? (for me, the 'PlayScreen') I've thought about the following:

    Enum of different states in the Screen, an 'activeState' enum-type variable, switching on the enum in the Screen Update() loop to determine what Screen Update 'sub'-function is called. I can see this getting hairy pretty fast though as screens get more complex, and with the 'PlayScreen' becoming a behemoth mega-class.

    'State' class with Update loop - a Screen can have any number of 'States', 1+ of which are 'active'. The Screen update loop calls update on all active states. States themselves know which screen they belong to, and may even belong to a 'StateManager' which handles transitioning from one state to the next. Once a state is over it's removed from the ScreenState list. The Screen doesn't need a bunch of GameProcessQueues, each State has its own.

    Abstract Screen further to be more flexible - I can see the similarities between what I've got (game 'Screens' handled by a ScreenManager) and what I want (states within a screen, and a mechanism to manage them). However at the moment I see 'Screens' as high level and very distinct ('PlayScreen' with baddies != 'MenuScreen' with 4 words and event handlers), whereas my proposed 'States' are more intrinsically tied to a specific screen with complex requirements. I think.

    This is for a turn-based board game, so it's easier to define things as a discrete series of steps (IntroAnimation - P1Turn - P2Turn - P1Turn ... - GameOver - ...). Obviously with an open-world RPG things are very different, but any advice in this scenario is appreciated. If I'm just going OOP-crazy please say so. Similarly I'm conscious there's a huge amount on this site re: state management. But as my first 'serious' game after a couple of false starts I'd like to get this right, and would rather be harassed and modded down than never ask :)
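    For what it's worth, the 'State class with Update loop' option can be sketched in just a few lines of C#. Everything below, names included, is hypothetical and only meant to make the idea concrete; it is not a drop-in for the game state management sample.

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        // A sub-state that lives inside a single screen (e.g. the PlayScreen).
        public abstract class ScreenState
        {
            public bool IsFinished { get; protected set; }
            public abstract void Update(GameTime gameTime);
            public abstract void Draw(GameTime gameTime);
        }

        // Owned by the screen; updates whatever states are currently active
        // and drops them once they report that they are finished.
        public class ScreenStateManager
        {
            private readonly List<ScreenState> activeStates = new List<ScreenState>();

            public void Push(ScreenState state) { activeStates.Add(state); }

            public void Update(GameTime gameTime)
            {
                foreach (var state in activeStates.ToArray())
                {
                    state.Update(gameTime);
                    if (state.IsFinished)
                        activeStates.Remove(state);
                }
            }

            public void Draw(GameTime gameTime)
            {
                foreach (var state in activeStates)
                    state.Draw(gameTime);
            }
        }

    A turn-based flow like IntroAnimation - P1Turn - P2Turn - GameOver then becomes a matter of pushing the next state when the current one reports it is finished, with the PlayScreen itself staying small.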


  • Hell and Diplomacy: Notes on Software Integration

    - by ericajanine
    Well, I'm getting cabin fever and short-timer's ADD all at the same time. I haven't been anywhere outside of my greater city area in FOREVER and I'm only days away from my vacation. I have brainlock because the last few days have been non-stop diffusing amazingly hostile conversations. I think I'll write about that. So then, I "do" software. At the end of the day, software is pretty straightforward. Software is that thing we love and try to make do things not currently in play, in existence. If a process around getting software to do something is broken (like most actually are), then we should acknowledge it and move on. We are professional. We are helpful beyond the normal call of duty. We live and breathe making the lives better for those apps being active in the world. But above all--the shocker: We are SERVICE. In a service frame of mind, all perspectives shift to what is best overall for system stabilization vs. what must be in production to meet business objectives. It doesn't matter how much you like or dislike the creator of said software. It doesn't matter what time you went to bed last night or if your mate appreciates your Death March attitude. Getting a product in and when is an age-old dilemma in a software environment where more than, say, 3 people are involved. We know this. Taking a servant's perspective eliminates the drama surrounding what a group of half-baked developers forgot to tell each other in the 11th hour about their trampling changes before check-in. We, my counterparts in society, get paid to deal with that drama. I get paid to diffuse that drama and make everything integrate as smoothly as possible. At the end of the day, attacking someone over a minor detail not only makes things worse, it's against the whole point of our real existence. Being in support or software integration means you are to keep your eyes on the end game. That end game? It's making a solution work for all stakeholders, not just you or your immediate superior. Development and technology groups exist because business groups need them to exist and solve their issues. The end game? Doing what is best for those business groups ultimately. Period. Note: That does not mean you let your business users solely dictate when and if something gets changed in an environment you ultimately own. That's just crazy. Software and its environments are legitimately owned by those who manage it directly, no matter how important a business group believes it is to the existence of mankind. So, you both negotiate the terms of changing that environment and only do so upon that negotiation. Diplomacy is in order. So, to finish my thoughts: If you have no ability to keep your mouth shut in a situation where a business or development group truly need your help to make something work even beyond a deadline, find another profession. Beating up someone verbally because they screw up means a service attitude is not at the forefront of your motivation for doing what is ultimately their work and their product. Software, especially integration, requires a strong will and a soft touch to keep it on track. Not a hammer covered in broken glass.


  • How to Speed Up Any Android Phone By Disabling Animations

    - by Chris Hoffman
    Android phones — and tablets, too — display animations when moving between apps and screens. These animations look very slick, but they waste time — especially on fast phones, which could switch between apps instantly if not for the animations. Disabling these animations will speed up navigating between different apps and interface screens on your phone, saving you time. You can also speed up the animations if you'd rather see them.

    Access the Developer Options Menu

    First, we'll need to access the Developer Options menu. It's hidden by default so Android users won't stumble across it unless they're actually looking for it. To access the Developer Options menu, open the Settings screen, scroll down to the bottom of the list, and tap the About phone or About tablet option. Scroll down to the Build number field and tap it repeatedly. Eventually, you'll see a message appear saying "You are now a developer!". The Developer options submenu now appears on the Settings screen. You'll find it near the bottom of the list, just above the About phone or About tablet option.

    Disable Interface Animations

    Open the Developer Options screen and slide the switch at the top of the screen to On. This allows you to change the hidden options on this screen. If you ever want to re-enable the animations and revert your changes, all you have to do is slide the Developer Options switch back to Off. Scroll down to the Drawing section. You'll find the three options we want here — Window animation scale, Transition animation scale, and Animator duration scale. Tap each option and set it to Animation off to disable the associated animations. If you'd like to speed up the animations without disabling them entirely, select the Animation .5x option instead. If you're feeling really crazy, you can even select longer animation durations. You can make the animations take as much as ten times longer with the Animation 10x setting. The Animator duration scale option applies to the transition animation that appears when you tap the app drawer button on your home screen. Your change here won't take effect immediately — you'll have to restart Android's launcher after changing the Animator duration scale setting. To restart Android's launcher, open the Settings screen, tap Apps, swipe over to the All category, scroll down, and tap the Launcher app. Tap the Force stop button to forcibly close the launcher, then tap your device's home button to re-launch the launcher. Your app drawer will now open immediately, too.

    Now whenever you open an app or transition to a new screen, it will pop up as quickly as possible — no waiting for animations and wasting processing power rendering them. How much of a speed improvement you'll see here depends on your Android device and how fast it is. On our Nexus 4, this change makes many apps appear and become usable instantly if they're running in the background. If you have a slower device, you may have to wait a moment for apps to be usable. That's one of the big reasons why Android and other operating systems use animations. Animations help paper over delays that can occur while the operating system loads the app.


  • SharePoint Content and Site Editing Tips

    - by Bil Simser
    A few content management and site editing tips for power users on this bacon flavoured unicorn morning. The theme here is keep it clean!

    Write "friendly" email addresses

    Remember it's human beings reading your content. So seeing something like "If you have questions please send an email to [email protected]" breaks up the readability. Instead just do the simple steps of writing the content in plain English and going back, highlighting the name and inserting a link (note: you might have to prefix the link with mailto:[email protected]). It makes for a friendlier looking page and hides the ugliness that is sometimes in email addresses.

    Use friendly column and list names

    This is a big pet peeve of mine. When you first create a column or list with spaces, the internal name is changed. The display name might be "My Amazing List of Animals with Large Testicles" but the internal (and link) name becomes "My_x0020_Amazing_x0020_List_x0020_of_x0020_Animals_x0020_with_x0020_Large_x0020_Testicles". What's worse is if you create a publishing page named "This Website is Fueled By a Dolphin's Spleen". Not only is it incorrect grammar, but the apostrophe wreaks havoc on both the internal name for the list (with lots of crazy hex codes) as well as the hyperlink (where everything is uuencoded). Instead create the list with a distinct and compact name, then go back and change it to whatever you want (a sketch of this approach follows at the end of these tips). The end result is a better formed name that you can both script and access in code easier.

    Keep your Views Clean

    When you add a column to a list or create a new list, the default is to add it to the default view. Do everyone a favour and don't check this box! The default view of a list should be something similar to the Title field and nothing else. Keep it clean. If you want to set a default view that's different, go back and create one with all the fields and filtering and sorting columns you want and set it as default. It's a good idea to keep the original AllItems.aspx (note the lack of space in the filename!) easy and unfiltered. It's also a good idea to keep your column count down in views. Don't let every column be added by default and don't add every column just because you can. Create separate views for distinct responsibilities and try to keep the number of columns down to a single screen to prevent horizontal scrolling.

    Simple Navigation

    The Quick Launch is a great tool for navigating around your site but don't use the default of adding all lists to it. Uncheck that box and keep navigation simple. Create custom groupings that make sense, so if you don't have a site with "Documents and Lists" but "Reports and Notices" makes more sense, then do it. Also hide internal lists from the Quick Launch. For example, if most users don't need to see all the lookup tables you might have on a site, don't show them. You can use audience filtering on the Quick Launch if you want to hide admin items from non-admin users, so consider that as an option.

    Enjoy!
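    To illustrate the "create it compact, then rename it" tip, here is a small C# sketch against the SharePoint server object model. The site URL and list names are invented, and the exact calls should be treated as an assumption to verify against your SharePoint version rather than production-ready code.

        using System;
        using Microsoft.SharePoint;

        class ListNamingExample
        {
            static void Main()
            {
                using (SPSite site = new SPSite("http://intranet/sites/zoo"))
                using (SPWeb web = site.OpenWeb())
                {
                    // Create the list with a short, clean name so the internal/URL name stays clean.
                    Guid listId = web.Lists.Add("Animals", "Zoo animal inventory", SPListTemplateType.GenericList);

                    // Then change only the display title to the long, friendly name.
                    SPList list = web.Lists[listId];
                    list.Title = "My Amazing List of Animals";
                    list.Update();
                }
            }
        }

    The same idea works through the UI: create the list as "Animals", then rename it in List Settings, and the hyperlink stays free of encoded spaces.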


  • PCI Encryption Key Management

    - by Unicorn Bob
    (Full disclosure: I'm already an active participant here and at StackOverflow, but for reasons that should hopefully be obvious, I'm choosing to ask this particular question anonymously.) I currently work for a small software shop that produces software that's sold commercially to manage small- to mid-size businesses in a couple of fairly specialized industries. Because these industries are customer-facing, a large portion of the software is related to storing and managing customer information. In particular, the storage (and securing) of customer credit card information. With that, of course, comes PCI compliance. To make a long story short, I'm left with a couple of questions about why certain things were done the way they were, and I'm unfortunately without much of a resource at the moment. This is a very small shop (I report directly to the owner, as does the only other full-time employee), and the owner doesn't have an answer to these questions, and the previous developer is...err...unavailable.

    Issue 1: Periodic Re-encryption

    As of now, the software prompts the user to do a wholesale re-encryption of all of the sensitive information in the database (basically credit card numbers and user passwords) if either of these conditions is true:

    - There are any NON-encrypted pieces of sensitive information in the database (added through a manual database statement instead of through the business object, for example). This should not happen during the ordinary use of the software.
    - The current key has been in use for more than a particular period of time. I believe it's 12 months, but I'm not certain of that. The point here is that the key "expires".

    This is my first foray into commercial solution development that deals with PCI, so I am unfortunately uneducated on the practices involved. Is there some aspect of PCI compliance that mandates (or even just strongly recommends) periodic key updating? This isn't a huge issue for me other than I don't currently have a good explanation to give to end users if they ask why they are being prompted to run it.

    Question 1: Is the concept of key expiration standard, and, if so, is that simply industry-standard or an element of PCI?

    Issue 2: Key Storage

    Here's my real issue... the encryption key is stored in the database, just obfuscated. The key is padded on the left and right with a few garbage bytes and some bits are twiddled, but fundamentally there's nothing stopping an enterprising person from examining our (dotfuscated) code, determining the pattern used to turn the stored key into the real key, then using that key to run amok. This seems like a horrible practice to me, but I want to make sure that this isn't just one of those "grin and bear it" practices that people in this industry have taken to. I have developed an alternative approach that would prevent such an attack, but I'm just looking for a sanity check here.

    Question 2: Is this method of key storage--namely storing the key in the database using an obfuscation method that exists in client code--normal or crazy?

    Believe me, I know that free advice is worth every penny that I've paid for it, nobody here is an attorney (or at least isn't offering legal advice), caveat emptor, etc. etc., but I'm looking for any input that you all can provide. Thank you in advance!
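    Just to make the concern in Issue 2 concrete, the pattern described (a stored value that is padded with garbage bytes and bit-twiddled, with the reversal logic living in client code) amounts to something like the hypothetical C# sketch below. None of this is the actual product's code; it only illustrates why such a scheme is obfuscation rather than encryption, since anyone with the assembly can run the same reversal.

        using System;
        using System.Linq;

        static class KeyObfuscation
        {
            // Hypothetical illustration only: pad with garbage bytes and flip some bits.
            public static byte[] Obfuscate(byte[] key)
            {
                var garbage = new byte[] { 0x13, 0x37, 0x42, 0x7F };
                var twiddled = key.Select(b => (byte)(b ^ 0x5A)).ToArray();
                return garbage.Concat(twiddled).Concat(garbage).ToArray();
            }

            // The inverse necessarily ships in the client assembly, so a decompiler
            // (dotfuscated or not) can recover the real key from the stored value.
            public static byte[] Deobfuscate(byte[] stored)
            {
                return stored.Skip(4).Take(stored.Length - 8)
                             .Select(b => (byte)(b ^ 0x5A))
                             .ToArray();
            }
        }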

    Read the article

  • GameStateManagement and inputs not being recognized

    - by Dave Voyles
    EDIT: I've removed a bit of code from the input class to make this more readable, and updated my StartScreen class, which is now at the bottom. I have the same issues though, but they are explained in my comments at the bottom of this page. It won't let me paste my additional code here (the format comes out crazy), so I've linked to pastebin with the code: pastebin. I've been trying to implement the MS-provided GameStateManagement sample with my game, but it has proven a bit difficult. Really, I'm using Oneksoft's Starter Kit, which uses the MS-provided sample, so they are identical, except for my splash screen. I'm able to get the splash screen to launch, where it informs the player to press A to advance the screen, but this doesn't seem to accept any of my inputs. I've also added Console.WriteLine("Pressing A") under the IsMenuSelect method in Input.cs to verify that it is getting called, but for some reason it is constantly spamming my log, rather than just appearing each time I press it. Not sure why this is happening. I have a bit too much code to post it all here, so I've attached a link to my .rar with my classes, but I'll also leave a bit here which I think may be applicable. https://www.dropbox.com/sh/6ek4uru2jc2ch0k/JTeBWN_3PQ What do you guys think the issue is?

    namespace Pong
    {
        public class Input
        {
            public const int MaxInputs = 4;

            public readonly KeyboardState[] CurrentKeyboardState;
            public readonly GamePadState[] CurrentGamePadState;
            public KeyboardState[] LastKeyboardState;
            public GamePadState[] LastGamePadState;
            public readonly bool[] GamePadWasConnected;

            public Input()
            {
                // Get input state
                CurrentKeyboardState = new KeyboardState[MaxInputs];
                CurrentGamePadState = new GamePadState[MaxInputs];

                // Preserving last states to check for isKeyUp events
                LastKeyboardState = CurrentKeyboardState;
                LastGamePadState = CurrentGamePadState;
            }

            /// <summary>
            /// Checks for a "menu select" input action.
            /// The controllingPlayer parameter specifies which player to read input for.
            /// If this is null, it will accept input from any player. When the action
            /// is detected, the output playerIndex reports which player pressed it.
            /// </summary>
            public bool IsMenuSelect(PlayerIndex? controllingPlayer, out PlayerIndex playerIndex)
            {
                Console.WriteLine("Pressing A");

                return IsNewKeyPress(Keys.Space, controllingPlayer, out playerIndex) ||
                       IsNewKeyPress(Keys.Enter, controllingPlayer, out playerIndex) ||
                       IsNewButtonPress(Buttons.A, controllingPlayer, out playerIndex) ||
                       IsNewButtonPress(Buttons.Start, controllingPlayer, out playerIndex);
            }

            /// <summary>
            /// Checks for a "menu cancel" input action.
            /// The controllingPlayer parameter specifies which player to read input for.
            /// If this is null, it will accept input from any player. When the action
            /// is detected, the output playerIndex reports which player pressed it.
            /// </summary>
            public bool IsMenuCancel(PlayerIndex? controllingPlayer, out PlayerIndex playerIndex)
            {
                return IsNewKeyPress(Keys.Escape, controllingPlayer, out playerIndex) ||
                       IsNewButtonPress(Buttons.B, controllingPlayer, out playerIndex) ||
                       IsNewButtonPress(Buttons.Back, controllingPlayer, out playerIndex);
            }
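
One detail worth checking in the constructor above (offered as a guess, not a confirmed diagnosis): LastKeyboardState and LastGamePadState are assigned the same array references as the Current arrays, so "last" and "current" can never differ and IsNewKeyPress/IsNewButtonPress-style checks never see a transition. The constant logging is expected, by the way, since the Console.WriteLine sits before the checks and the menu screen calls IsMenuSelect every frame. The GameStateManagement sample keeps the Last arrays as separate allocations and copies the values each frame, roughly like this sketch (it assumes the Last arrays were created with new KeyboardState[MaxInputs] / new GamePadState[MaxInputs] in the constructor):

    // Called once per frame before HandleInput. KeyboardState/GamePadState are
    // value types, so element assignment copies the previous frame's snapshot.
    public void Update()
    {
        for (int i = 0; i < MaxInputs; i++)
        {
            LastKeyboardState[i] = CurrentKeyboardState[i];
            LastGamePadState[i] = CurrentGamePadState[i];

            CurrentKeyboardState[i] = Keyboard.GetState((PlayerIndex)i);
            CurrentGamePadState[i] = GamePad.GetState((PlayerIndex)i);
        }
    }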

    Read the article

  • Software and/(x)or Hardware Projects for Pre-School Kids

    - by haylem
    I offered to participate at my kid's pre-school for various activities (yes, I'm crazy like that), and one of them is to help them discover extra-curricular (big word for a pre-school, but for lack of a better one... :)) hobbies, which may or may not relate to a professional activity. At first I thought that it wouldn't be really easy to have pre-schoolers relate to programming or the internal workings of a computer system in general (and I'm more used to teaching middle-school to university-level students), but then I thought there must be a way. So I'm trying to figure out ways to introduce very young kids (3yo) to computer systems in a fun and preferably educational way. Of course, I don't expect them to start smashing the stack for fun and profit right away (or at least not voluntarily, though I could use the occasion for some toddler tests...), but I'm confident there must be ways to get them interested in all of: using the systems, becoming curious about understanding what they do, and interacting with the systems to modify them. I guess this setting is not really relevant after all; it's pretty much the same as if you were aiming to achieve the same thing for your own kids at home.

Ideas
Considering we're talking 3yo pre-schoolers here, and that at this age some kids are already quite confident using a mouse (some even a keyboard, if not for typing, at least to press some buttons they've come to associate with actions) while others have not yet had any interaction with computers of any kind, it needs to be:
- rather basic,
- demonstrated and played with in less than 5 or 10 minutes,
- doable in groups or alone,
- scalable and extendable in complexity to accommodate their varying abilities.
The obvious options are: basic smallish games to play with, interactive systems like LOGO, Kojo, Squeak and clones (possibly even simpler than that), or things like Lego Systems. I guess it is something to reflect on at both the software and the hardware levels: it could be done with a desktop or laptop machine, a tablet, a smartphone (or a crap-phone, for that matter, as long as you can modify it), or even get down to building something from scratch (Raspberry Pi and Arduino being popular options at the moment). It can probably be in the form of games, funny visualizations (which are pretty much games) w/ Prototype, or virtual worlds to explore. It also occurred to me (and I hope this won't offend anyone) that some approaches to teaching pets could work: reward systems, haptic feedback and such things could quickly point a kid in the right direction to understanding how things work, in a similar fashion - I'm not suggesting we shock the kids!

Hmm, Is There an Actual Question in There?
What type of systems do you think might be a good fit, both in terms of hardware and software? Have you seen such systems, or do you have anything in mind to work on? Are you aware of some research in this domain, with tangible results? Any input is welcome. It's not that I don't see options: there are tons, but I have a harder time pinpointing a more concrete and definite type of project/activity, so I figure some of you have valuable ideas or existing ones.

Note: I am not advocating that every kid should learn to program, be interested in computer systems, or that all of them in a class would even care enough to follow such an introduction with more than a blank stare. I don't buy into the "everybody would benefit from learning to program" thing. Wouldn't hurt, but not necessary in any way. But if I can walk out of there with a few of them having smiled using the thing (or heck, cried because others took them away from them), that'd be good enough.

Related Questions
Questions I've seen that seem to complement what I'm looking for, but not exactly for the same age groups or with the same goals:
- Teaching Programming to Kids
- Recommendations for teaching kids math concepts & skills for programming?

    Read the article

  • How to restore/change Alt+Tab behaviour/ram usage and a few other things after Ubuntu upgrade from 11.04 to 11.10?

    - by fiktor
    I use Ubuntu for programming. I recently upgraded it from 11.04 to 11.10. There are some things I don't like in the new version of the Unity desktop interface. I don't actually know whether it is hard to restore the previous behaviour or not, and if it is not, where I should look to do that. I know a bit of programming, but I really don't know much about Linux settings. I used to have 3-6 terminal windows and switch between them with Alt+Tab and Shift+Alt+Tab. I liked half-transparent terminal windows, since with them I could open a web page with some instructions in Firefox, press Alt+Tab and type commands in a console window, while still being able to recognize the text on the web page under it. Now I have problems with my usual work style because of the following.

List of "negative" changes
1. Alt+Tab shows just one icon for all console windows. If I wait some time it does show all the windows, but I don't like to wait. I prefer to remember the order of the windows and press Alt+Tab as many times as I need to switch to the right window.
2. Alt+Shift+Tab to switch in reverse order doesn't work now.
3. Console windows are not transparent any more.
4. When I don't wait and switch to this icon, it shows all console windows together. So even if they were transparent, I wouldn't be able to see anything below them (I can read something only from the window directly under the current one, not a few levels under).
5. When I run a few console windows, I had 740Mb used on Ubuntu 11.04, but I have 1050Mb now. The question is how to get it back to 750Mb or less. I really need my memory, since I use my computer to work with 1512Mb of data and I try to save every 10Mb possible (if it doesn't take too much of the machine's and, more importantly, my time).
6. When I press the Super key I get a field to type the name of the program I want to run. But now it sometimes shows this field and nothing happens when I try to type. Probably the focus is not on the right field.
I don't really mean to restore exactly the same behaviour, but I want to make my work in Ubuntu 11.10 efficient (at least as efficient as in Ubuntu 11.04). I would be happy if there are some ways to accomplish that.

What have I tried
I have installed CompizConfig Settings Manager. I have read this question. However, enabling "Static Application Switcher" makes Alt+Tab crazy: after enabling it, it reports key-binding conflicts with "Ubuntu Unity Plugin"; Alt+Tab switching doesn't change, but Shift+Alt+Tab now works and shows all windows; and memory usage increases. I have tried turning off the Ubuntu Unity Plugin, but this doesn't seem like the right thing to do, since it seems to turn off all menus, a lot of keystrokes and the app launcher, which usually activates with the Super key. I have found that window transparency can be enabled by the "Opacity, Brightness and Saturation" plugin under Accessibility. However, I don't know if enabling it is the right thing to do (at least it increases memory usage).

Update: everything is solved except #3: see my own answer below. I have made a separate question about issue #3 (transparency).

    Read the article

  • Hosted Monitoring

    - by Grant Fritchey
    The concept of using services to take the place of writing a lot of your own code goes way, way back in computing history. The fundamentals of the concept go back to the dawn of computing, with places like IBM hosting time-shares for computing power that you could rent for short periods of time. But things really took off with the building of the Web. Now, with all the growth in virtual machines, hosted machines, and hosted services from vendors like Amazon and Microsoft, the need to keep all of your software locally on physical boxes is just going the way of the dodo. There will likely always be some pieces of software that you keep on machines on your property or on your person, but the concept of keeping fundamental services locally is going away. As someone put it to me once, if you were starting a business right now, would you bother setting up an Exchange server to manage your email or would you just go to one of the external mail services for everything? For most of us (who are not Exchange admins) the answer is pretty easy. With all this momentum toward having external services manage more and more of the infrastructure that's not business-unique, why would you burn up a server and license instance setting up monitoring for your SQL Servers? Of course, some of you are dealing with hyper-sensitive data that might require, through law or treaty, that you lock it down and never expose it to the intertubes, but most of us are not. So, what if someone else took on the basic hassle of setting up monitoring on your systems? That's what we're working on here at Red Gate. Right now it's a private test, but we're growing it and developing it and it'll be going to a public beta, probably (hopefully) this year. I'm running it on my machines right now. The concept is pretty simple. You put a relay on your server, poke a hole in your firewall for it, and we start monitoring your server using SQL Monitor. It's actually shocking how easy it is to get going. You still have to adjust your alerting thresholds, but that's a standard part of alerting. Your pain threshold and my pain threshold for any given alert may be different. But from there, we do all the heavy lifting, keeping your data online and available, providing you with access to the information about how your servers are behaving, everything. Maybe it's just me, but I'm really excited by this. I think we're getting to a place where we can really help small and medium-sized businesses get a monitoring solution in place, quickly and easily. All you crazy busy, and possibly accidental, DBAs and system admins can finally set up monitoring without taking all the time to configure systems, run installs, and all the rest. You just have to tweak your alerts and you're ready to run. If you are interested in checking it out, you can apply for the closed beta through the Monitor web page.

    Read the article

  • UnauthorizedAccessException on MemoryMappedFile in C# 4.

    - by Kevin Nisbet
    Hey, I wanted to play around with using a MemoryMappedFile to access an existing binary file. Is this even at all possible, or am I a crazy person? The idea would be to map the existing binary file directly to memory for some preferably higher-speed operations. Or to at least see how these things work.

    using System.IO.MemoryMappedFiles;

    System.IO.FileInfo fi = new System.IO.FileInfo(@"C:\testparsercap.pcap");
    MemoryMappedFileSecurity sec = new MemoryMappedFileSecurity();
    System.IO.FileStream file = fi.Open(System.IO.FileMode.Open,
        System.IO.FileAccess.ReadWrite, System.IO.FileShare.ReadWrite);

    MemoryMappedFile mf = MemoryMappedFile.CreateFromFile(file, "testpcap", fi.Length,
        MemoryMappedFileAccess.Read, sec, System.IO.HandleInheritability.Inheritable, true);

    MemoryMappedViewAccessor FileMapView = mf.CreateViewAccessor();

    PcapHeader head = new PcapHeader();
    FileMapView.Read<PcapHeader>(0, out head);

I get "System.UnauthorizedAccessException was unhandled (Message=Access to the path is denied.)" on the mf.CreateViewAccessor() line. I don't think it's file permissions, since I'm running as a nice insecure administrator user, and there aren't any other programs open that might have a read lock on the file. This is on Vista with UAC disabled. If it's simply not possible and I missed something in the documentation, please let me know. I could barely find anything at all referencing this feature of .NET 4.0. Thanks!
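
One likely mismatch to check (offered as a sketch, not a verified fix): the mapping is created with MemoryMappedFileAccess.Read, but the parameterless CreateViewAccessor() asks for a read/write view, which a read-only mapping refuses. Requesting a read-only view over the whole file lines the two up:

    // Offset 0 with size 0 means "map from the start to the end of the file".
    MemoryMappedViewAccessor FileMapView =
        mf.CreateViewAccessor(0, 0, MemoryMappedFileAccess.Read);

    PcapHeader head = new PcapHeader();
    FileMapView.Read<PcapHeader>(0, out head);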

    Read the article

  • MVVM Prism Nested Regions Can't Find Child Regions

    - by Garry Clark
    I have a Menu (Telerik RadMenu) with nested regions defined in the Shell. In my modules I register each module's menu or toolbar items with these regions. Everything works fine for the root regions, but when I try to add something to a child region, such as the File region on the Menu, I get the error "The exception message was: The region manager does not contain the FileMenuRegion region." However, like I said, if I change this code

    regionManager.Regions[RegionNames.FileMenuRegion].Add(menuItem);

to this

    regionManager.Regions[RegionNames.MainMenuRegion].Add(menuItem);

everything works fine. Below is the XAML for my menu so you can see the region names and how they are constructed. Any help would be greatly appreciated as this is bewildering and driving me crazy.

Menu

    <telerikNavigation:RadMenu x:Name="menuMain"
                               DockPanel.Dock="Top"
                               prismrgn:RegionManager.RegionName="{x:Static i:RegionNames.MainMenuRegion}"
                               telerik:StyleManager.Theme="{Binding Source={StaticResource settings}, Path=Default.CurrentTheme}">
        <telerikNavigation:RadMenuItem Header="{x:Static p:Resources.File}"
                                       prismrgn:RegionManager.RegionName="{x:Static i:RegionNames.FileMenuRegion}">
            <telerikNavigation:RadMenuItem Header="{x:Static p:Resources.Exit}"
                                           Command="{Binding ExitCommand}">
                <telerikNavigation:RadMenuItem.Icon>
                    <Image Source="../Resources/Close.png" Stretch="None" />
                </telerikNavigation:RadMenuItem.Icon>
            </telerikNavigation:RadMenuItem>
        </telerikNavigation:RadMenuItem>
    </telerikNavigation:RadMenu>
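
One workaround often suggested for "region not found" timing problems like this (a sketch under assumptions: it uses Prism's view-discovery registration, and ExitMenuItemView is an illustrative view type, not something from the project above): register the view against the region name instead of resolving the region at module-initialization time, so the RegionManager populates FileMenuRegion whenever that nested region is actually created.

    using Microsoft.Practices.Prism.Modularity;
    using Microsoft.Practices.Prism.Regions;

    public class MenuModule : IModule
    {
        private readonly IRegionManager regionManager;

        public MenuModule(IRegionManager regionManager)
        {
            this.regionManager = regionManager;
        }

        public void Initialize()
        {
            // Deferred registration: the view is added when FileMenuRegion
            // appears, instead of requiring the region to exist right now.
            regionManager.RegisterViewWithRegion(RegionNames.FileMenuRegion,
                                                 typeof(ExitMenuItemView));
        }
    }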

    Read the article

  • sqlite3 problem..need help..urgently..HELP**

    - by summer
    - (void) hydrateDetailViewData {
        // If the detail view is already hydrated, do not get it from the database.
        if (isDetailViewHydrated)
            return;

        if (detailStmt == nil) {
            const char *sql = "select snapTitle, snapDesc from Snap where snapID =?";
            if (sqlite3_prepare_v2(database, sql, -1, &detailStmt, NULL) != SQLITE_OK)
                NSAssert1(0, @"Error while creating detail view statement. '%s'", sqlite3_errmsg(database));
            NSLog(@"SQLite= %d", sqlite3_step(detailStmt));
        }

        // Execute the SQL statement on the database, and make sure it executed properly.
        if (sqlite3_step(detailStmt) == SQLITE_ROW) {
            self.snapDescription = [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 1)];
        }

Looking at the above code, can someone tell me what's wrong and why I can't get it to load in my detail view? I basically followed the tutorial in the iPhone SDK article, yet I am having this error and I don't know why. I need help urgently because I have been at this for days!! It's driving me crazy! I can even send my project if you guys need to take a look. Help please!!

Error msg:
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[UIViewController _loadViewFromNibNamed:bundle:] was unable to load a nib named "DetailView"'
2010-03-15 16:35:55.202 Snap2Play[58213:20b]

    Read the article
