Search Results

Search found 11536 results on 462 pages for 'whole foods market'.


  • Dual LAN Printing

    - by Christopher
    I want to use Ubuntu 10.10 Server in a classroom, a computer lab whose bandwidth is provided by a local cable ISP. That's no problem, though the school network has an IP printer that I want to use. I cannot reach the printer through the cable Internet. But, I have two network cards. How is it possible to use both networks at once? eth0 (static 192.168.1.254) is plugged into a four-port router, 192.168.1.1. On the public side of the four-port router is Internet provided by the cable company. I also have the classroom workstations plugged into a switch. The switch is plugged into the four-port router. The whole classroom is wired into the cable Internet. The other NIC, eth1, could it be plugged into an Ethernet jack in the wall? It uses the school network, and I might receive by DHCP an IP address like 10.140.10.100, with the printer on maybe 10.120.50.10. I was thinking about installing the printer on the server so that it could be shared with the workstations. But how does this work? Can I just plug eth1 into the school network and access both LANs? Thanks for any insight, Chris
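
    One way this is commonly handled: keep the default route on the cable side and add a static route for the school's 10.x addresses via eth1. A minimal sketch, assuming the school's DHCP gateway is 10.140.10.1 (that address is a guess; use whatever the lease actually gives you):

        # grab a lease on the school-network NIC
        sudo dhclient eth1
        # send school 10.x traffic (including the printer) out eth1;
        # the gateway address here is an assumption -- use the one DHCP handed out
        sudo ip route add 10.0.0.0/8 via 10.140.10.1 dev eth1
        # the default route should still point at the cable router
        ip route show    # expect: default via 192.168.1.1 dev eth0

    If the school's DHCP also installs its own default route, delete it so Internet traffic keeps flowing through the cable side. The printer can then be added to CUPS by its 10.x address and shared to the workstations over eth0.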

    Read the article

  • Commands don't have permission when using absolute path

    - by Markos
I have folders set up this way: /srv/samba/video

    getfacl /srv/samba/video
    # file: srv/samba/video
    # owner: root
    # group: nogroup
    user::rwx
    group::---
    group:sambaclients:rwx
    group:deluge:rwx
    mask::rwx
    other::---
    default:user::rwx
    default:group::---
    default:group:sambaclients:rwx
    default:group:deluge:rwx
    default:mask::rwx
    default:other::---

    That means user deluge has rwx on the folder /srv/samba/video. However, when running commands as user deluge, I am getting weird permission errors. When in the folder /srv/samba/video: sudo -u deluge mkdir foo works flawlessly. But when using an absolute path: sudo -u deluge mkdir /srv/samba/video/foo gives permission denied. When running sudo -u deluge id, I get the output uid=113(deluge) gid=124(deluge) skupiny=124(deluge) ("skupiny" is "groups" in my locale), which shows that user deluge is indeed in group deluge. Also, the behavior was the same when I gave the permissions to user deluge, not just group deluge. When executing as a non-system user, it does work. The reason I want to use absolute paths is that I am using an automatically triggered post-download script which extracts some files into the folder. I have spent way too many hours trying to solve this problem myself. mkdir isn't the only command that fails; touch does the same thing, so I suspect it's not mkdir's fault. If you need more info, I will try to put it in here, just ask. Thanx in advance. Edit: It seems that the root of the problem is the ACL set on the parent folder /srv/samba, which indeed does not grant permissions to deluge (but neither denies them):

    getfacl /srv/samba
    # file: srv/samba
    # owner: root
    # group: nogroup
    user::rwx
    group::---
    group:sambaclients:rwx
    mask::rwx
    other::---
    default:user::rwx
    default:group::---
    default:group:sambaclients:rwx
    default:mask::rwx
    default:other::---

    If I grant the permission also on this folder, it suddenly starts to work, so I believe the ACL on /srv/samba is somehow denying the permissions to deluge. So the question is: how do I set ACLs on both /srv/samba and /srv/samba/video so that sambaclients have access to the whole of /srv/samba and its subdirectories, while deluge has access only to /srv/samba/video and its subdirectories?
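
    The Edit above points at the classic cause: resolving an absolute path requires execute (traverse) permission on every directory component, so deluge must be able to traverse /srv/samba even if it cannot read it. A sketch of one way to set that up (untested against the poster's exact setup):

        # traverse-only on the parent: deluge can pass through but not list or write it
        sudo setfacl -m g:deluge:--x /srv/samba
        # full access, now and for newly created entries, on the video folder
        sudo setfacl -m g:deluge:rwx /srv/samba/video
        sudo setfacl -d -m g:deluge:rwx /srv/samba/video
        getfacl /srv/samba /srv/samba/video    # verify the masks were not narrowed

    sambaclients keeps its existing rwx on /srv/samba, while deluge gains no read access outside /srv/samba/video.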

    Read the article

  • Is rotating the lead developer a good or bad idea?

    - by Renesis
I work on a team that has been flat organizationally since its creation several months ago. My manager is non-technical, and this means that our whole team is responsible for decision-making. My manager is beginning to realize that there are several benefits to having a lead developer, both for his sake (a single point of contact and a single responsible party for tasks) and ours (dispute resolution, organized technical guidance, etc.). Because the team has been flat, one concern is that picking one lead developer may discourage the others. A non-developer suggested to my manager that rotating the lead developer is a possible way to avoid this issue: one developer would be lead one month, another the next, and so on. Is this a good idea? Why or why not? Keep in mind that this means all developers would take a turn as lead; all developers are good, but not necessarily equally suited to leadership. And if it is not a good idea, suppose I am likely the best candidate for lead developer: how do I recommend that we avoid this approach without looking like it's merely for selfish reasons? (In other words, the team is small enough that anyone recommending a single leader is likely to appear to be recommending themselves, especially those who have been part of the team longer.)

    Read the article

  • Big project layout : adding new feature on multiple sub-projects

    - by Shiplu
I want to know how to manage a big project with many components with a version control management system. In my current project there are 4 major parts: Web, Server, Admin console, and Platform. The web and server parts use 2 libraries that I wrote. In total there are 5 git repositories and 1 mercurial repository. The project build script is in the Platform repository; it automates the whole building process. The problem is that when I add a new feature that affects multiple components, I have to create a branch for each of the affected repos, implement the feature, and merge it back. My gut feeling is "something is wrong". So should I create a single repo and put all the components there? I think branching will be easier in that case. Or do I just keep doing what I am doing right now? In that case, how do I solve this problem of creating a branch on each repository?
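
    Until the layout question is settled, the per-repo branching can at least be scripted. A hedged sketch, assuming the five git repos sit under one parent directory (the names are made up, and the mercurial repo would need the same treatment with hg branch):

        # open a feature branch in every affected repo
        for repo in web server admin platform mylib; do
          (cd "$repo" && git checkout -b feature/new-thing)
        done
        # ...implement the feature, then merge each branch back
        for repo in web server admin platform mylib; do
          (cd "$repo" && git checkout master && git merge --no-ff feature/new-thing)
        done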

    Read the article

  • Dynamic/Adaptive RLE

    - by Lucius
So, I'm developing a 2D, tile-based game and a map maker thingy - all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each one of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I already spent a good amount of time searching for solutions, and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow production down too much (and I'm on a somewhat tight schedule). Well, worst case scenario I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this: private HashMap< Integer, Integer [][]
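
    For what it's worth, a sketch of the per-row idea from the question: keep each row run-length encoded and re-encode only the row a tile edit touches. At 128 tiles per row, the O(width) re-encode is tiny, so edits in the map maker stay cheap. Class and field names are hypothetical, not from the original project:

        // One run-length encoded row of tile ids; a layer would hold 128 of these.
        import java.util.ArrayList;
        import java.util.List;

        class RleRow {
            // parallel lists: values.get(i) repeats lengths.get(i) times
            private final List<Integer> values = new ArrayList<>();
            private final List<Integer> lengths = new ArrayList<>();
            private final int width;

            RleRow(int width, int fill) {
                this.width = width;
                values.add(fill);
                lengths.add(width);
            }

            int get(int x) {
                for (int i = 0, pos = 0; i < lengths.size(); pos += lengths.get(i++)) {
                    if (x < pos + lengths.get(i)) return values.get(i);
                }
                throw new IndexOutOfBoundsException("x=" + x);
            }

            // Setting one tile touches only this row: decode, mutate, re-encode.
            void set(int x, int tile) {
                int[] row = new int[width];
                for (int i = 0, pos = 0; i < lengths.size(); i++)
                    for (int j = 0; j < lengths.get(i); j++) row[pos++] = values.get(i);
                row[x] = tile;
                values.clear();
                lengths.clear();
                int run = 1;
                for (int i = 1; i <= width; i++) {
                    if (i < width && row[i] == row[i - 1]) { run++; continue; }
                    values.add(row[i - 1]);
                    lengths.add(run);
                    run = 1;
                }
            }
        }

    Only the edited row is ever re-encoded; reads just walk the runs, so the other 127 rows of the layer are untouched by a change.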

    Read the article

  • SQLAuthority News – My Evaluation of Singapore SharePoint Conference

    - by pinaldave
Earlier this year I presented at the South East Asia SharePoint Conference (Oct 26-27, 2010, Singapore). It felt very good to be presenting in Singapore, as I was the only SQL speaker at the event. The event was filled with SharePoint enthusiasts and lots of other experts from all around the globe. It was one of the best organized events I have attended in the subcontinent. I just received my feedback scores from the event. I was very much surprised and stunned: my ratings are very high, and my demos were considered among the best of the whole event. I am not sure how much of the feedback I can share with the community, as the organizers did not specify, but I am very certain that I am allowed to share my own feedback.
    Speaker – 4.39 (best score 4.74, average 3.84)
    Contents – 4.37 (best score 4.39, average 3.65)
    Demo – 4.48 (best score 4.48, average 3.61)
    I am very glad that all of my efforts to present at a SharePoint Conference finally paid off. I was very worried earlier about whether attendees would accept me, as I was coming as a speaker from a foreign technology and no one knew me there. I must thank all of you for the same. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Sorry. Not Much Happened Today!

    - by steve.diamond
And THAT blog headline is dedicated to Seth Godin, who recently wrote that unlike their print brethren, digital media outlets aren't burdened with having to make their articles long enough to match the number of surrounding ad pages. He states that just because you CAN write more doesn't mean you SHOULD. Well, you don't have to tell me that twice. So to continue my rambling entry today, I'd suggest you read this post by Donal Daly on 10 steps to intelligent Social CRM for Sales. No seriously, read it. It's almost like Groundswell CliffsNotes for salespeople. I particularly love his third point. Of course I haven't "gotten" it yet, but I've got a whole lifetime, for crying out loud. Seriously, this is a great read and a fast one. And finally, in the department of longer reads, thanks and a shout-out to Paul Greenberg for mentioning Oracle's new iPad app for Siebel CRM in his ZDNet blog. Hey, I warned you... not much happened today. Per se!

    Read the article

  • Agile Testing Days 2012 – Day 2 – Learn through disagreement

    - by Chris George
I think I was in the right place! During Day 1 I kept reading tweets about the Lean Coffee that had happened earlier that morning. It intrigued me, and I figured in for a penny, in for a pound, so I set my alarm for 6:45am. After the awards night the night before, it was _really_ hard getting up when it went off, but I did, and after a very early breakfast set off for the 10 min walk to the Dorint. With Lean Coffee due to start at 07:30, I arrived at the hotel and made my way to one of the hotel bars. I soon realised I was in the right place, as although the bar was empty, there was a table with post-its and pens! This MUST be the place! The premise of Lean Coffee is to have several small timeboxed discussions. Everyone writes down what they would like to discuss on post-its that are then briefly explained and submitted to the pile. Once everyone is done, the group dot-votes on the topics. The topics are then sorted by the dot vote counts and the discussions begin. Each discussion had 8 mins to start with, which prevented the discussions from drifting too far off topic. After the time elapsed, the group voted on whether to extend the discussion by a further 4 mins or move on. Several discussions were had around training, soft skills etc. The conversations were really interesting and there were quite a few good ideas. Overall it was a very enjoyable experience, certainly worth the early start!

    Make Melly Happy
    Following Lean Coffee was real coffee, and much needed that was! The first keynote of the day was "Let's help Melly (Changing Work into Life)" by Jurgen Appelo. [Slide: draw lines to track happiness.] This was a very interesting presentation and set the day up nicely. The theme of the keynote was that projects are about the people, more so than the actual tasks. He started by showing a photo of an employee, 'Melly', who looked happy enough. He then stated that she looked happy but actually hated her job. In fact 50% of Americans hate their jobs, and he went on to say that, the world over, 50% of people hate their jobs. Jurgen talked about many ways to reduce the feedback cycle, not only of the project, but of the people management. Ideas such as happiness doors, happiness tracking (drawing lines on a wall indicating your happiness for that day), and kudo boxes (to compliment a colleague for good work). All of these (and more) ideas stimulate conversation amongst the team and lead to early detection of issues and investigation of solutions. I've massively simplified Jurgen's keynote and have certainly not done it justice, so I will post a link to the video once it's available.

    Following more coffee, the next talk was "How releasing faster changes testing" by Alexander Schwartz. This is a topic very close to our hearts at the moment, so I was eager to find out any juicy morsels that could help us achieve more frequent releases, and Alex did not disappoint. He started off by confirming something that I have been a firm believer in for a number of years now: adding more people can do more harm than good when trying to release. This is for a number of reasons, but chiefly that adding new people to a team at such a critical time can be more of a drain on resources than a gain. The alternative is to have the whole team share responsibility for faster delivery, so the whole team is responsible for quality and testing. Obviously you will have the test engineers on the project who have the specialist skills, but there is no reason the entire team cannot do exploratory testing on the product. This links nicely with the Developer Exploratory testing presented by Sigge on Day 1, and is certainly something that my team are really striving towards. Focus on cycle time: what can be done to reduce the time between dev cycles and release cycles? What stops a release? What delays a release? All good solid questions that can be answered. Alex suggested that perhaps the product doesn't need to be fully tested. Doing less testing will reduce the cycle time and therefore get the release out faster. He suggested a risk-based approach to planning what testing needs to happen. Reducing testing could have an impact on revenue if it causes harm to customers, so test the 'right stuff'! Determine a set of tests that are 'face saving' or 'smoke' tests; these cover the core functionality of the product and aim to prevent major embarrassment if those areas were to fail! Amongst many other very good points, Alex suggested that a good approach would be to release after every new feature is added: do a bit of work -> release, do some more work -> release. By releasing small increments of work, the impact on the customer of bugs being introduced is reduced.

    Red Pill, Blue Pill
    The second keynote of the day was "Adaptation and improvisation – but your weakness is not your technique" by Markus Gartner, and it proved to be another very good presentation. It started off quoting lines from The Matrix which relate to adapting, improvising, realisation and mastery. It had a lot of nerds in the room smiling! Markus went on to explain how, through deliberate practice (and a lot of it!), you can achieve mastery, but then you never stop learning. Through methods such as code retreats, testing dojos and workshops you can continually improve and learn. The code retreat idea was one that interested me. It involved pairing to write an automated test for, say, 45 mins, then deleting all the code, finding a different partner and writing the same test again! This is another keynote where the video will speak louder than anything I can write here! Markus did elaborate on something that Lisa and Janet had touched on yesterday whilst busting the myth that "Testers Must Code". Whilst it is true that to be a tester you don't need to code, it is becoming more common that there is a crossover happening where more testers are coding and more programmers are testing. Markus made a special distinction between programmers and developers, as testers develop test code too, so this helped to make that clear.

    "Extending Continuous Integration and TDD with Continuous Testing" by Jason Ayers was my next talk after lunch. We already do CI and a bit of TDD on my project team, so I was interested to see what this continuous testing thing was all about and whether it would actually work for us. At the start of the presentation I was of the opinion that it just would not work for us because our tests are too slow, and that would be the case for many people. Jason started off by setting the scene and saying that those doing TDD spend between 10-15% of their time waiting for tests to run. This can be reduced by testing less often or reducing the test time, but this then increases the risk of introduced bugs not being spotted quickly. Therefore, in comes Continuous Testing (CT). CT systems run your unit tests whenever you save some code and run them in the background so you can continue working. This is a really nice idea, but to do this, your tests must be fast, independent and reliable. The latter two should be the case anyway, and the first is ideal, but hard! Jason made several suggestions for making tests fast: firstly, keep the scope of each test small; secondly, spin off any expensive tests into a suite which is run, perhaps, overnight, or outside of the CT system at any rate. So this started to change my mind: perhaps we could re-engineer our tests and continuously run the quick ones to give an element of coverage. This talk was very interesting, and I've already tried a couple of the tools mentioned on our product (Mighty Moose and NCrunch). Sadly, due to the way our solution is built, it currently doesn't work, but we will look at whether we can make it work because this has the potential to be a mini-game-changer for us.

    Using the Wrong Data
    The final keynote of the day was "Reinventing software quality" by Gojko Adzic. [Slide: Gojko's hierarchy of quality.] He opened the talk with the statement "We've got quality wrong because we are using the wrong data"! Gojko then went on to explain that we should judge a bug by whether the customer cares about it, not by whether we think it's important. Why spend time fixing issues that the customer just wouldn't care about, and release months later because of it? Surely it's better to release now and get customer feedback? This was another reference to the idea that it's better to build the right thing wrong than the wrong thing right. Get feedback early to make sure you're making the right thing. Gojko then showed something very analogous to Maslow's hierarchy of needs:
    Successful – does it contribute to the business?
    Useful – does it do what the user wants?
    Usable – does it do what it's supposed to without breaking?
    Performant/Secure – is it secure, and is the performance acceptable?
    Deployable/Functionally OK – can it be deployed without breaking?
    He then explained that user stories should focus on change. In other words, they should focus on the user's needs, not the user's process. Describe what the change will be and how that change will happen, then measure it!

    Networking and Beer
    Following the day's closing keynote, there were drinks and nibbles for the 'networking' evening. This was a great opportunity to talk to people. I find approaching strangers very uncomfortable, but once again, when in Rome! Pete Walen and I had a long conversation about only fixing issues that the customer cares about versus fixing issues that make you proud of your software! Without saying much, and asking the right questions, Pete made me re-evaluate my thoughts on the matter. Clever, very clever! Oh, and he 'bought' me a beer!

    My Takeaway Triple from Day 2:
    Release small and release often, to minimize issues creeping in and get faster feedback from 'the real world'.
    Focus on issues that the customers care about, not what we think is important.
    It's okay to disagree with someone, even if they are well-respected agile testing gurus; that's how discussion and learning happens!

    Read the article

  • Camunda BPM 7.0 on WebLogic 12c

    - by JuergenKress
If we go on tour together with Oracle, I think we have to have camunda BPM running on the Oracle WebLogic application server 12c (WLS for short). And one of our enterprise customers asked, so I invested a Sunday and got it running (okay, to be honest, I needed quite some help from our Java EE server guru Christian). In this blog post I give a step-by-step description of how to run camunda BPM on WLS. Please note that this is not an official distribution (which would include sophisticated QA, comprehensive documentation and a proper distribution) - it was my personal hobby. And I did not fire the whole test suite against WLS, so there might be some issues. We will do the real productization as soon as we have a customer for it (let us know if this is interesting for you). Necessary steps: after installing and starting up WLS (I used the zip distribution of WLS 12c, by the way) you have to:
    Add a datasource
    Add shared libraries
    Add a resource adapter (for the Job Executor, using a proper WorkManager from WLS)
    Add an EAR starting up one process engine
    Add a WAR file containing the REST API
    Add other WAR files (e.g. cockpit) and your own process applications
    Actually that sounds like more work than it is ;-) So let's get started. Add a datasource: add it via the Administration Console (or any other convenient way on WLS - I should admit that personally I am not the WLS expert). Make sure that you target it on your server - this is not done by default. Read the full article here. For regular information become a member of the WebLogic Partner Community: visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Is unit testing development or testing?

    - by Rubio
    I had a discussion with a testing manager about the role of unit and integration testing. She requested that developers report what they have unit and integration tested and how. My perspective is that unit and integration testing are part of the development process, not the testing process. Beyond semantics what I mean is that unit and integration tests should not be included in the testing reports and systems testers should not be concerned about them. My reasoning is based on two things. Unit and integration tests are planned and performed against an interface and a contract, always. Regardless of whether you use formalized contracts you still test what e.g. a method is supposed to do, i.e. a contract. In integration testing you test the interface between two distinct modules. The interface and the contract determine when the test passes. But you always test a limited part of the whole system. Systems testing on the other hand is planned and performed against the system specifications. The spec determines when the test passes. I don't see any value in communicating the breadth and depth of unit and integration tests to the (systems) tester. Suppose I write a report that lists what kind of unit tests are performed on a particular business layer class. What is he/she supposed to take away from that? Judging what should and shouldn't be tested from that is a false conclusion because the system may still not function the way the specs require even though all unit and integration tests pass. This might seem like useless academic discussion but if you work in a strictly formal environment as I do, it's actually important in determining how we do things. Anyway, am I totally wrong? (Sorry for the long post.)

    Read the article

  • Packing jar files into library jar files

    - by Hillel
    Firstly, this question is not about packing a simple jar file (e.g. lwjgl) into a runnable jar file. I know how to do this using JarSplice. So if I have a game which uses JInput, I will pack my game jar and jinput.jar using JarSplice and enter the natives in the process. The problem arises when I want to create a custom library that uses JInput, and then pack that into my games. See, the whole idea of writing a game library is that I don't ever have to even copy code like the wrapper I wrote for JInput Controller, and I always have a definitive version inside a library jar. Basically what I wanna do is create a jar file of my library, pack jinput.jar into it using JarSplice, possibly with the natives as well, and then when I want to export a jar of my game, I either export it automatically through Eclipse with the library jar, or, if that doesn't work, use JarSplice. I've tried several solutions, and nothing works. When I try to pack the game jar and the library jar using JarSplice, I get an error saying that there's either duplicate .project or .classpath. When I try to export my game through Eclipse with the library jar, it won't run (which is to be expected), but then, if I try to attach the natives with JarSplice, it doesn't give me any errors but the jar doesn't run. I'm not expecting anyone to solve this, but if anyone has an idea, something that will allow me to never look at the Gamepad code ever again, that would be awesome. I don't care if I have to package my library jar using JarSplice 5 times, and then do the same with the game jar, as long as it works. Otherwise I'll just have to copy the Gamepad class into every project alongside the library jar. :(
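
    The "duplicate .project or .classpath" error suggests the Eclipse project metadata got exported into both jars. One workaround sketch (jar names are placeholders): strip those entries before handing the jars to JarSplice, or untick .project/.classpath in Eclipse's JAR export dialog in the first place.

        # remove the stray Eclipse metadata from each jar (a jar is just a zip archive)
        zip -d mylibrary.jar .project .classpath
        zip -d mygame.jar .project .classpath
        # then merge the cleaned jars, jinput.jar and the natives with JarSplice as usual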

    Read the article

  • Designing configuration for subobjects

    - by Stefano Borini
I have the following situation: I have a class (let's call it Main) encapsulating a complex process. This class in turn orchestrates a sequence of subalgorithms (AlgoA, AlgoB), each one represented by an individual class. To configure Main, I have a configuration stored in a configuration object MainConfig. This object contains all the config information for AlgoA and AlgoB with their specific parameters. AlgoA has no interest in the configuration of AlgoB, so technically I could have (and in practice I have) contained MainConfig.AlgoAConfig and MainConfig.AlgoBConfig instances, and initialize as AlgoA(MainConfig.AlgoAConfig) and AlgoB(MainConfig.AlgoBConfig). The problem is that there is some common configuration data. One example is the printLevel. I currently have MainConfig.printLevel. I need to propagate this information to both AlgoA and AlgoB, because they have to know how much to print. MainConfig also needs to know how much to print. So the solutions available are: 1. I pass the MainConfig to AlgoA and AlgoB. This way, AlgoA technically has access to the whole configuration (even that of AlgoB) and is less self-contained. 2. I copy MainConfig.printLevel into AlgoAConfig and AlgoBConfig, so I basically have the printLevel information repeated three times. 3. I create a third configuration class PrintingConfig. I have an instance variable MainConfig.printingConfig, and then pass AlgoA both MainConfig.AlgoAConfig and MainConfig.printingConfig. Have you ever run into this situation? How did you solve it? Which one is stylistically clearer to a new reader of the code?
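
    A sketch of the third option, which keeps each algorithm's view limited to its own slice plus the shared printing settings. Names follow the question; the wiring is illustrative, not prescriptive:

        // Shared settings live in their own small object...
        class PrintingConfig { int printLevel; }
        class AlgoAConfig { /* AlgoA-specific parameters */ }
        class AlgoBConfig { /* AlgoB-specific parameters */ }

        class MainConfig {
            PrintingConfig printing = new PrintingConfig();
            AlgoAConfig algoA = new AlgoAConfig();
            AlgoBConfig algoB = new AlgoBConfig();
        }

        // ...and each algorithm receives only what it needs.
        class AlgoA {
            AlgoA(AlgoAConfig cfg, PrintingConfig printing) { /* ... */ }
        }
        class AlgoB {
            AlgoB(AlgoBConfig cfg, PrintingConfig printing) { /* ... */ }
        }

        class Main {
            Main(MainConfig cfg) {
                new AlgoA(cfg.algoA, cfg.printing);  // no access to AlgoB's config
                new AlgoB(cfg.algoB, cfg.printing);  // no access to AlgoA's config
            }
        }

    This keeps AlgoA ignorant of AlgoB's configuration (unlike option 1) while avoiding the triplicated printLevel of option 2.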

    Read the article

  • Is comparing an OO compiler to a SQL compiler/optimizer valid?

    - by Brad
I'm now doing a lot of SQL development at my new job, whereas before I was doing object-oriented desktop app work. I keep running across very large scripts (thousands of lines) and wanting to refactor in some way. I am seeing that SQL is a different sort of beast and it's probably fine to have these big scripts for the most part, but while explaining this to me, people are also insisting that the whole idea of refactoring is bad: that tools like the .NET compiler are actually burdened by refactored code, and that a big wall of code is more efficient and better design than code designed for reuse, readability and scalability. The other argument is that OO compilers are almost dangerously inefficient, don't have efficient memory management, or run too many CPU instructions compared to older "simpler" compilers and compared to SQL. Are these valid complaints? Even if some compiler like a C compiler is modestly more "efficient" (whatever that means at this high a level without seeing code), would you want to write applications in C over C# or Java? Is comparing an OO compiler to a SQL compiler/optimizer even valid?

    Read the article

  • So are we ever getting the technological singularity

    - by jsoldi
I'm still waiting for an AI robot that will pass the Turing test. I keep going back to http://www.a-i.com/ and nothing. I don't know much about AI, but did anyone ever try to make a genetic algorithm whose evolution algorithm itself evolves? Or how about one whose algorithm that makes the genetic algorithm evolve, evolves? Or one whose genetic algorithm that makes the genetic algorithm that makes the genetic algorithm evolve, evolves? Or how about an algorithm that abstracts all this into a potentially infinitely deep tree of genetic evolution algorithms? Aren't we just failing as programmers? And I don't think we can blame processor speed. If you make an application that simulates consciousness, you will get a Nobel prize no matter how many hours it takes to respond to your questions. But nobody did it. It almost reminds me of Randi's $1,000,000 paranormal challenge. As I keep going back to AI chat bots, they keep getting better at changing the subject in a way that seems natural. But if I tell them something like "if 'x' is 2 then what's two times 'x'?" then they don't have a clue what I'm talking about. And I don't think they need a whole human brain simulation to be able to answer something like that. They don't need feelings or perception. This is just language and logic. I don't think my perception of the color red gives me the ability to understand that if 'x' is 2 then two times 'x' is 4. I'm sure we are just missing some elemental principle we cannot grasp because it's probably stuck behind our eyes. What do you think?
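
    The first rung of that ladder actually exists under the name "self-adaptive" evolution strategies: each individual carries its own mutation rate, so a parameter steering evolution is itself evolved. A toy sketch (the fitness function and constants are arbitrary illustrations):

        import java.util.Arrays;
        import java.util.Comparator;
        import java.util.Random;

        public class SelfAdaptiveGA {
            static final Random RNG = new Random();

            static class Individual {
                double[] genes = new double[8];
                double mutationRate;          // evolves along with the genes
                double fitness() {            // toy target: all genes near 1.0
                    double f = 0;
                    for (double g : genes) f -= (g - 1.0) * (g - 1.0);
                    return f;
                }
            }

            static Individual mutate(Individual p) {
                Individual c = new Individual();
                // the mutation rate mutates first, then drives the gene mutation
                c.mutationRate = Math.max(0.001,
                        p.mutationRate * Math.exp(0.2 * RNG.nextGaussian()));
                for (int i = 0; i < p.genes.length; i++)
                    c.genes[i] = p.genes[i] + c.mutationRate * RNG.nextGaussian();
                return c;
            }

            public static void main(String[] args) {
                Individual[] pop = new Individual[50];
                for (int i = 0; i < pop.length; i++) {
                    pop[i] = new Individual();
                    pop[i].mutationRate = 0.5;
                }
                for (int gen = 0; gen < 200; gen++) {
                    Arrays.sort(pop, Comparator
                            .comparingDouble(Individual::fitness).reversed());
                    // the bottom half is replaced by mutated copies of the top half
                    for (int i = pop.length / 2; i < pop.length; i++)
                        pop[i] = mutate(pop[i % (pop.length / 2)]);
                }
                System.out.println("best fitness: " + pop[0].fitness());
            }
        }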

    Read the article

  • Error when restarting computer after successful Ubuntu 12.10 installation

    - by Amy
I was having a lot of sudden issues with my computer (slow on start up, progressing quickly to not being able to start at all... even in safe mode). Worked on several different things. Finally tried formatting the hd and then installing Windows 7. Got errors, so I said screw it and tried Ubuntu. Downloaded it (via my laptop), burned it to DVD, and tried a disc boot on my desktop. Went through the whole installation; it said it was successful and progressed to the restart. It stopped at a screen with an error (0x1b5a6) from 'hd0'. I don't remember the error code verbatim. Now when I try restarting it I get to the initial page, hit 'enter' on Ubuntu... and it just sits at a blank screen. Eventually it runs through a screen with a bunch of code and then just sits there. I can't type anything, and enter does nothing. Some things on the screen are: Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block (0,0) I don't know what else to do... Restarting does not work. I have been working on fixing my computer for almost 2 days straight...
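
    That kernel panic usually means the bootloader or initramfs on the disk is broken rather than the installation as a whole. A hedged recovery sketch from the live DVD's "Try Ubuntu" mode, assuming the root partition is /dev/sda1 (verify with sudo fdisk -l first):

        sudo mount /dev/sda1 /mnt
        for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done
        sudo chroot /mnt update-initramfs -u    # rebuild the initramfs the kernel cannot find
        sudo chroot /mnt grub-install /dev/sda  # reinstall the bootloader to the MBR
        sudo chroot /mnt update-grub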

    Read the article

  • How-To Backup, Swap, and Update Your Wii Game Saves

    - by Jason Fitzpatrick
Whether you want to backup your game saves because you've worked so hard on them, or you want to import game saves precisely so you don't have to work so hard, we've got you covered. Image adapted from icon set by GasClown. There are a multitude of reasons you might want to export and import game saves from your Wii, including: saving the progress on your favorite games before sending in your Wii for service, copying the progress to a friend's or your secondary Wii, and importing saved games from the web or your friend's Wii so that you don't have to bust your ass to unlock all the specialty items yourself. (Here's looking at you, Mario Kart and House of the Dead: Overkill.)

    Read the article

  • Discovering path through unknown territory

    - by TravisG
Let's say all the AI knows about its surroundings is a pixel-map that it has which clearly shows walkable terrain and obstacles. I want the AI to be able to traverse this terrain until it finds an exit point. There are some restrictions: There is always a way to the exit in the entire map that the AI walks around in, but there may be dead ends. The path to the exit is always pretty random, meaning that if you stand at a crossroads, nothing indicates which direction would be the right one to go. It doesn't matter if the AI reaches a dead end, but it has to be able to walk back out of it to a previously uninspected location and continue its search there. Initially, the AI starts out knowing only the starting area of the whole map. As it walks around, new points will be added to the pixel-map corresponding to the AI's range of sight (think of it like the AI is clearing the fog of war). The problem is in 2D space. All I have is the pixel map. There are no paths in the pixel map which are "too narrow"; the AI fits through everything. It shouldn't be a brute force solution. E.g. it would be possible to simply find a path to each pixel in the pixel map that is yet undiscovered (with A*, for example), which will lead to the AI discovering new pixels. This could be repeated until the end is reached. The path doesn't have to be the shortest path (this is impossible without knowing the entire map beforehand), but when movements within the visible area are calculated, the shortest and, from a human standpoint, most logical path should be taken (e.g. if you can see a way out of your room into a hallway, you would obviously go there instead of exploring the corner of your current room). What kind of approaches to solve this problem are there?
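
    The usual answer is frontier-based exploration: repeatedly path to the nearest known-walkable cell that borders unexplored pixels, which reveals more map until the exit shows up, and backs out of dead ends for free. A sketch of the frontier search (the grid encoding is an assumption):

        import java.util.ArrayDeque;
        import java.util.Queue;

        class Explorer {
            static final int UNKNOWN = 0, OPEN = 1, WALL = 2;

            // BFS from the AI's position over known-open cells; the first cell
            // popped that touches UNKNOWN is the nearest frontier -- go there next.
            static int[] nearestFrontier(int[][] map, int sx, int sy) {
                boolean[][] seen = new boolean[map.length][map[0].length];
                Queue<int[]> q = new ArrayDeque<>();
                q.add(new int[]{sx, sy});
                seen[sx][sy] = true;
                int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
                while (!q.isEmpty()) {
                    int[] c = q.poll();
                    for (int[] d : dirs) {
                        int nx = c[0] + d[0], ny = c[1] + d[1];
                        if (nx < 0 || ny < 0 || nx >= map.length || ny >= map[0].length) continue;
                        if (map[nx][ny] == UNKNOWN) return c;  // c borders the fog
                        if (map[nx][ny] == OPEN && !seen[nx][ny]) {
                            seen[nx][ny] = true;
                            q.add(new int[]{nx, ny});
                        }
                    }
                }
                return null; // no frontier left: the known map is fully explored
            }
        }

    BFS guarantees the nearest frontier cell; A* over the known cells can then produce the short, human-looking path to it, and the loop repeats after each batch of fog is cleared.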

    Read the article

  • Frequent change of file-system to read-only

    - by Anuj
I am using Ubuntu 12.04 on my relatively new laptop. For the initial few days everything worked fine, but then suddenly I started getting a strange problem: every 2-3 days, the file-system was becoming read-only. I was unable to save/download anything on the installation drive, and the system hung if I attempted to do so, after which I needed to force a restart. I had to run fsck in repair mode to get it fixed (temporarily). There I used to get the following messages: "Inodes that were part of a corrupted orphan linked list found. UNEXPECTED INCONSISTENCY: RUN FSCK MANUALLY. fsck / [886] terminated with status 4. Filesystem has errors: /" Then it stopped there and I had to restart again, after which it asked whether I wanted to repair or not; then the system ran properly. After a day or 2, I used to face exactly the same problem again. The problem became more frequent in the last few days, and even running fsck and restarting was not solving it. I re-installed the OS a couple of days back, but even after that the problem persists. Though it is not that frequent now, still at times the file-system becomes read-only and the system stops behaving normally, and I have to run fsck in repair mode, which makes the whole thing normal again. Yes, I keep my laptop switched on for long hours and it does get heated up. Please help. Thanks, Anuj
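
    Recurring ext4 corruption of this kind is often failing hardware rather than a software fault, so a sensible next step is to check the disk's own health before trusting another fsck. A sketch (the device name is an assumption; check with sudo fdisk -l):

        sudo apt-get install smartmontools
        sudo smartctl -H /dev/sda    # overall SMART health verdict
        sudo smartctl -a /dev/sda | grep -i -E 'reallocated|pending|uncorrect'
        # and catch the moment the filesystem flips read-only in the kernel log:
        dmesg | grep -i -E 'ext4|i/o error|remount'

    Non-zero reallocated or pending sector counts, or I/O errors in dmesg right before the remount, would point at the drive itself.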

    Read the article

  • Google Glasses–A new world in front of your eyes

    - by Gopinath
Google is getting into a whole new business that would help us see the world in a new dimension and free us from all the gadgets we carry today. Google Glasses is a wearable tiny computer that brings information in front of your eyes and lets you interact with it using voice commands. It's a kind of glasses (spectacles) that you can wear to see and interact with the world in a new way. With Google Glasses, for example, you can look at a beautiful location and instruct it by voice to capture a photograph and share it with your friends. You don't need a camera to capture the beautiful scene, you don't need an app to upload and share it. All you need is Google Glasses. By the way, these glasses are not heavy head-mounted stuff; they are very tiny and look beautiful too. Check out the embedded video demo released by Google to see them in action, and for sure you are going to be amazed. Last year in December, 9to5Google posted details about this secret project, and the NY Times says that these glasses would be available to everyone at an affordable cost, anywhere between $250 and $600. It is powered by Android OS and contains a GPS, motion sensor, camera, and voice input & output devices. Check out Project Glass for more details.

    Read the article

  • The How-To Geek Video Guide to Using Windows 7 Speech Recognition

    - by YatriTrivedi
Ever get the desire to control your computer, Star Trek-style? With Windows 7's Speech Recognition, it's easier than you might think. Microsoft has been working on its voice command steadily over the years. XP introduced it, Vista smoothed it, and 7 has it polished. It's strangely not advertised as a feature, even though other voice command and speech recognition programs are hundreds of dollars. It may not be as perfect as some of them, but there's definitely something amazing about vocally telling your computer to do things and it actually working.

    Read the article

  • Developing an internet-enabled application as a Kiosk on Windows 7

    - by maple_shaft
I am finalizing development of a desktop Java application that communicates with an outside web server, and now I need to start seriously considering deployment. This application will run on a large touchscreen all-in-one workstation running Windows 7. It will be located in a public area and thus must be LOCKED DOWN, Hannibal Lecter style. Early in the project nobody really concerned themselves with this fact, just assuming that we could buy some magical software for Windows 7 that would automatically take care of all this; however, I am finding now that this looks to be a LOT more complicated than my manager ever thought. I need to:
    - Lock down the standard hot-keys (ALT+TAB, CTRL+ALT+DEL, etc...)
    - Prevent the user from opening ANY programs other than the kiosk application and its spawned executables
    - Prevent the user from closing the application
    - Start the kiosk application on startup (this can be done without kiosk software)
    - Auto-login to Windows on reboot (Windows Updates, power failure, bratty kid pressing the power button, etc...)
    - Provide an administrator passcode escape sequence for routine maintenance by desktop support professionals
    To my dismay, I am having a really hard time finding software that contains the whole package, and am finding numerous swaths of competing information on the best way to do this. I am not necessarily looking for free or open source software and am willing to pay for software that can help me achieve this. Have any of you ever written kiosk software before, and if so, what approaches have you taken to do this?
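
    Two of those bullets (shell replacement and auto-logon) are covered by stock Windows 7 via the well-known Winlogon registry values; the hot-key lockdown and escape sequence still need group policy or dedicated kiosk software. A sketch, with the kiosk path, user name and password as placeholders:

        Windows Registry Editor Version 5.00

        ; Replace Explorer with the kiosk app for the kiosk user only:
        ; import this under that user's account (HKCU).
        [HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Winlogon]
        "Shell"="C:\\kiosk\\kiosk.exe"

        ; Auto-logon at boot. The plaintext password here is a known weakness;
        ; the Sysinternals Autologon tool stores it encrypted instead.
        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
        "AutoAdminLogon"="1"
        "DefaultUserName"="kiosk"
        "DefaultPassword"="kioskpassword"

    Note that Ctrl+Alt+Del itself cannot be swallowed by an application; its options (Task Manager, Log off, etc.) have to be stripped via group policy.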

    Read the article

  • Optimizing MySQL, Improving Performance of Database Servers

    - by Antoinette O'Sullivan
Optimization involves improving the performance of a database server and of the queries that run against it. Optimization reduces query execution time, and optimized queries benefit everyone that uses the server. When the server runs more smoothly and processes more queries with fewer resources, it performs better as a whole. To learn more about how a MySQL developer can make a difference with optimization, take the MySQL for Developers training course. This 5-day instructor-led course is available as:
    Live-Virtual Event: Attend a live class from your own desk - no travel required. Choose from a selection of events on the schedule to suit different timezones.
    In-Class Event: Travel to an education center to attend an event. Below is a selection of the events on the schedule:
    Location - Date - Delivery Language
    Vienna, Austria - 17 November 2014 - German
    Brussels, Belgium - 8 December 2014 - English
    Sao Paulo, Brazil - 14 July 2014 - Brazilian Portuguese
    London, England - 29 September 2014 - English
    Belfast, Ireland - 6 October 2014 - English
    Dublin, Ireland - 27 October 2014 - English
    Milan, Italy - 10 November 2014 - Italian
    Rome, Italy - 21 July 2014 - Italian
    Nairobi, Kenya - 14 July 2014 - English
    Petaling Jaya, Malaysia - 25 August 2014 - English
    Utrecht, Netherlands - 21 July 2014 - English
    Makati City, Philippines - 29 September 2014 - English
    Warsaw, Poland - 25 August 2014 - Polish
    Lisbon, Portugal - 13 October 2014 - European Portuguese
    Porto, Portugal - 13 October 2014 - European Portuguese
    Barcelona, Spain - 7 July 2014 - Spanish
    Madrid, Spain - 3 November 2014 - Spanish
    Valencia, Spain - 24 November 2014 - Spanish
    Basel, Switzerland - 4 August 2014 - German
    Bern, Switzerland - 4 August 2014 - German
    Zurich, Switzerland - 4 August 2014 - German
    The MySQL for Developers course helps prepare you for the MySQL 5.6 Developer OCP certification exam. To register for an event, request an additional event or learn more about the authentic MySQL curriculum, go to http://education.oracle.com/mysql.
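
    A small taste of what such optimization work looks like in practice: use EXPLAIN to spot a full table scan, add an index on the filtered column, and confirm the plan changes (the table and column names are invented for the example):

        EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
        -- type: ALL, i.e. a full table scan over every row
        CREATE INDEX idx_orders_customer ON orders (customer_id);
        EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
        -- type: ref, key: idx_orders_customer -- only matching rows are read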

    Read the article

  • Blank screen after Switch User or Resume

    - by matt wilkie
About half the time when I switch users or resume from standby, the screen goes blank (black). If I work the cursor keys I can hear the system bell when it gets to the end of the user list. I can also successfully log in, going from memory, but the screen stays black. Sometimes closing and re-opening the lid will light up the screen again. Pressing the special function key to enable/disable the external monitor connection has no effect ([Fn]-[F5], [Fn]-[F6]). If none of the previous work, I need to put the computer into hibernation or a full power off to restore screen function. If I watch closely when switching users, I think I can see the screen initially start to light up and then quickly fade to black. The computer is an Acer Aspire 3500, model ZL6, running Ubuntu 10.10 installed 2 days ago. No proprietary drivers are in use. I'll provide a list of hardware details as soon as I can figure out how to generate that (didn't there used to be an entry for hardware details under the System menu?). Possibly related questions: No resume after Hibernate or Standby; When I resume from suspension - the screen is blank; Switch user fails to complete successfully. For what it's worth, blank after resume also used to happen occasionally when the laptop was running XP Home, but nowhere near as often, perhaps 6 or 8 times a year. UPDATE: I found System > Administration > System Testing and ran the Monitor test. It went very very dark, but the window elements could be discerned, and the whole screen flashed (from very very dark to black). On the third repeat of that same test the screen went to full black and stayed there. Moving the mouse, via touchpad, or touching keys did not wake it up again. I had to close the lid, put the computer into hibernate, and press the power button to restore it. UPDATE2: output of lshw: http://pastebin.com/q7n8676r, lspci: http://pastebin.com/6ujzVK4r UPDATE3: sometimes I can restore the screen by flipping to console 1 with Ctrl-Alt-F1 and then back to graphical with Ctrl-Alt-F7.

    Read the article

  • Code maintenance: keeping a bad pattern when extending new code for being consistent or not ?

    - by Guillaume
I have to extend an existing module of a project. I don't like the way it has been done (lots of anti-patterns involved, like copy/pasted code). I don't want to perform a complete refactor. Should I: create new methods using the existing convention, even if it feels wrong, to avoid confusion for the next maintainer and stay consistent with the code base? Or try to use what I feel is better, even if it introduces another pattern into the code? Precision, edited after first answers: The existing code is not a mess. It is easy to follow and understand. BUT it introduces lots of boilerplate code that could be avoided with good design (though the resulting code might become harder to follow). In my current case it's a good old JDBC (Spring template on board) DAO module, but I have already encountered this dilemma before and I'm seeking other devs' feedback. I don't want to refactor because I don't have time. And even with time, it would be hard to justify that a whole, perfectly working module needs refactoring: the refactoring cost would be heavier than its benefits. Remember: the code is not messy or over-complex. I cannot just extract a few methods there and introduce an abstract class here. It is more a flaw in the design (the result of extreme 'Keep It Simple, Stupid', I think). So the question can also be asked like this: as a developer, do you prefer to maintain easy, stupid, boring code, OR to have some helpers that will do the stupid boring code in your place? The downside of the latter being that you'll have to learn some stuff, and maybe you will have to maintain the easy stupid boring code too until a full refactoring is done.

    Read the article

  • 3ds Max error dialog: "Instancing not supported for this action"

    - by monsto
    "Instancing not supported for this action” is the dialog I get. My favorite part is that, according to google and yahoo, apparently i am the only person in the history of mankind to experience these words together in this order, let along get this message from Max. Thanks, autodesk, for putting this dialog in special for me! So I’ve created my model (nws) and was setting up a Skin Wrap. Selected "Face Deformation", added the base-skin for weight, checked “weight all points”. . . clicked “convert to skin” and got that dialog. My model doesn’t have a whole lot of elements to it, I had a left and right appendage that came from a base model (skyrim). so, i did a clonecopy of all 3 of my elements, just to be sure nothing was instanced… and VOILA! Same error message. the only other elements are an imported NIF mesh and skeleton. Any idea where this is coming from or how I can make it go away so that I can export my mesh?

    Read the article
