Search Results

Search found 34971 results on 1399 pages for 'st even'.

Page 128/1399 | < Previous Page | 124 125 126 127 128 129 130 131 132 133 134 135  | Next Page >

  • Connecting a Samsung Galaxy S3 in Ubuntu 13.04

    - by Squishy
    In 13.04, whenever I connect an Android device, one of three things happens:
    1. It mounts successfully (maybe one attempt in three).
    2. It fails to mount with the error: "Oops! Something went wrong. Unhandled error message: Unable to open MTP device".
    3. Occasionally: "Unhandled error message: No such interface `org.gtk.vfs.Mount' on object at path /org/gtk/vfs/mount/1".
    Regardless of activity (even when successfully mounted), it continuously spams the following error: "Unable to mount SAMSUNG_Android. Unable to open MTP Device '[usb:003,00x]'", where x seems to be an arbitrary number below 10 that counts up with each new error message until the device is unplugged. I've also just noticed that even if it mounts successfully, it unmounts after about 30 seconds and starts spamming the error message above. The Android device is unlocked, always on and fully charged. ADB seems to function normally. Any suggestions? Further info: this happens on both a stock Samsung S3 and an Xperia Arc S running a custom AOSP-based ROM. I've also tried the steps outlined in this Stack Overflow answer, but the problem persists.
    UPDATE: After doing a dist-upgrade (May 8th 2013), the Xperia Arc S on the AOSP ROM now mounts and behaves normally. The S3, however, still behaves as described above.
    UPDATE: After careful observation, ADB does not, in fact, behave normally. If the error message above appears while sending an app to the device, the attempt is aborted with an error message saying that the device is unavailable.
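    One hedged workaround, assuming the misbehaving part is the gvfs MTP backend rather than the phone itself (package availability, the mount point and the gsettings tweak below are assumptions, not taken from the question), is to bypass gvfs and mount the device manually with a FUSE MTP client:
        # Install an alternative MTP stack and check the phone is seen at the libmtp level
        sudo apt-get install mtp-tools jmtpfs
        mtp-detect | head -n 20

        # Optionally stop gvfs from grabbing the device while testing
        gsettings set org.gnome.desktop.media-handling automount false

        # Mount manually, copy files, then unmount
        mkdir -p ~/galaxy-s3
        jmtpfs ~/galaxy-s3
        fusermount -u ~/galaxy-s3
    If jmtpfs mounts the phone cleanly and stays mounted, that points at the gvfs MTP layer in 13.04 rather than at the device or cable.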

    Read the article

  • Ubuntu 12.04 LTS, Dell D410 output to External Monitor/TV with docking

    - by Gck
    I am not able to see the video output on my external monitor/TV after installing 12.04 LTS. I had the same issue with the 11.x versions too; it does not happen on 10.04 LTS. This is my setup:
    - Dell Latitude D410 with an Intel Mobile 915GM/910GML Express controller, using the Dell docking station
    - Connecting to my 42-inch TV with a DVI (from the dock) to HDMI (on the TV) cable
    If I close the lid, sometimes I am able to see the video output on the TV, but the resolution makes the fonts so small I cannot even read them, and when I click the Displays option to change the resolution, the output goes off completely and is shown on the laptop screen instead. One time I got the output on both screens, but I have not been able to replicate that even after trying the Fn+F8 key combination on my laptop. Earlier, when I ran the 11.x versions of Ubuntu, I was able to get the output on both screens (basically mirrored output) with a large-font resolution on my TV using the monitor test tool mentioned here: http://www.bigfatostrich.com/2011/05/solved-ubuntu-11-04-broken-external-display/ The problem with this approach is that I have to do it every time I restart the machine, and moreover the video output is present on both screens (mirrored), which is of no use. As I stated before, this does not happen with 10.04 LTS: with 10.04 LTS my laptop screen turns off automatically and the output is shown on the TV/external display. Can anyone share any options that I can try? How can we achieve the 10.04 LTS behaviour with respect to the external display on 12.04 LTS? Thank you very much for your help in advance.
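    A hedged starting point, assuming the dock's DVI port shows up to the Intel driver as its own output (the output names below are assumptions; run plain xrandr first to see the real ones), is to drive only the external display with xrandr and script whatever combination works:
        # List the outputs and modes the driver actually exposes
        xrandr

        # Assumed names: LVDS1 = laptop panel, DVI1 (or VGA1/TMDS-1) = docking-station output.
        # Send the desktop to the TV only, at a readable resolution, and switch the panel off:
        xrandr --output DVI1 --mode 1280x720 --output LVDS1 --off

        # Revert to the laptop panel
        xrandr --output LVDS1 --auto --output DVI1 --off
    Putting the working command in a small startup script (e.g. in ~/.xprofile) would avoid redoing it after every reboot.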

    Read the article

  • Unit Tests as a learning tool - a good idea?

    - by Ekkehard.Horner
    I'm interested in ways and means of learning (a) programming language(s) efficiently. I believe that using Unit Test concepts and infrastructure early in that process is a good thing, even better than starting with "Hello world".
    Why: To write a decent program, even for a toy/restricted problem in a new language, you'll have to master many heterogeneous concepts (control flow, variables, IO, ...), and you are tempted to glance over details just to get your program 'to work'. Putting (your understanding of) the facts about the new language into assertions with good descriptions (= success messages) enforces thinking things through, clearness and precision. Grouping topics and adding assertions to such groups is much easier than incorporating features from chapter 2 of your "Learning X" book into your chapter 1 program.
    Why not: 'Real' Unit Tests are meant to output "1234 tests ok; 1 failure: saveWorld() chokes on negative input"; 'didactic' Unit Tests should output relevant facts about the new language, like:
        perl6 10-string.t
        # ### p5chop ...
        ok 13 - p5chop( "cbä" ) returns "ä"
        ok 14 - after that, victim is changed to "cb"
        # ### (p6) chop ...
        ok 27 - (p6) chop( "cbä" ) returns chopped copy: "cb"
        ok 18 - after that, victim is unchanged: "cbä"
        # ### chomp ...
    So (mis?)using Unit Tests may be counterproductive – practicing actions while learning that you wouldn't use professionally.
    How: Writing 'didactic' Unit Tests in languages with lightweight testing systems (Perl 5/6) is easy; (mis?)using more elaborate systems (JUnit, CppUnit) may not be worth the effort, or may not be suitable for a person just starting with a new language.
    So: Is using Unit Tests as a learning tool a bad idea? Can the Unit Test tool(s) of your favourite language(s) be used didactically? Should implementation details (eventually) be discussed here or over at stackoverflow.com?

    Read the article

  • Won't boot after installing Ubuntu 12.04 successfully

    - by Matt
    I installed 12.04 successfully and rebooted (I took out my installation CD), and selected the newly installed Linux partition to boot from rEFIt. Then it just comes up with this error message: "Error loading operating system", which could not be more vague. Take that back – I guess it could say just "error." I don't even get to a boot prompt, which limits what I can do; I cannot boot into rescue mode. I tried boot-repair, but it took more than 24 hours to check the system configuration, so I gave up on that.
    I'm running a Mac Mini whose main OS is Mac OS X 10.5.8. I had an alternate OS, Windows XP, installed, which was virtually destroyed by this Linux installation. I sacrificed my working, speedy Windows partition for something that won't even boot up. What was I thinking. My Mac partition is slow as crap.
    I've tried installing 12.04 many times with two different disks. The first time I had one partition for Linux, then I had 2 (swap + main), then 3 (swap, main and BIOS), then 4, which is what I have now (swap, main, BIOS, and boot/grub). The only way I could get through the install without GRUB giving up was to create a separate partition for it – which was pointless, because although it did install successfully, it still doesn't boot up at all. Could rEFIt be booting off the BIOS partition or one of the other partitions? Because if that's the case there is no alternative, since the Mac itself without rEFIt refuses to recognize a Linux ext4 (or 2 or 3) format partition. Apple always has to make everything so difficult. If I'm not mistaken, rEFIt is the only application of its kind for the Mac. I can boot off the CD back to the install/try screen. This is extremely upsetting – can you guys help? Please?
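    One hedged thing to try from the live CD, assuming rEFIt is chainloading the boot sector of the Ubuntu partition in BIOS/CSM mode (the /dev/sda3 device name below is an assumption; check with sudo fdisk -l first), is to reinstall GRUB into that partition's boot sector rather than the disk's MBR, then resync the GPT and MBR tables:
        # Identify the Ubuntu root partition (assumed here to be /dev/sda3)
        sudo fdisk -l

        # Put GRUB in the partition boot sector that rEFIt chainloads
        sudo mount /dev/sda3 /mnt
        sudo grub-install --boot-directory=/mnt/boot --force /dev/sda3

        # Resync the hybrid GPT/MBR from rEFIt's Partitioning Tool, or with gptsync if installed
        sudo gptsync /dev/sda
    This is only a sketch; whether it applies depends on how rEFIt was set up to find the Linux partition in the first place.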

    Read the article

  • What is Ubuntu's Definition of a "Registered Application"?

    - by Tom
    I've run into this a few times when installing apps from source, and during the occasional hack with update-alternatives. So far it's only been a minor annoyance (i.e. it hasn't got in the way of the end goal), but it's now a frustration as it's pointing to a hole in my knowledge base. So, when I get a message that 'foo' is "not a registered application" (or I can't use foo's default icon because Ubuntu has no knowledge of 'foo'):
    (1) What defines a "registered application"?
    (2) How can I define an application installed from source (and likely residing in $HOME/bin/app-name) such that it packs the same functionality as a package installed from a .deb (if the solution is not self-evident from answer 1)?
    Example: I download and unpack daily dev builds of sublime-text-2 to /home/tom/bin/sublime-text-2. I've created a *.desktop file with appropriate shortcuts, etc., but the icon for Sublime cannot be displayed in any launcher, even if I provide a full pathname to the Icon option. The solution is to install a second instance of Sublime from a deb package. When I install sublime-text-2 from a .deb package, it installs under /usr/bin and /usr/lib, the installed .desktop file is stored under /usr/share/applications, and the relevant line reads: icon=sublime_text. Where's the linkage I'm missing? Somehow Ubuntu knows how to extract the icon from sublime_text in the latter case, but not in the former (again, even with a full path provided).
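    For what it's worth, here is a hedged sketch of registering a home-directory install with the desktop environment (the file and icon paths are illustrative assumptions): install the icon into the shared icon theme under a plain name, install the hand-written .desktop file for the current user, and refresh the caches so launchers can resolve Icon= by name.
        # Register the icon under a bare name so "Icon=sublime-text-2" resolves
        xdg-icon-resource install --novendor --size 128 \
            ~/bin/sublime-text-2/Icon/128x128/sublime_text.png sublime-text-2

        # Validate and install the .desktop file for this user only
        desktop-file-validate ~/sublime-text-2.desktop
        desktop-file-install --dir="$HOME/.local/share/applications" ~/sublime-text-2.desktop

        # Refresh the desktop database so launchers pick up the new entry
        update-desktop-database ~/.local/share/applications
    The rough idea is that "registered" boils down to a valid .desktop file in an applications directory on the search path plus an icon that can be resolved by name from an icon theme, which is what the .deb drops into /usr/share for you.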

    Read the article

  • Ubuntu and racadm

    - by lmqcn
    I recently purchased a used PowerEdge 1850 server and it came with a DRAC card. After wiping the HDD and installing Ubuntu Server 12.04.3 LTS amd64 on it, I am now trying to gain access to the DRAC, which I believe is version 4. I have properly configured the DRAC to use its own IP on my LAN, and when I point my browser to that IP address I am greeted with the DRAC login page (it has the Dell logo and everything). However, the credentials root/calvin were denied, so I think the previous owners had set their own password. After doing some reading, it appears that I can reset the credentials to the default using:
        racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 1 newpassword
    but upon entering the command, I get this error:
        bash: /usr/sbin/racadm: No such file or directory
    This holds true even if I run sudo su prior to running the racadm command. If, however, I run
        sudo racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 1 newpassword
    there are no errors. Yet when I try to log into the DRAC via the web interface using the credentials root/newpassword, I am still not granted access. I installed the Dell utilities via the guide at https://wiki.ubuntu.com/HardwareSupportMachinesServersDellNotes. I first tried to install the 64-bit version that is in the Dell repositories, but after that was unsuccessful I just followed the guide verbatim. No errors were produced in either case. I even followed the information at the bottom of the guide by executing
        sudo pppd /dev/ttyS1 1382400 crtscts noipdefault noauth lock persist connect 'chat -v "" CLIENT CLIENTSERVER "\\c"'
    but obviously replacing /dev/ttyS1 with the correct information for my system.
        ls -l /usr/sbin/ | grep racadm
    yields
        -rwxr-xr-x 1 root root 87930 Sep 16 04:03 racadm
    I have tried these credentials after each attempt at changing the password: root/calvin, root/newpassword, admin/calvin, admin/newpassword. All have been unsuccessful. What is the next course of action I should take?
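    A hedged next step (an assumption on my part, not something from the question): on many DRAC generations the built-in root account sits at user index 2 rather than index 1, so the password change may simply be landing in the wrong slot. Something like the following would confirm which index actually holds root before changing it:
        # See which user name is stored at each index (root is often index 2, not 1)
        sudo racadm getconfig -g cfgUserAdmin -i 1
        sudo racadm getconfig -g cfgUserAdmin -i 2

        # Once the index whose cfgUserAdminUserName is "root" is known, set the password there
        sudo racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 2 newpassword

        # A reset of the card is sometimes needed before the web UI accepts the change
        sudo racadm racreset
    Comparing the getconfig output before and after the change is a cheap way to confirm the new password actually landed on the card.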

    Read the article

  • Windows Azure Virtual Machines - Make Sure You Follow the Documentation

    - by BuckWoody
    To create a Windows Azure Infrastructure-as-a-Service Virtual Machine you have several options. You can simply select an image from a "Gallery", which includes Windows or Linux operating systems, or even a Windows Server with pre-installed software like SQL Server. One of the advantages of Windows Azure Virtual Machines is that they are stored in a standard Hyper-V format – with the base hard disk as a VHD. That means you can move a Virtual Machine from on-premises to Windows Azure, and then move it back again. You can even use a simple series of PowerShell scripts to do the move, or automate it with other methods. And this then leads to another very interesting option for deploying systems: you can create a server VHD, configure it with the software you want, and then run the "SYSPREP" process on it. SYSPREP is a Windows utility that essentially strips the identity from a system; when you restart that system it asks for a few details, such as what you want to call it and so on. By doing this, you can essentially create your own gallery of systems, either for testing, development servers, demo systems and more. You can learn more about how to do that here: http://msdn.microsoft.com/en-us/library/windowsazure/gg465407.aspx
    But there is a small issue you can run into that I wanted to make you aware of. Whenever you deploy a system to Windows Azure Virtual Machines, you must meet certain password complexity requirements. However, when you build the machine locally and SYSPREP it, you might not choose a strong password for the account you use to Remote Desktop to the machine. In that case, you might not be able to reach the system after you deploy it. Once again, the key here is reading through the instructions before you start. Check out the link I showed above, and this link: http://technet.microsoft.com/en-us/library/cc264456.aspx to make sure you understand what you want to deploy.
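    As an aside, and purely as a hedged sketch rather than part of the original post: the generalize pass is normally run from an elevated command prompt inside the VM before the VHD is uploaded, along these lines.
        %WINDIR%\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown
    The /generalize switch strips the machine identity, /oobe forces the out-of-box setup questions on the next boot, and /shutdown powers the VM off so the VHD can be captured.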

    Read the article

  • Difference between Cassini [IIS Express] and VS Development Server or Expression Web

    - by anirudha
    An MVC3 project can be run from within Expression Web as well as Visual Studio when it is opened as a website rather than as a project; they work the same. Even opening a BlogEngine.NET project in VS takes a while when you have many themes, but Expression Web debugs it in a second – because the theme design does not matter for the code. Expression Web is good because it saves the time spent compiling the code, even when the changes we make are only small design changes and nothing in the back-end code.
    I found a little difference between Cassini and the VS development server: if an image is referenced the wrong way, like <img src="//img.png"/> instead of <img src="/img.png"/> (the mistake being // instead of /), it does not work when debugging in Expression Web or Visual Studio, but in Cassini it works fine.
    I also found that debugging BlogEngine.NET in Expression Web is a great thing, because VS takes about a minute to debug the first time you try, and Expression Web saves time when we design themes in it – which is a good option, because the web is also made for design.
    So if you want to debug an application faster, use Cassini; but Expression Web debugging is a good option when projects take a long time to debug in Visual Studio and EW debugs them in seconds.

    Read the article

  • How to make the run button run the project, not the file, in Eclipse

    - by Roy T.
    I'm using the Spring IDE, a variant of Eclipse, to create a Java project. One big irritation I have is that when I press the run button, Eclipse tries to run the current file, which usually fails because it doesn't have a main method. I've set up run configurations in the hope that this would make the play button default to the run configuration instead of the current file, but that doesn't work either. Now, to run my application correctly I have to press the little arrow next to play, select my favorite run configuration, and then it works. This is only two extra clicks, but it's tedious; the button is small and I feel like I shouldn't have to perform these extra steps. I mean, what is the point of run configurations and projects if it still tries to run a file by default? Even more preferably, I wouldn't want to touch the mouse at all and would just press Ctrl+F11, but this has the same behavior. All of the above applies to debugging as well, by the way. So my question is this: how do I make the run and debug buttons (and their shortcut keys) default to the project's run configuration instead of trying (and failing) to run only the current file, much like it is in Visual Studio and other IDEs?

    Read the article

  • Demo on Data Guard Protection From Lost-Write Corruption

    - by Rene Kundersma
    Today I received the news that a new demo has been made available on OTN for Data Guard protection from lost-write corruption. Since this is a typical MAA solution and a very nice demo, I decided to mention this great feature in this blog as well, even though it has been a recommended best practice for some time. When a lost write occurs, an I/O subsystem acknowledges the completion of the block write even though the write I/O did not occur in the persistent storage. On a subsequent block read on the primary database, the I/O subsystem returns the stale version of the data block, which might be used to update other blocks of the database, thereby corrupting it. Lost writes can occur after an OS or storage device driver failure, faulty host bus adapters, disk controller failures and volume manager errors. In the demo, a data block lost write occurs when an I/O subsystem acknowledges the completion of the block write while in fact the write did not occur in the persistent storage. When a primary database lost write corruption is detected by a Data Guard physical standby database, Redo Apply (MRP) will stop and the standby will signal an ORA-752 error to explicitly indicate that a primary lost write has occurred (preventing the corruption from spreading to the standby database).
    Links:
    - MOS note 1302539.1, "Best Practices for Corruption Detection, Prevention, and Automatic Repair - in a Data Guard Configuration"
    - Demo
    - MAA Best Practices
    Rene Kundersma
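    As a hedged aside not spelled out in the post: this kind of detection relies on the DB_LOST_WRITE_PROTECT initialization parameter being enabled on both the primary and the standby. A minimal sketch (spfile assumed, run on each database) looks like:
        # Enable lost write detection; valid values are NONE, TYPICAL and FULL
        echo "ALTER SYSTEM SET DB_LOST_WRITE_PROTECT = TYPICAL SCOPE=BOTH;" | sqlplus -s / as sysdba
    With that in place, block read versions are recorded in the redo stream so the standby's Redo Apply can compare them and raise the ORA-752 described above.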

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the many Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release – all of that information, and more, is available in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results.
    Figure: The only thing missing is Build
    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play in knitting this together.
    Figure: The relationships required to make this work can get a little confusing
    If Build is added to this to relate Work Items to Builds, then with knowledge of which builds are in which environments you can easily identify what is contained within a Release.
    Figure: How are things progressing
    Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other…
    Requirements – The root of all knowledge
    Requirements are the things that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:
    Figure: If the centre of the world was a requirement
    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.
    Figure: Who edited the Requirement and when
    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.
    Figure: Some magic required, but result still achieved
    As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an "ASOF" clause at the end to query historical data in the operational store for TFS.
        select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]
    Figure: Work Item Query Language (WIQL) format
    In order to achieve this you do need to save the query as a *.wiql file on your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
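    As a concrete, purely illustrative instance of that format (the field names are assumed from the standard templates and the date is arbitrary), a query for the state of all Requirements as they stood at the start of 2010 might read:
        SELECT [System.Id], [System.Title], [System.State]
        FROM WorkItems
        WHERE [System.WorkItemType] = 'Requirement'
        ORDER BY [System.Id]
        ASOF '2010-01-01'
    Save that as a .wiql file, import it, and the same query can be re-run against any other point in time by changing the ASOF date.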
    Figure: Saving Queries locally can be useful
    All of these audit features are available throughout the Work Item Tracking (WIT) system within TFS.
    Tasks – Where the real work gets done
    Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.
    Figure: The Task Work Item Type has its own relationships
    Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be a child of a Requirement Work Item Type.
    Figure: Tasks are related to the Requirement
    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.
    Figure: The Task Work Item Type has a narrower purpose
    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.
    Figure: Tasks are associated with Changesets
    Changesets – Who wrote this crap
    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.
    Figure: Changesets are linked by Tasks and Builds
    Figure: Changesets tell us what happened to the files in Version Control
    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to audit all the way down to the individual change level.
    Figure: On check-in you can resolve a Task, which automatically associates it
    Because of this we can view the history of any file within the system and see how many changes have been made and which Changesets they belong to.
    Figure: Changes are tracked at the file level
    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of auditing changes to the system.
    Figure: Annotate shows the lines
    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?
    Branches – And why we need them
    Branches are a really powerful tool for development and release management, but they are most important for audits.
    Figure: One way to audit releases
    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build is created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between that time and its creation. Another, better method is to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together
    Builds are the glue that helps us enable the next level of traceability by tying everything together.
    Figure: The dashed pieces are not out of the box but can be enabled
    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.
    Figure: The folder identifies what changes are included in the build
    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the check-ins that have occurred since the last Build.
    Figure: What changes have been made since the last successful Build
    It will then use that information to identify the Work Items that are associated with all of those Changesets; the Changesets are associated with the Build, and the "Integrated In" field of those Work Items is changed.
    Figure: Find all of the Work Items to associate with
    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.
    Figure: Now we know which Work Items were completed in a build
    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original Requirements?
    Test Cases – How we know we are done
    The only way we can know whether a Requirement has been completed to the required specification is to test that Requirement. In TFS there is a Work Item Type called a Test Case. Test Cases enable two scenarios. The first is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business on a set of goals that must be met for a Requirement to be accepted by them, it both makes it difficult for them to reject a Requirement when it passes all of the tests and provides a level of traceability and validation, for audit, that a feature has been built and tested to order.
    Figure: You can have many Acceptance Criteria for a single Requirement
    It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" state. The second is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful when you want to track a test, and its results, for a Unit Test designed to prove the existence of a Bug and then guard against its reappearance.
    Figure: Associating a Test Case with an automated Test
    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are, however, many Integration and Regression tests that may be automated which it would make sense to track in this way.
    Bug – Let's not have regressions
    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.
    Figure: Bugs are the centre of their own world
    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug.
    Figure: Bugs have many associations
    This allows you to track Bugs/Defects in your system effectively and report on them.
    Change Request – I am not a feature
    In the CMMI process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, similar to Bugs, if the Change Request is big enough that it needs to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.
    Figure: Make sure your Change Requests only affect Requirements and do not rewrite them
    Conclusion
    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member:
    - Business Analysts manage Requirements and Change Requests
    - Programmers manage Tasks and check in against Change Requests and Bugs
    - Testers manage Bugs and Test Cases
    - Build Masters manage Builds
    Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • How common is prototyping as the first stage of development?

    - by EpsilonVector
    I've been taking some software design courses in the past few semesters, and while I see the benefit in a lot of the formalism, I feel like it doesn't tell me anything about the program itself:
    - You can't tell how the program is going to operate from the Use Case spec, even though it discusses what the program can do.
    - You can't tell anything about the user experience from the requirements document, even though it can include quality requirements.
    - Sequence diagrams are a good description of how the software works as the call stack, but are very limited, and give a highly partial view of the overall system.
    - Class diagrams are great for describing how the system is built, but are utterly useless in helping you figure out what the software needs to be.
    Where in all this formalism is the bottom line: how the program looks, how it operates, and what experience it gives? Doesn't it make more sense to design off of that? Isn't it better to figure out how the program should work via a prototype and strive to implement it for real? I know that I'm probably suffering from being taught engineering by theoreticians, but I need to ask: do they do this in the industry? How do people figure out what the program actually is, not what it should conform to? Do people prototype a lot, or do they mostly use formal tools like UML and I just haven't got the hang of using them yet?

    Read the article

  • IT Optimization Plan Pays Off For UK Retailer

    - by [email protected]
    I caught this article in ComputerworldUK yesterday. The headline says that UK-based supermarket chain Morrisons is increasing its IT spend... OK, sounds good. Even nicer that Oracle is a big part of that. But what caught my eye were three things:
    1) Morrisons truly has a long-term strategy for IT – in this case, modernizing and optimizing how they use IT for business advantage.
    2) Even in a tough economic climate, Morrisons views IT investments as contributing to and improving the bottom line. Specifically, "The investment in IT contributed to a 21 percent increase in Morrison's underlying profit."
    3) The phased, 3-year "Optimization Plan" took a holistic approach to their business – from CRM and Supply Chain systems to the underlying application infrastructure.
    On the infrastructure front, adopting a more flexible Service-Oriented Architecture enabled them to be more agile and adapt their business, and Identity Management helped with sometimes mundane (but costly) issues like lost passwords and being able to document who has access to what. Things don't always turn out so rosy, and I know it was a long and difficult process... but it's nice to see a happy ending every once in a while.

    Read the article

  • Social IT guy barrier [closed]

    - by sergiol
    Possible Duplicate: How do you deal with people who ask you to fix their computer? Hello. Almost every person who deserves the title of programmer has faced the problem of people who do not even remember the mere existence of those professionals unless they have serious problems with their computer or some other IT-related issue. Maybe my post will be considered off-topic, but I think it is a very important question. As Joel Spolsky says, IT guys are not Asperger geeks; they need a social life like everybody else. But the people who are always asking favors of us can deeply damage our social and personal lives. I have experienced this myself. This has generated articles like http://www.lifereboot.com/2007/10-reasons-it-doesnt-pay-to-be-the-computer-guy/ and http://ecraazul.wordpress.com/2009/01/29/o-gajo-da-informatica-de-a-a-z/ (I received this one in my mailbox; it is in Portuguese, but I believe it is translated from English). Basically the idea is to criticize the people who are always asking us favors. It is even more annoying if you are highly specialized in some subject and a person asks you a completely out-of-context question. For example, you are a VBA programmer and somebody tells you that his or her mobile Internet pen (USB dongle) stopped working five days ago and needs your help to get it working again. When you go to a doctor to fix your legs, you don't go to an ophthalmologist; you go to an orthopedist. And you pay. I don't know how it works in other countries, but in Portugal being a doctor is such a highly valued job that they earn a great deal of money and almost nobody asks them for free favors. So, my question is: what kind of social barrier (or whatever else) do you use to protect yourself from that situation?

    Read the article

  • Effective way to check if an Entity/Player enters a region/trigger

    - by Chris
    I was wondering how multiplayer games detect if you enter a special region. Let's assume there is a map so big that simply checking it would become a huge performance issue. I've seen Bukkit (a modding API for Minecraft servers) fire an event on every single move. I don't think that larger games do the same, because even if you have only a few coordinates you are interested in, you have to loop through a number of trigger zones to see if the player is inside your region – for every player. This seems like an extremely CPU-intensive operation to me, even though I've never developed something like that. Is there a special algorithm that larger games use to accomplish this? The only thing I could imagine is to split the world up into multiple parts and to register the event not on the movement itself but on all the parts that are covered by your area, and then only check the areas registered in the current part. And another thing I would like to know: how could you detect that someone must have entered a trigger even though you never saw him directly inside it, because his client only sent you a move packet shortly before entering and another shortly after leaving the trigger area? Drawing a line and calculating all the colliding parts seems rather CPU-intensive if you have to perform it on every move.

    Read the article

  • How do I make my DualShock 3 gamepad work in Ubuntu 14.04?

    - by user290527
    When my desktop computer was running Ubuntu 12.04, my PS3 controllers would work over USB. I didn't need to do any special setup: I could just plug one in before starting SuperTuxKart and it would be recognized. I can also do this on my laptop (still running 12.04). Since I gave my desktop a fresh install of Ubuntu 14.04, the controller has never worked. I have played with some installed software that I found when looking for information. Here is what I get with xboxdrv:
        liam@Liam-CustomDesktop:~$ sudo xboxdrv --detach-kernel-driver
        xboxdrv 0.8.5 - http://pingus.seul.org/~grumbel/xboxdrv/
        Copyright © 2008-2011 Ingo Ruhnke <[email protected]>
        Licensed under GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
        This program comes with ABSOLUTELY NO WARRANTY.
        This is free software, and you are welcome to redistribute it under certain conditions; see the file COPYING for details.
        Controller:        PLAYSTATION(R)3 Controller
        Vendor/Product:    054c:0268
        USB Path:          003:012
        Controller Type:   Playstation 3 USB
        Your Xbox/Xbox360 controller should now be available as:
          /dev/input/js0
          /dev/input/event16
        Press Ctrl-c to quit, use '--silent' to suppress the event output
    So the existence of this controller is acknowledged on my computer, but it never works for input. In my old installation I didn't even need software like xboxdrv for it to work. I have never tried Bluetooth on either computer, but I don't think I even have that on my desktop. So now, how do I make my gamepad work in Ubuntu 14.04?
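    A hedged sketch of how xboxdrv is often used in this situation (the flags are standard xboxdrv 0.8.x options, but whether this works around the 14.04 regression is an assumption): keep xboxdrv running in one terminal so it presents the DualShock as an Xbox-style pad, and verify in another terminal that input actually arrives.
        # Terminal 1: leave this running while you play; --mimic-xpad makes the virtual
        # device look like a wired Xbox 360 pad, which most games expect
        sudo xboxdrv --detach-kernel-driver --mimic-xpad --silent

        # Terminal 2: confirm button presses show up on the joystick device
        sudo apt-get install joystick
        jstest /dev/input/js0
    If jstest shows the axes and buttons moving, the controller itself is fine and the problem is in how SuperTuxKart (or the session) picks up the device.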

    Read the article

  • Leveraging Code in Ever Bigger Games

    - by ashes999
    Summary: The same way that I continually build complex engines and libraries within a single platform and technology to allow me to build increasingly bigger and better games, how do I continue this when development crosses into different platforms? If I switch platforms, how do I leverage past code and experience?
    Games are hard to build. Big games are even harder to build. I've decided that to be able to make big games, I need to start building smaller games, and building up an asset base of code, assets (graphics, sounds), tools, and most importantly, game engines, so that I can eventually get there – one game at a time. Let me give an analogy. To build an MMO 3D RPG, I would approach this by building and releasing small games with increasingly more features. This could entail, for example:
    - A simple 2D game
    - A tile-based game
    - A game with RPG elements (items, equipment, monsters, battle)
    - A full-fledged RPG
    - A 3D RPG
    The problem now is that if I have to change platforms or tools, I don't know how to leverage past code bases (and experience) to start with a mature product. Right now, I'm writing Silverlight (FlatRedBall) games. Let's say I stick with this for ten years, and then suddenly decide to write a PS6 game, which is in a different programming language entirely. Granted, I have ten years of game-development experience (and correspondingly ten years of professional software development experience from my day job) to back me up. But I would still like some way to transplant that 2D RPG engine into the new programming language, or else leverage it somehow. Is this even possible? What are my options?

    Read the article

  • Basket Analysis with #dax in #powerpivot and #ssas #tabular

    - by Marco Russo (SQLBI)
    A few days ago I published a new article on the DAX Patterns web site describing how to implement Basket Analysis in DAX. This topic is a classical one and is also covered in the many-to-many revolution white paper. It has also been discussed in several blog posts, listed here in historical order:
    - Simple Basket Analysis in DAX, by Chris Webb
    - PowerPivot, basket analysis and the hidden many to many, by Alberto Ferrari
    - Applied Basket Analysis in Power Pivot using DAX, by Gerhard Brueckl
    As usual, in DAX Patterns we try to present the required DAX formulas in a way that is easy to adapt to specific models. We also try to show a good implementation from a performance point of view. Further optimizations are always possible in DAX; however, in order to keep the model simple to adapt to different scenarios, we avoid presenting optimizations that would require particular assumptions or restrictions on the data model. I hope you will find the Basket Analysis pattern useful. Even if you do not need it today, reading the DAX formula is a good exercise to check your knowledge of evaluation contexts in DAX. For example, describing how the following expression works is not a trivial task!
        [Orders with Both Products] :=
        CALCULATE (
            DISTINCTCOUNT ( Sales[SalesOrderNumber] ),
            CALCULATETABLE (
                SUMMARIZE ( Sales, Sales[SalesOrderNumber] ),
                ALL ( Product ),
                USERELATIONSHIP ( Sales[ProductCode], 'Filter Product'[Filter ProductCode] )
            )
        )
    The good news is that you can use the patterns even if you do not really understand all the details of the DAX formulas you are using! Any feedback on this new pattern is very welcome.

    Read the article

  • Standards for how developers work on their own workstations

    - by Jon Hopkins
    We've just come across one of those situations which occasionally comes up when a developer goes off sick for a few days mid-project. There were a few questions about whether he'd committed the latest version of his code or whether there was something more recent on his local machine we should be looking at, and we had a delivery to a customer pending, so we couldn't wait for him to return. One of the other developers logged on as him to check and found a mess of workspaces, many seemingly of the same projects, with timestamps that made it unclear which one was "current" (he was prototyping some bits on versions of the project other than his "core" one). Obviously this is a pain in the neck; however, the alternative (which would seem to be strict standards for how each developer works on their own machine, to ensure that any other developer can pick things up with a minimum of effort) is likely to break many developers' personal workflows and lead to inefficiency on an individual level. I'm not talking about standards for checked-in code, or even general development standards; I'm talking about how a developer works locally, a domain generally considered (in my experience) to be almost entirely under the developer's own control. So how do you handle situations like this? Is this one of those things that just happens and you have to deal with it, the price you pay for developers being allowed to work in the way that best suits them? Or do you ask developers to adhere to standards in this area – use of specific directories, naming standards, notes on a wiki or whatever? And if so, what do your standards cover, how strict are they, how do you police them, and so on? Or is there another solution I'm missing? [Assume for the sake of argument that the developer cannot be contacted to talk through what he was doing here – even if he could, knowing and describing which workspace is which from memory isn't going to be simple and flawless, sometimes people genuinely can't be contacted, and I'd like a solution which covers all eventualities.]

    Read the article

  • Can thousands of backlinks from the same site harm PageRank?

    - by Dejan Pelzel
    I just noticed that one particular site has almost 7,000 backlinks pointing back to our website. The site is something like a news aggregator, and for each post they created around 20 (sometimes many more) backlinks to our page; they have basically linked over 400 of our pages. I am beginning to get concerned that this number of links might harm our page. They seem to have more backlinks to our page than all the other pages combined, and more backlinks than our website has pages. We have seen a massive negative effect going on for quite a while, and the PageRank seems to have dropped to None (not even 0), but I am not sure when and why exactly that happened, seeing that PageRank updates take quite a while to appear. The site linking to us is otherwise pretty reputable and doesn't seem to be having any problems with its rank (PR 6, actually). I was thinking of using the Google disavow tool on this site, but I don't want to make things even worse. Do you think these links are harmful? If so, how do I fix this? Thanks :)
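    If the disavow route is eventually taken, a hedged sketch of the uploaded file (the domain below is a placeholder): disavowing the aggregator at the domain level is far simpler than listing the ~7,000 individual URLs.
        # disavow.txt - submitted through Google's Disavow Links tool
        # Lines starting with # are comments; one domain: or URL entry per line
        domain:news-aggregator-example.com
    That said, disavowing an otherwise reputable PR 6 site is exactly the "make things worse" risk mentioned above, so it is worth confirming the links are actually the problem first.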

    Read the article

  • Cloud computing - database loading question

    - by workwise
    Following is the situation; I want to know whether what I want is possible in cloud computing and whether it is the best way for me:
    1) My main site has a database with tables with millions of rows, and entries are added almost every second.
    2) I will set up a MySQL mirror, so there will be a backup database always in sync with the main one.
    3) There are a few tens of thousands of images, and growing, so say the total size of the images is a few tens of gigabytes. I will also keep the image data in sync on the backup server.
    4) There can be short periods where traffic goes to 100x the average traffic.
    5) I will be using memcache heavily – most database data and even frequently used disk files/images will be in RAM.
    I want the main site to run on a dedicated server. The backup server would be, say, an Amazon EC2 instance. Note that since it is a live backup, I need to run a small instance continuously. When I anticipate high traffic, I want to be able to spin up a large instance on the cloud and transfer the traffic there. The main point is that I do not want to spend time "loading" the database onto the large instance, as that can typically take a few minutes or even hours (from experience). So is it possible to just scale the memory/CPU on demand, without having to load the database or sync up the filesystem? I want to set up my backup scripts etc. just ONCE. Thanks, JP
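    On the resizing point specifically, here is a hedged sketch using the AWS CLI (the instance ID and type are placeholders, and this assumes an EBS-backed instance, where the root volume and its data survive a stop/start): the instance type is changed in place, so the database files do not need to be reloaded or re-synced.
        # Stop the EBS-backed instance, change its type, then start it again
        aws ec2 stop-instances --instance-ids i-0123456789abcdef0
        aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
            --instance-type "{\"Value\": \"m3.2xlarge\"}"
        aws ec2 start-instances --instance-ids i-0123456789abcdef0
    The memcache contents will still be cold after the restart, so some warm-up time is unavoidable even though the MySQL data itself stays on the volume.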

    Read the article

  • Meet The MySQL Experts Podcast: MySQL Utilities

    - by Wei-Chen Chiu
    Managing a MySQL database server can become a full-time job. On many occasions one MySQL DBA needs to manage multiple – even tens of – MySQL servers, and tools that bundle a set of related tasks into a common utility can be a big time saver, allowing you to spend more time improving performance and less time executing repetitive tasks. While there are several such utility libraries to choose from, it is often the case that you need to customize them to your needs. The MySQL Utilities library is the answer to that need. It is open source, so you can modify and expand it as you see fit. In the latest episode of the "Meet the MySQL Experts" podcast series, Chuck Bell, Sr. MySQL Software Developer at Oracle, introduces a variety of recently released MySQL Utilities and explains how DBAs can save significant time using them. Listen to the podcast and learn the highlights in 10 minutes. If you want further details, attend the on-demand webinar for a more complete introduction, including:
    - Use cases for each utility
    - How to group utilities for even more usability
    - How to modify utilities for your needs
    - How to develop and contribute new utilities
    Enjoy!
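    As a flavour of what the library looks like in practice, a hedged sketch (the utility names come from MySQL Utilities, while the hosts, users and schema names are placeholders):
        # Compare the schema of the same database on two servers
        mysqldiff --server1=dba@prod-db1 --server2=dba@prod-db2 shop:shop

        # Clone one user's grants to a new account on another server
        mysqluserclone --source=dba@prod-db1 --destination=dba@prod-db2 \
            'app'@'%' 'app_ro'@'%'

        # Quick inventory of a server
        mysqlserverinfo --server=dba@prod-db1 --format=vertical
    Because the utilities are plain Python scripts, customizing one (as the podcast suggests) is mostly a matter of copying it and editing the options it exposes.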

    Read the article

  • Ubuntu 12.04.2 Dual boot UEFI Windows 8 Preinstalled CX21903W Ultrabook

    - by user180782
    Hi, I have a problem trying to install Ubuntu. The machine is a CX Ultrabook, model CX.21903W, Intel i5 with a 500 GB hard disk, 8 GB RAM and a 32 GB SSD. Following "Installing Ubuntu on a Pre-Installed Windows 8 (64-bit) System (UEFI Supported)", and according to the step-by-step guide:
    1 - We created a 70 GB partition from within Win8 using Win8's own tool.
    2 - Confirm-SecureBootUEFI = True.
    3 - From Win8, Shift + Restart, and from the special menu we selected the UEFI Firmware Settings.
    4 - The BIOS offers: Option 1) Disable Secure Boot; Option 2) Disable UEFI (not available). From Option 1, three outcomes are possible:
    - With Secure Boot enabled, we can't even boot Ubuntu: a red window appears saying the software is improperly signed.
    - With Secure Boot disabled and the boot device order set to 1: UEFI: USB, 2: Windows Boot Manager, 3: Others, with CSM (Compatibility Support Module) enabled – GRUB appears, but selecting "Try Ubuntu" gives a black screen and nothing happens. The same happens if "Install Ubuntu" is selected.
    - With Secure Boot disabled and the boot device order set to 1: USB (no UEFI), 2: Windows Boot Manager, 3: Others, with CSM enabled – GRUB appears, selecting "Try Ubuntu" boots Ubuntu, and we can even install it.
    5 - After rebooting and just changing the boot order to 1: Ubuntu, 2: Windows Boot Manager, 3: Others, nothing happens.
    6 - We booted from the live USB again and, as instructed, ran Boot-Repair (a warning window said Ubuntu is working in legacy mode).
    7 - After saving the changes and rebooting, GRUB works, but selecting Ubuntu gives a black screen and nothing happens. Selecting Win8, Win8 boots and works.
    So far we cannot complete the Ubuntu installation. Any suggestion will be welcome. Kind regards and thanks in advance.
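    A hedged diagnostic step, assuming the Boot-Repair warning means Ubuntu ended up installed in legacy/CSM mode while the firmware will only chain-load UEFI entries: boot the live USB in UEFI mode (the first boot-order variant above) and inspect the firmware's boot entries before re-running Boot-Repair.
        # From a live session booted in UEFI mode: list the firmware boot entries
        sudo efibootmgr -v

        # Check whether the EFI System Partition already contains an ubuntu/ folder
        # (/dev/sda2 is an assumption; the ESP is the small FAT32 partition, usually 100-300 MB)
        sudo mount /dev/sda2 /mnt
        ls /mnt/EFI

        # If an "ubuntu" entry exists but Windows Boot Manager sits first, reorder it
        # (the 4-digit numbers come from the efibootmgr -v output above)
        sudo efibootmgr -o 0003,0000
    If efibootmgr shows no "ubuntu" entry at all, that matches the legacy-mode warning, and re-running the installation or Boot-Repair from a UEFI-booted live session is the usual way to get one created.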

    Read the article

  • Why is it that software is still easily pirated today?

    - by mohabitar
    I've always been curious about this. Now, I wouldn't call myself a programmer yet, but I'm learning, so maybe the answer to this is obvious. It just seems a little hard to believe that with all of our technological advances and the billions of dollars spent on engineering the most unbelievable and mind-blowing software, we still have no other means of protecting against piracy than a "serial number/activation key". I'm sure a ton of money, maybe even billions, went into creating Windows 7 or Office and even Snow Leopard, yet I can get them for free in less than 20 minutes. Same for all of Adobe's products, which are probably the easiest. I guess my question is: can there exist a foolproof and hack-proof method of protecting your software against piracy? If not realistically, is it at least theoretically possible? Or, no matter what mechanisms these companies deploy, can hackers always find a way around them? EDIT: So apparently, the answer is no. There's pretty much no way. And so I'm sure these big companies have realized this as well. Should they adopt another sales model rather than charging a crapload for their software? (I know it's justified and they put a lot of hard work into their software, but it's still a lot of money.) Are there any alternative solutions that would benefit both the company and the user (e.g. if you purchase our product, we'll apply $X to your account that will apply to future purchases from our company)?

    Read the article

  • .NET development on Macs

    - by Jeff
    I posted the "exciting" conclusion of my laptop trade-ins and issues on my personal blog. The links, in chronological order, are posted below. While those posts have all of the details about performance and the software used, I wanted to comment on why I like using Macs in the first place. It started in 2006 when Apple released the first Intel-based Mac. As someone with a professional video past, I had been using Macs on and off since college (1995 graduate), so I was never terribly religious about any particular platform. I'm still not, but until recently it was staggering how crappy PCs were. They were all plastic, disposable, commodity crap. I could never justify buying a PowerBook because I was a Microsoft stack guy. When Apple went Intel, they removed that barrier. They also didn't screw around with selling to the low end (though the plastic MacBooks bordered on that), so even the base machines were pretty well equipped. Every Mac I've had, I've used for three years. Other than the first one, I've also sold each one, for quite a bit of money. Things have changed quite a bit, mostly within the last year. I'm actually relieved, because Apple needs competition at the high end. Other manufacturers are finally understanding the importance of industrial design. For me, I'll stick with Macs for now, because I'm invested in OS X apps like Aperture and the Mac versions of Adobe products. As a Microsoft developer, it doesn't even matter though… with Parallels, I Cmd-Tab and I'm in Windows. So after three and a half years with a wonderful 17" MBP and an upgraded SSD, it was time to get something lighter and smaller (traveling light is critical with a toddler), and I eventually ended up with a 13" MacBook Air with the i7 and 8 GB upgrades, and I love it. At home I "dock" it to a Thunderbolt Display.
    - A new laptop
    - .NET development on a Retina MacBook Pro with Windows 8
    - Returning my MacBook Pro with Retina display
    - .NET development on a MacBook Air with Windows 8

    Read the article
