Search Results

Search found 1517 results on 61 pages for 'peter burg'.

  • Creating a game engine in C++ and Python - Where do I start? [closed]

    - by Peter
Yes, you read correctly: how does humble ole Peter here make an engine using the two languages he's proficient in (to an extent)? I have more than enough time and wish not to use any 3rd-party "stuffs" (engine parts like methods, classes, etc.; fully from scratch). If anyone could PLEASE explain how this is done, I will love you forever. Thanks for reading; hoping for some productive answers. Thank you very much. EDIT: Re-read what I've said for the 4th time; forgot to mention: 2D sprite-based, with voxels and physics. :D

  • EC2 Ubuntu - Force instance to use internal IP

    - by Peter
I've just set up a micro instance on EC2 (AMI ID ami-e59ca991). I had hoped to avoid charges for a year, as my usage falls well within the bounds of the free tier, but I have been charged $0.01 for "regional data transfer". I read here that this is because my instance is talking to itself via its external IP address. From what I've Googled, it looks like you can stop the charges by making sure that the instance uses its internal IP address. However, when I ping the hostname of my instance internally (via an SSH session), it resolves to the instance's internal IP address. How can I configure my instance so that I do not get these charges? Is it as simple as adding a line to my hosts file? Additionally, is this the real reason for the charge? I'm concerned that I've misunderstood the pricing somewhere. I have Apache and MySQL (with phpMyAdmin) running on the machine; could I be being charged for data transfer associated with these? (I have only one flat HTML page, I have only logged in via phpMyAdmin, and I have no data in my database.) Edit: Additionally, my user account on MySQL was declared as:

    grant all privileges on *.* to 'peter'@'localhost';

Should I have instead used the internal hostname for the instance?

    grant all privileges on *.* to '[email protected]';

Cheers, Pete
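For what it's worth, a minimal sketch of the hosts-file idea mentioned above, assuming the aim is to pin the instance's public hostname to its internal 10.x address (both values below are illustrative; the real internal IP is available from the metadata service):

    # /etc/hosts on the instance (illustrative addresses)
    # internal IP from: curl http://169.254.169.254/latest/meta-data/local-ipv4
    10.112.23.5  ec2-203-0-113-25.compute-1.amazonaws.com

On the MySQL side, a 'peter'@'localhost' grant should be fine as long as clients connect via localhost; a grant for another host would only matter if something connects using the instance's hostname.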

  • VERR_NOT_SUPPORTED when trying to create a windows 8 image in Virtualbox

    - by Bart Burg
I'm trying to create a VirtualBox image of the Windows 8 Consumer Preview. I tried this tutorial: http://www.addictivetips.com/windows-tips/how-to-install-windows-8-on-virtualbox/ and did exactly what it said. At the step "Now navigate to the Windows 8 developer build ISO file that you downloaded and select it," I get the error "Could not get the storage format. (VERR_NOT_SUPPORTED)". I also get this if I use the regular wizard. Host OS: Windows 7. VirtualBox version: 4.1.16. Both the 64-bit and 32-bit Windows 8 ISOs were tested.
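Not from the original post, but when the wizard balks at a medium, attaching the ISO from the command line sometimes surfaces a more specific error (VM, controller and ISO names below are illustrative); a frequently suggested first check is to verify the ISO's hash, since an unreadable image can show up this way:

    VBoxManage storagectl "Windows 8 CP" --name IDE --add ide
    VBoxManage storageattach "Windows 8 CP" --storagectl IDE --port 1 --device 0 ^
        --type dvddrive --medium "C:\ISOs\Windows8-ConsumerPreview-64bit-English.iso"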

  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy, because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams. I will not be going in depth with Scrum, but you can find out more by reading the Scrum Guide, and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW's Rules to Better Scrum using TFS, which have been developed during our own Scrum implementations.

Acknowledgements:

- Bill Heys - Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010.
- Willy-Peter Schaub - Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance, including the Branching Guidance for TFS 2010.
- Chris Birmele - Chris wrote some of the early TFS Branching and Merging Guidance.
- Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect - Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs its own project, backlog, budget and Team.

Scenario: A product is developed. RTM 1.0 is released and gets great sales. Extra features are demanded, but the new version will cost double to recover costs; the work is approved by the guys with the budget, and a few sprints later RTM 2.0 is released. Sales are very low due to the pricing strategy, and there are lots of clients on RTM 1.0 calling out for patches.

As I keep getting Reverse Integration and Forward Integration mixed up, and Bill keeps slapping my wrists, I thought I should have a reminder:

"You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms: merge (forward integration) from parent to child (same direction as the branch), and merge (reverse integration) from child to parent (the reverse direction of the branch)." - one of my many slaps on the wrist from Bill Heys

As I mentioned previously, we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the "Main" or "Trunk" line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100%, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds, don't you?

"Sadly, this is a very common scenario and I have had people argue that branching merely adds complexity. Then again I have seen the other side of the universe ... branching structures from he... We should somehow convince everyone that there is a happy medium between no-branching and too-much-branching." - Willy-Peter Schaub, VS ALM Ranger, Microsoft

"A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch by doing a cost benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc." - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

The second biggest mistake developers make is branching anything other than the WHOLE "Main" line. If you branch parts of your code and not others, they get out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts, deployment scripts and dependencies inside the "Main" folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well, although I would not recommend that unless you have a massive SQL cluster to house your source code.

"We tried the 'add environment' back in South Africa and while it was phenomenal, especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy." - Willy-Peter Schaub, VS ALM Ranger, Microsoft

"I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA." - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

You can read SSW's Rules to Better Source Control for some additional information on what source control to use and how to use it. There are also a number of branching anti-patterns that should be avoided at all costs. You know you are on the wrong track if you experience one or more of the following symptoms in your development environment:

- Merge Paranoia: avoiding merging at all cost, usually because of a fear of the consequences.
- Merge Mania: spending too much time merging software assets instead of developing them.
- Big Bang Merge: deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously.
- Never-Ending Merge: continuous merging activity because there is always more to merge.
- Wrong-Way Merge: merging a software asset version with an earlier version.
- Branch Mania: creating many branches for no apparent reason.
- Cascading Branches: branching but never merging back to the main line.
- Mysterious Branches: branching for no apparent reason.
- Temporary Branches: branching for changing reasons, so the branch becomes a permanent temporary workspace.
- Volatile Branches: branching with unstable software assets shared by other branches or merged into another branch. (Note: branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state.)
- Development Freeze: stopping all development activities while branching, merging, and building new base lines.
- Berlin Wall: using branches to divide the development team members, instead of dividing the work they are performing.

(from the Branching and Merging Primer by Chris Birmele, Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia)

"In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium." - Willy-Peter Schaub on Merge Paranoia

"Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual. Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly." - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

Figure: Main line. This is where your stable code lives, where any build has known entities, always passes, and has happy tests that pass as well.

Many development projects consist of a single "Main" line of source and artifacts. This is good; at least there is source control. There are, however, a couple of issues that need to be considered. What happens if:

- you and your team are working on a new set of features and the customer wants a change to his current version?
- you are working on two features and the customer decides to abandon one of them?
- you have two teams working on different feature sets and their changes start interfering with each other?
- I just use labels instead of branches?

That's a lot of "what ifs", but there is a simple way of preventing this: branching.

"In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don't see how labels work here." - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

Figure: Creating a single feature branch means you can isolate the development work on that branch.

It's standard practice for large projects with lots of developers to use feature branching, and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers on other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The Main branch has its permissions changed so contributors to the project can only Branch and Merge with "Main". This will prevent accidental check-ins or check-outs of the "Main" line that would contaminate the code. The developers continue to develop on Sprint 1 until the completion of the sprint.

Note: In the real world, starting a new greenfield project, this process starts at Sprint 2, as at the start of Sprint 1 you would have no artifacts in version control and no need for isolation.

Figure: Once the sprint is complete, the Sprint 1 code can then be merged back into the Main line.

There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but it builds good muscle memory into your developers' work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product.

The process of completing your sprint development (a command-line sketch of the two merge steps appears at the end of this post):

1. The Team completes their work according to their definition of done.
2. Merge from "Main" into "Sprint1" (Forward Integration).
3. Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches (we will talk about Release branches later).
4. Merge from "Sprint1" into "Main" to commit your changes (Reverse Integration).
5. Check in.
6. Delete the Sprint1 branch. Note: the Sprint 1 branch is no longer required, as its useful life has concluded.
7. Check in.
8. Done.

But you are not yet done with the Sprint. The goal in Scrum is to have a "potentially shippable product" at the end of every Sprint, and we do not have that yet; we only have finished code.

Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing.

"In 99% of all projects I have been involved in or watched, a 'shippable product' only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80's brainwashing that we only ship when we reach the agreed quality and business feature bar." - Willy-Peter Schaub, VS ALM Ranger, Microsoft

Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable, unchanging code. This is where you do what we at SSW call a "Test Please". This is first an internal test of the product to make sure it meets the needs of the customer, for which you generally use a resource external to your Team. Then a "Test Please" is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please in our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?

Figure: If you find a deviation from the expected result you fix it on the Release branch.

If during your final testing or your "Test Please" you find there are issues or bugs, you should fix them on the Release branch. If you can't fix them within the time box of your Sprint, you will need to create a Bug and put it onto the backlog for prioritization by the Product Owner. Make sure you leave plenty of time after your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches.

Even once you have stabilized and released, you should not delete the Release branch as you would the Sprint branch. It has a usefulness for servicing that may extend well beyond the limited life you expect of it.

Note: Don't get forced by the business into adding features to a Release branch; that indicates the unspoken requirement is a product spin-off. In that case you can create a new Team Project, branch from the required Release branch to create a new Main branch for that product, and create a whole new backlog to work from.

Figure: When the Team decides it is happy with the product you can create an RTM branch.

Once you have fixed all the bugs you can, added any you can't to the Product Backlog, and your Team is happy with the result, you can create a Release. This consists of doing the final Build and packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you "shipped". This is really an audit-trail branch that is optional, but it is good practice. You could use a Label, but Labels are not auditable; with a branch, if a dispute is raised by the customer, you can produce a verifiable version of the source code for an independent party to check. Rare, I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch, the RTM branch should never be deleted, or only deleted according to your company's legal policy, which in the UK usually means keeping it for 7 years.

Figure: If you have made any changes in the Release you will need to merge back up to Main in order to finalize the changes.

Nothing is really ever done until it is in Main. The same rules apply when merging any fixes in the Release branch back into Main: do a forward merge before a reverse merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer's investment.

Note: In order to really achieve protection for both you and your client, you would add Automated Builds, Automated Tests, Automated Acceptance Tests, acceptance-test tracking, Unit Tests, Load Tests, Web Tests and all the other good engineering practices that help produce reliable software.

Figure: After the Sprint Planning meeting the process begins again.

Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2, you can create your new branch to develop in.

How do we handle bugs in production that can't wait? Although in Scrum the only work done should be on the backlog, a little buffer should be added at Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can't wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities:

"In reality Sprint 2 starts when Sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of Sprint 1? It would introduce FIs from Main to Sprint 2, I guess. Your 'Figure: Merging Sprint 2 back into Main' covers what I tend to believe to be reality in most cases." - Willy-Peter Schaub, VS ALM Ranger, Microsoft

I agree, and if you have a single Scrum Team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, packaging and release should be included in the Sprint time box. If more runs are needed on the current production release during the Sprint 2 time box, resource needs to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost):

1. Backlog: add the bug to the backlog and fix it in the next Sprint.
2. Buffer Time: use any buffer time included in the current Sprint to fix the bug quickly.
3. Make Time: remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: the Team must agree that it can still meet the Sprint Goal.
4. Cancel Sprint: cancel the sprint and concentrate all resource on fixing the bug(s). Note: this can be very costly if the current sprint has already had a lot of work completed, as that work will be lost.

The choice will depend on the complexity and severity of the bug(s), and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3, as the bugs are uncomplicated but severe.

Figure: Real-world issue where a bug needs to be fixed in the current release.

If the bug(s) is urgent enough, your only option is to fix it in place. You can edit the Release branch to find and fix the bug, hopefully creating a test so it can't happen again. Follow the prior process and conduct an internal and customer "Test Please" before releasing. You can read about how to conduct a Test Please in our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?

Figure: After you have fixed the bug you need to ship again.

You then need to again create an RTM branch to hold the version of the code you released in escrow.

Figure: Main is now out of sync with your Release.

We now need to get these new changes back up into the Main branch. Do a forward and then a reverse merge again to get the new code into Main. But what about the branch; are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main, and does Main now have changes that are not in Sprint 2? Well, yes... and this is part of the hit you take when branching. But would this scenario even have been possible without branching?

Figure: Getting the changes in Main into Sprint 2 is very important.

The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2; maybe the bug no longer exists! This needs to be identified and resolved by the developers before they get further out of sync with Main. Note: avoid the "Big Bang Merge" at all costs.

Figure: Merging Sprint 2 back into Main (the Reverse Integration); R0 terminates.

Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.

Figure: The logical conclusion.

This then allows the creation of the next release. By now you should be getting the big picture, and hopefully you have learned something useful from this post. I know I have enjoyed writing it, as I find these exploratory posts coupled with real-world experience really help harden my understanding. Branching is a tool; it is not a silver bullet. Don't overuse it, and avoid anti-patterns where possible. Although the diagram above looks complicated, I hope that showing you how it is formed simplifies it as much as possible.

Technorati Tags: Branching, Scrum, VS ALM, TFS 2010, VS2010
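A minimal sketch of the Forward and Reverse Integration merges described above, assuming a mapped TFS 2010 workspace and the stock tf.exe client (the $/Product/... server paths are illustrative, not from the original post):

    rem Forward Integration: merge Main down into the sprint branch first
    tf merge $/Product/Main $/Product/Sprint1 /recursive
    tf checkin /comment:"FI: Main -> Sprint1"

    rem Reverse Integration: merge the stabilized sprint branch back to Main
    tf merge $/Product/Sprint1 $/Product/Main /recursive
    tf checkin /comment:"RI: Sprint1 -> Main"

    rem Retire the sprint branch once its useful life has concluded
    tf delete $/Product/Sprint1
    tf checkin /comment:"Remove Sprint1 branch"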

  • Google I/O 2011: Fireside Chat with the App Engine Team

Max Ross: Max is a Software Engineer on the App Engine team, where he leads development of the datastore and occasionally tinkers with the Java runtime. He is also the founder of the Hibernate Shards project. Also featuring Alon Levi, Sean Lynch, Greg Dalesandre, Guido van Rossum, Brett Slatkin, Peter Magnusson, Mickey Kataria and Peter McKenzie. A fireside chat with the App Engine team. From: GoogleDevelopers. Time: 01:01:25.

  • NYC Silverlight FireStarter - June 5th 2010 at the NYC Microsoft Office

    - by Sam Abraham
On Saturday, June 5th, 2010, I spent the morning at the NYC Silverlight FireStarter. Presenting were Peter Laudati from Microsoft and Jason Beres, Matt Van Horn and Todd Snyder from Infragistics. I watched the simulcast for the morning sessions as I was tied up with some work, but ended up finally making it to the Microsoft office and had the opportunity to attend the last hour of the event in person.

For me, the quality of the simulcast was as good as in-person attendance as far as sound and video quality and interaction with the speakers. In the background was a screen with tweets from remote attendees asking questions or commenting on the presentations, and the presenters did periodically stop to answer the tweeted questions as well as questions from attendees. The only thing I missed was getting my hands on some of that swag that was (literally) flying in the air on the event floor.

Upon my arrival at the Microsoft office location in NYC, I spoke with Rachel Appel and Peter Laudati, asking for permission to take a few photos to record the outstanding effort that went into putting this event together. Both agreed, and I started putting my photography skills to work.

You can always gauge the quality of an event by the number of attendees who opt to stay till the last minute and by the level of interaction between the audience and the speaker. With most of the FireStarter attendees remaining till the very end of the talk, and with the many questions that were asked, one can simply judge the event a success by my aforementioned criteria.

Evaluation forms were passed around, and Peter strongly encouraged the audience to openly speak their mind as they recorded their comments. I didn't get to submit my evaluation as I was busy recording the event in photos, so here it goes: I believe a lot of hard work was put into making this event a reality. The quality of the speakers, the topics and the level of geekiness at the event was outstanding. Overall, aside from a minor issue with lunch delivery time, this event was of high quality, and I am very sure everyone's evaluation will be in line with my analysis of it being a great success. Below are a few photos of the event.

--Sam Abraham
Site Director - West Palm Beach .Net User Group
www.Fladotnet.com

NYC Silverlight FireStarter speakers, from left to right: Peter Laudati, Todd Snyder, Matt Van Horn & Jason Beres
As Jason wasn't quite visible in the above photo, a closeup was taken. (It was Jason's birthday and he had to leave a bit early, so the Infragistics team thought outside the box...)
Full room - that was at the last hour of the event
Another view of the full room
Discussions during the break
End-of-event raffle

  • How easy is it to alter a browser fingerprint?

    - by JFig
I am researching this question for a possible paper. Given the exploitation of user identities for risk management and market tracking, how easy is it to alter a browser enough to throw off fingerprinting techniques? My current sources are the EFF Panopticlick project (https://www.eff.org/deeplinks/2010/01/primer-information-theory-and-privacy) and Peter Eckersley's follow-up presentation at DEF CON 18: http://privacy-pc.com/articles/how-safe-is-your-browser-peter-ackersley-on-personally-identifiable-information-basics.html
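For intuition about the linked EFF primer (my own illustration, not from either source): an attribute observed with population frequency p contributes about -log2(p) bits toward a unique fingerprint, so altering the rarest attributes buys the most anonymity.

    import math

    def surprisal_bits(p: float) -> float:
        # Bits of identifying information carried by an attribute
        # observed with population frequency p.
        return -math.log2(p)

    # e.g. a User-Agent string shared by 1 in 1,500 browsers:
    print(round(surprisal_bits(1 / 1500), 1))  # ~10.6 bits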

  • The internal storage of a DATETIME2 value

    - by Peter Larsson
Today I went investigating the internal storage of the DATETIME2 datatype. What I found out is that for a datetime2 value with precision 0 (seconds only), SQL Server needs 6 bytes to represent the value, but stores 7 bytes. This is because SQL Server adds one byte that holds the precision for the datetime2 value. Start with this very simple repro:

    declare @now datetime2(7) = '2010-12-15 21:04:03.6934231'

    select  cast(cast(@now as datetime2(0)) as binary(7)),
            cast(cast(@now as datetime2(1)) as binary(7)),
            cast(cast(@now as datetime2(2)) as binary(7)),
            cast(cast(@now as datetime2(3)) as binary(8)),
            cast(cast(@now as datetime2(4)) as binary(8)),
            cast(cast(@now as datetime2(5)) as binary(9)),
            cast(cast(@now as datetime2(6)) as binary(9)),
            cast(cast(@now as datetime2(7)) as binary(9))

Now we are going to copy and paste these binary values and investigate which value represents what time part.

    Prefix  Ticks (hex)  Ticks (dec)   Days (hex)  Days (dec)  Original value
    0x00    442801              75844  A8330B          734120  0x00442801A8330B
    0x01    A5920B             758437  A8330B          734120  0x01A5920BA8330B
    0x02    71BA73            7584369  A8330B          734120  0x0271BA73A8330B
    0x03    6D488504         75843693  A8330B          734120  0x036D488504A8330B
    0x04    46D4342D        758436934  A8330B          734120  0x0446D4342DA8330B
    0x05    BE4A10C401     7584369342  A8330B          734120  0x05BE4A10C401A8330B
    0x06    6FEBA2A811    75843693423  A8330B          734120  0x066FEBA2A811A8330B
    0x07    57325D96B0   758436934231  A8330B          734120  0x0757325D96B0A8330B

The columns are: Prefix (the precision byte), Ticks (the time part) and Days (the day part). What you can see is that the day part is equal in all cases, which makes sense since the precision doesn't affect the date part.

What would have been fun is datetime2(negative), just like ROUND accepts a negative value: -1 would mean rounding to 10 seconds, -2 rounding to the minute, -3 rounding to 10 minutes, -4 rounding to the hour and finally -5 rounding to 10 hours. -5 is pretty useless, but if you extend this thinking to -6, -7 and so on, you could actually get a datetime2 value which is accurate to the month only. Well, enough ranting about this; let's get back to the table above. If you add 75844 seconds to midnight, you get 21:04:04, which is exactly what you got in the select statement above. And if you look at it, it makes perfect sense that each following value is 10 times greater when the precision is increased one step too. //Peter
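As a quick sanity check of the split above (my own sketch, assuming the day part counts days from 0001-01-01 and the precision-0 tick part counts whole seconds):

    declare @days int = 734120;  -- 0xA8330B read little-endian
    declare @secs int = 75844;   -- 0x442801 read little-endian
    select dateadd(second, @secs, dateadd(day, @days, cast('0001-01-01' as datetime2(0))));
    -- returns 2010-12-15 21:04:04, matching the repro above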

  • Does not recognize usb sticks and drives

    - by Peter
When connecting any USB stick to my ThinkPad, Ubuntu 10.10 does not recognize it; I don't see anything on the desktop. The output of "dmesg | tail -n10" gives me:

    [ 1965.696388] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1965.884537] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1966.072503] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1966.260349] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1966.506227] usb 1-1: new high speed USB device using ehci_hcd and address 9
    [ 1966.572375] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1966.760379] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1966.948358] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1967.136335] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1967.325423] hub 1-0:1.0: unable to enumerate USB device on port 1

When connecting my USB scanner to the same port:

    [ 2008.480135] usb 1-1: new high speed USB device using ehci_hcd and address 65
    [ 2008.548389] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2008.736786] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2008.924379] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2009.112348] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2009.300443] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2009.488536] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2009.732180] usb 1-1: new high speed USB device using ehci_hcd and address 71
    [ 2014.796299] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2018.000128] usb 2-1: new full speed USB device using uhci_hcd and address 3

And Ubuntu 10.10 recognizes that scanner. So: what can I do to see my USB stick? BTW: on my other ThinkPad, running Fedora 14, it works perfectly... Cheers -Peter
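Not part of the original question, but the usual next diagnostic steps here would be something like the following (the last line assumes ehci_hcd was built as a loadable module, which is not the case on every kernel):

    lsusb                           # does the stick enumerate at all?
    tail -f /var/log/kern.log       # watch kernel events live while replugging
    sudo modprobe -r ehci_hcd && sudo modprobe ehci_hcd   # recycle the EHCI host driver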

  • My first encounter with SmartAssembly

    - by Peter Larsson
Let me start by writing that I am a supreme VB6 programmer, but I have very little experience with VB.Net, so I think I still need some more time to learn SmartAssembly. SmartAssembly makes obfuscating and merging DLL files a piece of cake! With its simple, straightforward and clean GUI, I did make my tests work. With other obfuscators like Xenocode, Salamander etc., which let you (and in some cases force you to) control more advanced settings, you really have to know what you are doing, especially when it comes to protecting code that uses external dependencies. My most annoying experience is that if you start checking radio buttons and activating different obfuscating features in SmartAssembly, you will end up breaking your working code as well if you, like me, are not that experienced and don't know what you're doing. SmartAssembly has some troubleshooting information on its website which explains why the application will fail in some scenarios. So why not extend these checks into some deeper analyzing stage on the DLLs? By doing that, I think more people could get fully functional DLLs out of the box instead of trying different settings and then testing the protected DLL to see whether it works. //Peter

  • A story from SQLvdb and Idera

    - by Peter Larsson
A year or so back, I struggled with some consistency problems, so I figured out I needed a way to "mount" backup files as a virtual database. At the time (SQL Server 2005 and SQL Server 2008) my choice fell on Idera's SQLvdb because it felt easy enough to use. I used it a few times and it worked great. Some time later we upgraded to SQL Server 2008 R2 and I didn't use SQLvdb for a long time. Until yesterday... I was upset that suddenly SQLvdb took more than 2 hours to mount the backup file (if it succeeded at all). I uninstalled the application and went for lunch. After lunch, I decided to give SQLvdb another chance, so I emailed their tech support and got a response within 30 minutes or so. Now, since I am a SQL Server MVP, they gave me a different serial number than my first, and I downloaded and installed a newer version. But this version too was really slow. I emailed back with the additional information they requested, and to my surprise I had an email this morning when I came back to work, in which Idera explained some of the issues (bugs) and asked me to test a newer version. I did, and now a fresh mount of a 100 GB database (compressed to 20 GB with native compression) located on our SAN takes less than 6 minutes! Thank you. //Peter

  • The internal storage of a DATETIMEOFFSET value

    - by Peter Larsson
Today I went investigating the internal storage of the DATETIMEOFFSET datatype. What I found out is that for a datetimeoffset value with precision 0 (seconds only), SQL Server needs 8 bytes to represent the value, but stores 9 bytes. This is because SQL Server adds one byte that holds the precision for the datetimeoffset value. Start with this very simple repro:

    declare @now datetimeoffset(7) = '2010-12-15 21:04:03.6934231 +03:30'

    select  cast(cast(@now as datetimeoffset(0)) as binary(9)),
            cast(cast(@now as datetimeoffset(1)) as binary(9)),
            cast(cast(@now as datetimeoffset(2)) as binary(9)),
            cast(cast(@now as datetimeoffset(3)) as binary(10)),
            cast(cast(@now as datetimeoffset(4)) as binary(10)),
            cast(cast(@now as datetimeoffset(5)) as binary(11)),
            cast(cast(@now as datetimeoffset(6)) as binary(11)),
            cast(cast(@now as datetimeoffset(7)) as binary(11))

Now we are going to copy and paste these binary values and investigate which value represents what time part.

    Prefix  Ticks (hex)  Ticks (dec)   Days (hex)  Days (dec)  Offset (hex)  Offset (dec)  Original value
    0x00    0CF700              63244  A8330B          734120  D200                   210  0x000CF700A8330BD200
    0x01    75A609             632437  A8330B          734120  D200                   210  0x0175A609A8330BD200
    0x02    918060            6324369  A8330B          734120  D200                   210  0x02918060A8330BD200
    0x03    AD05C503         63243693  A8330B          734120  D200                   210  0x03AD05C503A8330BD200
    0x04    C638B225        632502470  A8330B          734120  D200                   210  0x04C638B225A8330BD200
    0x05    BE37F67801     6324369342  A8330B          734120  D200                   210  0x05BE37F67801A8330BD200
    0x06    6F2D9EB90E    63243693423  A8330B          734120  D200                   210  0x066F2D9EB90EA8330BD200
    0x07    57C62D4093   632436934231  A8330B          734120  D200                   210  0x0757C62D4093A8330BD200

The columns are: Prefix (the precision byte), Ticks (the time part), Days (the day part) and Offset (the UTC offset in minutes). What you can see is that the day part is equal in all cases, which makes sense since the precision doesn't affect the date part. If you add 63244 seconds to midnight, you get 17:34:04, which is the correct UTC time. So what is stored is the UTC time, and the local time can be found by adding "UTC offset" minutes. And if you look at it, it makes perfect sense that each following value is 10 times greater when the precision is increased one step too. //Peter
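The same sanity check as in the DATETIME2 post, extended with the offset suffix (my own sketch; 0xD200 read little-endian is 210 minutes, i.e. +03:30):

    declare @days int = 734120;    -- 0xA8330B read little-endian
    declare @utcsecs int = 63244;  -- 0x0CF700 read little-endian
    declare @offsetmin int = 210;  -- 0xD200 read little-endian, in minutes
    select switchoffset(
               todatetimeoffset(dateadd(second, @utcsecs,
                   dateadd(day, @days, cast('0001-01-01' as datetime2(0)))), 0),
               @offsetmin);
    -- returns 2010-12-15 21:04:04 +03:30, matching the repro above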

  • Ubuntu One has high CPU usage but no syncing

    - by Peter
Over the weekend I updated my computer to Windows 8. So far Ubuntu One had been running smoothly, but ever since the update (a clean, new install) Ubuntu One doesn't sync any more. In Windows 7 it would start to sync at full internet speed as soon as I dropped a file, but now in Windows 8, as soon as I drop a new file into the Ubuntu One folder, CPU usage goes up to about 50% and no network traffic occurs. It stays like that for a couple of minutes, CPU usage goes back to normal, and then the client says that all is in sync - which isn't true. Is it too early for Windows 8? Do others have the same problem, or is there something I can do about it? I tried a couple of different things and realized that if the file size is 20 MB, the files do get uploaded. The original file was 1.5 GB. It also didn't work with 200, 100 and 50 MB files. But even with 20 MB files, the upload is very slow and not steady. The log gives plenty of this error, whose meaning I don't know:

    - twisted - ERROR - Failure: exceptions.TypeError

By the way, the account works just fine on the Ubuntu 12.04 partition. Any help is greatly appreciated. -Peter

  • Where can I find the yum repos files for centos 4.7 i386?

    - by Peter Kahn
Does anyone know where I can find the i386 CentOS 4.7 repo files that would normally sit in /etc/yum.repos.d? Alas, it looks like someone copied the 5.5 edition over to a 4.7 system. I can set up a new VM, install 4.7 and extract the files from that system, but I was hoping for a faster approach. Please let me know if you know where these files live on the net. I'm off to RPMfind to see what I can locate. Thanks, Peter
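A sketch of what a working file might look like now, assuming the tree archived at vault.centos.org (the mirror choice and paths are my assumption, not from the post):

    # /etc/yum.repos.d/CentOS-Base.repo -- CentOS 4.7 is EOL, so this points
    # at the vault archive instead of the retired mirrorlist
    [base]
    name=CentOS-4.7 - Base
    baseurl=http://vault.centos.org/4.7/os/i386/
    gpgcheck=1

    [updates]
    name=CentOS-4.7 - Updates
    baseurl=http://vault.centos.org/4.7/updates/i386/
    gpgcheck=1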

  • Feedback from SQLBits 8

    - by Peter Larsson
This year's SQLBits took place in Brighton. Although I didn't have the opportunity to attend the full conference, I did a presentation on the Saturday. Getting to Brighton was easy: I drove to Copenhagen airport at 0415, flew at 0605 and arrived at Gatwick at 0735. Then I took the direct train to Brighton and showed up at 0830, just one hour before presenting. This was the easy part. Getting home was much worse. The presentation ended at 1030 and I had to rush to the train station to get back to London and change to the tube for Heathrow. I made it to the gate just 15 seconds before closing. That included a half-mile run in the airport...

Anyway, yesterday I got the feedback for my presentation, and it looks good, especially since English is not my first language.

The first graph puts me just halfway between the conference average and the best session. I can live with that. The second graph shows more detail about attendees' voting, and it also looks acceptable: a wider spread for the 9s, but that is an inevitable effect of how attendees perceive the session. I did get a lot of 8s, and the lower grades came in descending order. The two people voting 4 and 5 didn't say why, so I don't know how to remedy this. The third graph covers each category of votes. Again, I find this acceptable. "Session abstract" and "Speaker's knowledge" seem to follow attendee expectations compared to the conference average, and I seem to have met attendee expectations (and some more) in the other four categories, also compared to the conference average.

Since this did encourage me, I believe I will present some more at future meetings. I have a new presentation about something all developers do every day, though they may not know it. I will also cover this new topic in the next Deep Dives II book. Stay tuned! //Peter

  • Full-text indexing? You must read this

    - by Kyle Hatlestad
For those of you who may have missed it, Peter Flies, Principal Technical Support Engineer for WebCenter Content, gave an excellent webcast on database searching and indexing in WebCenter Content. It's available for replay along with a download of the slide deck; look for the one titled 'WebCenter Content: Database Searching and Indexing'.

One of the items he led with... and concluded with... was a recommendation to optimize your search collection if you are using full-text searching with the Oracle database. This can greatly improve your search performance, and it applies to both the OracleTextSearch and DATABASE.FULLTEXT search methods. Peter describes how a collection can become fragmented over time as content is added, updated, and deleted. Just as you should defragment your hard drive from time to time to get your files placed on the disk in the most optimal way, you should do the same for the search collection. And optimizing the collection is just a simple procedure call that can be scheduled to run automatically:

    begin
      ctx_ddl.optimize_index('FT_IDCTEXT1', 'FULL', parallel_degree => '1');
    end;

When I checked my own test instance, I found my collection had a row fragmentation of about 80%. After running the optimization procedure, it went down to 0%.

The knowledge base article "On Index Fragmentation and Optimization When Using OracleTextSearch or DATABASE.FULLTEXT" [ID 1087777.1] goes into detail on how to check your current index fragmentation, how to run the procedure, and how to schedule the procedure to run automatically. While the article mentions scheduling the job weekly, Peter says he now recommends it be run daily, especially on more active systems.

And just as a reminder, be sure to involve your DBA with your WebCenter Content implementation as you go to production and over time. We recently had a customer complain of slow performance of the application, and it was discovered that the database was starving for memory. So it's always helpful to keep a watchful eye on your database.
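Since the recommendation is to run the optimization daily, here is a minimal scheduling sketch, assuming DBMS_SCHEDULER is available and reusing the index name from the webcast (the job name and the 2 a.m. schedule are illustrative):

    begin
      dbms_scheduler.create_job(
        job_name        => 'OPTIMIZE_WCC_FULLTEXT',   -- illustrative name
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'begin ctx_ddl.optimize_index(''FT_IDCTEXT1'', ''FULL'', parallel_degree => ''1''); end;',
        start_date      => systimestamp,
        repeat_interval => 'FREQ=DAILY;BYHOUR=2',     -- run nightly at 2 a.m.
        enabled         => true);
    end;
    /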

  • Oracle ACEs in the House

    - by Justin Kestelyn
As is customary, the Oracle ACEs have invaded the Oracle Develop conference agenda. Why? Because Oracle ACE-dom is inherently a stamp not only of expertise, but of a unique ability to make that expertise useful to others. Plus, they're a group of "fine blokes" (U.K. subjects, educate me: is that really a word?). Perhaps if you're not able to catch one of these sessions, you will be able to see the applicable ACE in action elsewhere, at a conference or user group meeting near you.

S313355 - Developing Large Oracle Application Development Framework 11g Applications (Andrejus Baranovskis, Red Samurai Consulting)
S316641 - Xenogenetics for PL/SQL: Infusing with Java Best Practices and Design Patterns (Lucas Jellema, AMIS; Alex Nuijten, AMIS)
S317171 - Building Secure Multimedia Web Applications: Tips and Techniques (Marcel Kratochvil, Piction; Melliyal Annamalai, Oracle)
S315660 - Database Applications Lifecycle Management (Marcelo Ochoa, Facultad de Ciencias Exactas)
S315689 - Building a High-Performance, Low-Bandwidth Web Architecture (Paul Dorsey, Dulcian, Inc.)
S316003 - Managing the Earthquake: Surviving Major Database Architecture Changes (Paul Dorsey, Dulcian, Inc.; Michael Rosenblum, Dulcian, Inc.)
S314869 - Introduction to Java: PL/SQL Developers Take Heart (Peter Koletzke, Quovera)
S316184 - Deploying Applications to Oracle WebLogic Server Using Oracle JDeveloper (Peter Koletzke, Quovera; Duncan Mills, Oracle)
S316597 - Using Collections in Oracle Application Express: The Definitive Intro (Raj Mattamal, Niantic Systems, LLC)
S313382 - Using Oracle Database 11g Release 2 in an Oracle Application Express Environment (Roel Hartman, Logica)
S313757 - Debugging with Oracle Application Express and Oracle SQL Developer (Dimitri Gielis, Sumneva)
S313759 - Using Oracle Application Express in Big Projects with Many Developers (Dimitri Gielis, Sumneva)
S313982 - Forms2Future: The Ongoing Journey into the Future for Oracle-Based Organizations (Lucas Jellema, AMIS; Peter Ebell, AMIS)

  • links for 2010-05-06

    - by Bob Rhubart
Podcast: Collaborate 10 Wrap-Up - Conclusion #c10 - More Collaborate 2010 Las Vegas highlights and hijinks from this ten-member panel, including OAUG and ODTUG board members, members of the Oracle ACE program, and OAUG President Dave Ferguson. (tags: otn oracle collaborate2010)

Peter Scott: Realtime Data Warehouse Loading - Rittman Mead's Peter Scott looks at putting data into a data warehouse in real time. (tags: oracle datawarehousing businessintelligence)

Live Webcast: Social BPM - Integrating Enterprise 2.0 with Business Applications - May 12, 2010 at 11:00 a.m. PT - Business Process Management with integrated Enterprise 2.0 collaboration can improve business responsiveness and enhance overall enterprise productivity. Learn how to take your business to the next level with a unified solution that fosters process-based collaboration between employees, partners, and customers. (tags: oracle otn bpm enterprise2.0 webcast)

Management Pack for Identity Management Viewlet - A screencast produced by the Grid Control team showing the features of the Identity Management Pack for Grid Control 11g. Grid Control 11g now works with Oracle Virtual Directory 11g. (tags: oracle otn security identitymanagement)

@pevansgreenwood: Having too much SOA is a bad thing (and what we might do about it) - "The problem is usually too much flexibility, as flexibility creates complexity, and complexity exponentially increases the effort required to manage and deliver the software." - Peter Evans-Greenwood (tags: soa complexity flexibility)

@vampbenepe: Integration patterns for social data: the Open Social Data Bus - "The main point is about defining the right integration pattern for social data: is it a 'message bus' pattern or a 'shared database' pattern?" - William Vampbenepe (tags: oracle otn enterprise2.0 enterprisearchitecture)

  • runas without asking for a password

    - by Gregory MOUSSAT
On a Windows server which is in a domain, I have a script I run from Scheduled Tasks. I want this script to run under the mydomain\peter user account. That is simple to do with Scheduled Tasks if you know Peter's password; but once done, the script stops working when Peter decides to (or has to) change his password. On Linux, a cron job can run as any user account without having to know the corresponding password, and root can run anything on behalf of another user (with su and sudo). Is there any way to do this with Windows? My need is for an old Windows 2003 server, but I can manage to run it from another computer.
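One route worth sketching (my example, not from the question): schtasks can register a task without storing a password via the /NP switch; the task then runs with an S4U-style logon, which means it cannot reach network resources, and the /st syntax varies slightly across Windows versions. Task name and script path below are illustrative:

    schtasks /create /tn "NightlyScript" /tr "C:\scripts\nightly.cmd" ^
             /sc daily /st 02:00 /ru mydomain\peter /np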

  • ArchBeat Link-o-Rama for 2012-10-10

    - by Bob Rhubart
Oracle's Analytics, Engineered Systems, and Big Data Strategy | Mark Rittman - Part 1 of 3 in Oracle ACE Director Mark Rittman's series on Oracle Exalytics, Oracle R Enterprise and Endeca.

Series: How to Kill the Architecture Department? Part 1 | Xebia Blog - Don't let the title fool you. This is not an anti-architecture post. Rather, this post, part 1 of a now four-part series, offers suggestions for preserving architecture in a form that better supports agile organizations.

BPM Suite configure BAM Adapter | Peter Paul van der Beek - "To have the BPM server push events to BAM - Business Activity Monitoring - we have to configure the BPM Suite to use the BAM Adapter," says Peter Paul van der Beek. "The BAM Adapter is configured (like other SOA Suite and BPM Adapters) in the WebLogic Server Console." Peter Paul shows you how in this brief post.

A case for not installing your own software | James Gentsch - "I look selfishly forward to cloud computing and engineered systems dramatically reducing the occurrence of problems triggered by unforeseen environmental situations in the software I am responsible for," says James Gentsch. "I think this is an evolutionary game changer that will be a huge benefit to the reliability and consistent performance of the software for my customers, and may make 'well, it works here' a well-forgotten phrase for future software developers."

Thought for the Day: "I'm a strong believer in being minimalistic. Unless you actually are going to solve the general problem, don't try and put in place a framework for solving a specific one, because you don't know what that framework should look like." - Anders Hejlsberg (source: SoftwareQuotes.com)

  • System.Net.WebClient doesn't work with Windows Authentication

    - by Peter Hahndorf
I am trying to use System.Net.WebClient in a WinForms application to upload a file to an IIS 6 server which has Windows Authentication as its only authentication method.

    WebClient myWebClient = new WebClient();
    myWebClient.Credentials = new System.Net.NetworkCredential(@"boxname\peter", "mypassword");
    byte[] responseArray = myWebClient.UploadFile("http://localhost/upload.aspx", fileName);

I get 'The remote server returned an error: (401) Unauthorized'; actually it is a 401.2. Both the client and IIS are on the same Windows Server 2003 dev machine. When I try to open the page in Firefox and enter the same correct credentials as in the code, the page comes up. However, when using IE8 I get the same 401.2 error. I tried Chrome and Opera and they both work. I have 'Enable Integrated Windows Authentication' enabled in the IE Internet options. The Security event log has a failure audit:

    Logon Failure:
      Reason:                 An error occurred during logon
      User Name:              peter
      Domain:                 boxname
      Logon Type:             3
      Logon Process:          ÈùÄ
      Authentication Package: NTLM
      Workstation Name:       boxname
      Status code:            0xC000006D
      Substatus code:         0x0
      Caller User Name:       -
      Caller Domain:          -
      Caller Logon ID:        -
      Caller Process ID:      -
      Transited Services:     -
      Source Network Address: 127.0.0.1
      Source Port:            1476

I used Process Monitor and Fiddler to investigate, but to no avail. Why would this work for third-party browsers but not with IE or System.Net.WebClient?
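One thing worth trying (a sketch of mine, not from the post): register the credential against an explicit NTLM package via CredentialCache, since a bare NetworkCredential leaves package selection to the stack. Separately, when localhost requests fail with 401.2 while remote clients succeed, the NTLM loopback check described in Microsoft KB896861 is a commonly cited suspect. The file path below is illustrative:

    using System;
    using System.Net;

    class UploadTest
    {
        static void Main()
        {
            // Tie the credential to the URI prefix and an explicit "NTLM" package.
            var cache = new CredentialCache();
            cache.Add(new Uri("http://localhost/"), "NTLM",
                      new NetworkCredential("peter", "mypassword", "boxname"));

            var client = new WebClient { Credentials = cache };
            byte[] response = client.UploadFile("http://localhost/upload.aspx",
                                                @"C:\temp\somefile.bin");
            Console.WriteLine(System.Text.Encoding.ASCII.GetString(response));
        }
    }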

  • Running PowerShell from MSdeploy runcommand does not exit

    - by Peter Moberg
I am trying to get MSDeploy to execute a PowerShell script on a remote server. This is how I execute MSDeploy:

    msdeploy \
        -verb:sync \
        -source:runCommand='C:\temp\HelloWorld.bat', \
        waitInterval=15000,waitAttempts=1 \
        -dest:auto,computername=$WebDeployService$Credentials -verbose

HelloWorld.bat contains:

    echo "Hello world!"
    powershell.exe C:\temp\WebDeploy\Package\HelloWorld.ps1
    echo "Done"

HelloWorld.ps1 only contains:

    Write-Host "Hello world from PowerShell!"

However, it seems like PowerShell never terminates. This is the output from running msdeploy:

    Verbose: Performing synchronization pass #1.
    Verbose: Source runCommand (C:\temp\HelloWorld.bat) does not match destination (C:\temp\HelloWorld.bat) differing in attributes (isSource['True','False']). Update pending.
    Info: Updating runCommand (C:\temp\HelloWorld.bat).
    Info:
    Info: C:\temp>echo "Hello world!"
    "Hello world!"
    C:\temp\WebDeploy>powershell.exe C:\temp\HelloWorld.ps1
    Info: Hello world from Powershell!
    Info:
    Warning: The process 'C:\Windows\system32\cmd.exe' (command line '/c "C:\Users\peter\AppData\Local\Temp\gaskgh55.b2q.bat"') is still running. Waiting for 15000 ms (attempt 1 of 1).
    Error: The process 'C:\Windows\system32\cmd.exe' (command line '/c "C:\Users\peter\AppData\Local\Temp\gaskgh55.b2q.bat"') was terminated because it exceeded the wait time.
    Error count: 1.

Does anyone know a solution?
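A commonly suggested tweak to HelloWorld.bat, on the assumption that powershell.exe is hanging while waiting on a console stdin that the remote runCommand never closes; redirecting stdin from NUL and making the flags explicit usually lets it exit:

    echo "Hello world!"
    powershell.exe -NoProfile -NonInteractive -ExecutionPolicy Bypass ^
        -File C:\temp\WebDeploy\Package\HelloWorld.ps1 < NUL
    echo "Done"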

  • LaTeX printing only first two pages of a document

    - by Peter Flom
I am working in LaTeX, and when I create a PDF file (using the LaTeX button, the pdfLaTeX button, or yap) the PDF has only the first two pages. No errors; it just stops. If I make the first page longer by adding text, it still stops at the end of the 2nd page. Any ideas? OK, responding to the first comment, here is the code:

    \documentclass{article}

    \title{Outline of Book}
    \author{Peter L. Flom}

    \begin{document}

    \maketitle

    \section*{Preface}
    \subsection*{Audience}
    \subsection*{What makes this book different?}
    \subsection*{Necessary background}
    \subsection*{How to read this book}

    \section{Introduction}
    \subsection{The purpose of logistic regression}
    \subsection{The need for logistic regression}
    \subsection{Types of logistic regression}

    \section{General issues in logistic regression}
    \subsection{Transforming independent and dependent variables}
    \subsection{Interactions}
    \subsection{Model selection}
    \subsection{Parameter estimates, confidence intervals, p values}
    \subsection{Summary and further reading}

    \section{Dichotomous logistic regression}
    \subsection{Introduction, theory, examples}
    \subsection{Exploratory plots and analysis}
    \subsection{Basic model fitting}
    \subsection{Advanced and special issues in model fitting}
    \subsection{Diagnostic and descriptive plots and analysis}
    \subsection{Traps and gotchas}
    \subsection{Power analysis}
    \subsection{Summary and further reading}
    \subsection{Exercises}

    \section{Ordinal logistic regression}
    \subsection{Introduction, theory, examples}
    \subsubsection{Introduction - what are ordinal variables?}
    \subsubsection{Theory of the model}
    \subsubsection{Examples for this chapter}
    \subsection{Exploratory plots and analysis}
    \subsection{Basic model fitting}
    \subsection{Advanced and special issues in model fitting}
    \subsection{Diagnostic and descriptive plots and analysis}
    \subsection{Traps and gotchas}
    \subsection{Power analysis}
    \subsection{Summary and further reading}
    \subsection{Exercises}

    \section{Multinomial logistic regression}
    \subsection{Introduction, theory, examples}
    \subsection{Exploratory plots and analysis}
    \subsection{Basic model fitting}
    \subsection{Advanced and special issues in model fitting}
    \subsection{Diagnostic and descriptive plots and analysis}
    \subsection{Traps and gotchas}
    \subsection{Power analysis}
    \subsection{Summary and further reading}
    \subsection{Exercises}

    \section{Choosing a model}
    \subsection{NOIR and its problems}
    \subsection{Linear vs. ordinal}
    \subsection{Ordinal vs. multinomial}
    \subsection{Summary and further reading}
    \subsection{Exercises}

    \section{Extensions and related models}
    \subsection{Other logistic models}
    \subsection{Multilevel models - PROC NLMIXED and GLIMMIX}
    \subsection{Loglinear models - PROC CATMOD}

    \section{Summary}

    \end{document}

thanks Peter
