Search Results


  • Problem with wake after suspend using USB remote.

    - by Bod
    Hi, I'm a Linux newbie looking for some help. I'm currently setting up an XBMC HTPC using a laptop and 10.10, and everything works great except waking from suspend using the power button on the remote. Suspending from the remote works fine, as does resuming with the power button on the laptop. I've checked /proc/acpi/wakeup, which initially showed the following:

        Device  S-state  Status     Sysfs node
        C096    S5       *disabled  pci:0000:00:1e.0
        C0F1    S3       *disabled  pci:0000:00:1d.0
        C0F8    S3       *disabled  pci:0000:00:1d.1
        C0F9    S3       *disabled  pci:0000:00:1d.2
        C0FA    S3       *disabled  pci:0000:00:1d.3
        C0FB    S3       *disabled  pci:0000:00:1d.7
        C102    S5       *disabled  pci:0000:00:1c.0
        C22B    S5       *disabled  pci:0000:08:00.0
        C115    S5       *disabled  pci:0000:00:1c.2
        C22C    S5       *disabled
        C118    S5       *disabled  pci:0000:00:1c.3
        C22C    S5       *disabled

    I've since configured the above so that the S3 devices are enabled, and I've confirmed with lspci that they are the correct devices:

        00:1d.0 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 01)
        00:1d.1 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01)
        00:1d.2 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01)
        00:1d.3 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01)
        00:1d.7 USB Controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01)

    None of this has worked, unfortunately, and I'm now stuck: it simply refuses to wake up from the remote, and the USB receiver shows no activity LED while suspended. Suspend/resume from the remote works fine under Windows 7, so I know the laptop is OK with it. Any ideas? I need to get this sorted to gain Wife Approval for this system. Thanks, Bod.
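    For anyone scripting the same dance, here is a minimal sketch (untested; the ACPI device names come from the listing above, and the /sys path is a placeholder that depends on which port the receiver uses). Writing a device name to /proc/acpi/wakeup toggles its state, and the USB receiver's own power/wakeup flag under /sys usually has to be enabled as well:

        # Hedged sketch, needs root. Device names taken from the listing above.
        ACPI_DEVICES = ['C0F1', 'C0F8', 'C0F9', 'C0FA', 'C0FB']

        def enable_acpi_wakeup(name):
            # Writing a name to /proc/acpi/wakeup *toggles* it, so check first.
            with open('/proc/acpi/wakeup') as f:
                lines = f.read().splitlines()
            for line in lines:
                if line.startswith(name) and '*disabled' in line:
                    with open('/proc/acpi/wakeup', 'w') as f:
                        f.write(name)

        for dev in ACPI_DEVICES:
            enable_acpi_wakeup(dev)

        # The receiver must also be allowed to wake the host; '1-1' is a
        # placeholder for whichever port the receiver is plugged into.
        with open('/sys/bus/usb/devices/1-1/power/wakeup', 'w') as f:
            f.write('enabled')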


  • How much configurability to give to users regarding concurrency?

    - by rwong
    This question is a narrowing-down of these related questions: "How much effort should we spend on programming for multiple cores?" and "Concurrency: how do you approach the design and debug the implementation?"

    Given that each user's computer may have different performance characteristics with respect to calculation, memory, disk I/O bandwidth and network I/O bandwidth, and that it is difficult to implement an automated self-tuning system in your software, how much configurability should we give end users so that they can find ways (by trial and error?) to improve our software's efficiency? And if we do give users the ability to change these settings, how do we give them visual feedback so they can measure the performance changes?
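    As a concrete illustration of the kind of knob being discussed, here is a minimal Python sketch (the --workers flag and the toy workload are invented for the example): one exposed setting with an automatic default, plus the simplest possible feedback, elapsed wall-clock time:

        import argparse
        import multiprocessing as mp
        import os
        import time

        def crunch(n):
            # Stand-in for a real per-item computation.
            return sum(i * i for i in range(n))

        def main():
            parser = argparse.ArgumentParser()
            # One knob, defaulting to "let the machine decide".
            parser.add_argument('--workers', type=int,
                                default=os.cpu_count() or 1,
                                help='worker processes (default: CPU count)')
            args = parser.parse_args()

            jobs = [200000] * 32
            start = time.time()
            with mp.Pool(args.workers) as pool:
                pool.map(crunch, jobs)
            # Report the effect, so trial-and-error tuning is measurable.
            print('%d workers: %.2fs' % (args.workers, time.time() - start))

        if __name__ == '__main__':
            main()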


  • Using versioning for settings in home?

    - by maaartinus
    I planned to use git for the important files in my home directory, so I can revert bad settings or transfer them to another computer as needed. But there's too much chaos there, with each program wildly mixing temporary files, caches, logs, backups and everything else. Finding anything worth saving is hard, and the settings I did make myself turned out to be mixed with information specific to the computer (so I could hardly take them to another one) and with timestamps (so tracking useful changes is hard). Is anybody doing this, or is it just hopeless? How do I filter out the garbage?
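    One common way to tame the chaos (a sketch, not a complete recipe; the file names below are only examples) is to invert the problem: ignore everything in $HOME by default and whitelist only the files worth tracking, with a .gitignore along these lines:

        # Ignore everything at the top level...
        /*
        # ...except an explicit whitelist.
        !/.gitignore
        !/.bashrc
        !/.vimrc
        # To track part of a directory: un-ignore the directory,
        # re-ignore its contents, then whitelist specific subpaths.
        !/.config
        /.config/*
        !/.config/someapp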


  • How can I refactor a code base while others rapidly commit to it?

    - by Incognito
    I'm on a private project that will eventually become open source. We have a few team members, talented enough with the technologies to build apps, but not dedicated developers who can write clean/beautiful and, most importantly, long-term maintainable code. I've set out to refactor the code base, but it's a bit unwieldy, because a team member in another country whom I'm not in regular contact with could be updating this totally separate thing at the same time. I know one solution is to communicate rapidly or adopt better PM practices, but we're just not that big yet. I just want to clean up the code and merge nicely into what he has updated. Would a branch be a suitable plan? A best-effort merge? Something else?


  • Best practices for launching a new software version

    - by steve
    I rebuilt a web app to replace a version that we have been using for the last 3-4 years. We have a few thousand clients and a few hundred active users per day. The functionality is basically the same: the new version is a little faster, has a few enhancements, and contains a lot of behind-the-scenes changes that the clients will never see. The UI is quite different, but ultimately much easier to use and navigate. How should I go about having our clients stop using the old system and start using the new one? I am currently putting together a video that will play on the web site as well as within the app; it will go through the pages and focus on some key changes. I was also thinking about an intro page that displays once the user logs in and explains some of the features.


  • Handling bugs, quirks, or annoyances in vendor-supplied headers

    - by supercat
    If the header file supplied by the vendor of something one's code must interact with is deficient in some way, in what cases is it better to:

    - work around the header's deficiencies in the main code;
    - copy the header file to the local project and fix it;
    - fix the header file in the spot where it's stored as a vendor-supplied tool;
    - fix the header file in the central spot, but also make a local copy and try to always keep the two matching; or
    - do something else?

    As an example, the header file supplied by ST Micro for the STM320LF series contains the lines:

        typedef struct
        {
          __IO uint32_t MODER;
          __IO uint16_t OTYPER;
          uint16_t RESERVED0;
          ....
          __IO uint16_t BSRRL;  /* BSRR register is split to 2 * 16-bit fields BSRRL */
          __IO uint16_t BSRRH;  /* BSRR register is split to 2 * 16-bit fields BSRRH */
          ....
        } GPIO_TypeDef;

    In the hardware, and in the hardware documentation, BSRR is described as a single 32-bit register. About 98% of the time one wants to write to BSRR, one is only interested in writing the upper half or the lower half; it is thus convenient to be able to use BSRRH and BSRRL as a means of writing half the register. On the other hand, there are occasions when it is necessary that the entire 32-bit register be written as a single atomic operation. The "optimal" way to write it (setting aside white-spacing issues) would be:

        typedef struct
        {
          __IO uint32_t MODER;
          __IO uint16_t OTYPER;
          uint16_t RESERVED0;
          ....
          union  // Allow BSRR access as a 32-bit register or two 16-bit registers
          {
            __IO uint32_t BSRR;  // 32-bit BSRR register as a whole
            struct
            {
              __IO uint16_t BSRRL, BSRRH;  // Two 16-bit parts
            };
          };
          ....
        } GPIO_TypeDef;

    If the struct were defined that way, code could use BSRR when necessary to write all 32 bits, or BSRRH/BSRRL when writing 16 bits. Given that the header isn't that way, would better practice be to use the header as-is, but in the main code apply an icky typecast, writing what would idiomatically be

        thePort->BSRR = 0x12345678;

    as

        *((uint32_t *)&(thePort->BSRRL)) = 0x12345678;

    or would it be better to use a patched header file? If the latter, where should the patched file be stored and how should it be managed?


  • Where did the notion of "one return only" come from?

    - by FredOverflow
    I often talk to Java programmers who say "Don't put multiple return statements in the same method." When I ask them to tell me the reasons why, all I get is "The coding standard says so." or "It's confusing." When they show me solutions with a single return statement, the code looks uglier to me. For example:

        if (blablabla)
            return 42;
        else
            return 97;

    "This is ugly, you have to use a local variable!"

        int result;
        if (blablabla)
            result = 42;
        else
            result = 97;
        return result;

    How does this 50% code bloat make the program any easier to understand? Personally, I find it harder, because the state space has just increased by another variable that could easily have been prevented. Of course, normally I would just write:

        return (blablabla) ? 42 : 97;

    But the conditional operator gets even less love among Java programmers. "It's incomprehensible!" Where did this notion of "one return only" come from, and why do people adhere to it rigidly?


  • Restrictive routing best practices for Google App Engine with python?

    - by Aleksandr Makov
    Say I have a simple structure:

        app = webapp2.WSGIApplication([
            (r'/', 'pages.login'),
            (r'/profile', 'pages.profile'),
            (r'/dashboard', 'pages.dash'),
        ], debug=True)

    Basically, all pages require authentication except for the login. If a visitor tries to reach a restricted page and isn't authorized (or lacks privileges), he gets redirected to the login view. The question is about the routing design. Should I check auth and ACL privileges in each of the modules (pages.profile and pages.dash in the example above), or just pass all requests through a single routing mechanism:

        app = webapp2.WSGIApplication([
            (r'/', 'pages.login'),
            (r'/.+', 'router')
        ], debug=True)

    I'm still quite new to GAE, but my app requires authentication as well as ACL. I'm aware that there's a login directive at the server config level, but I don't know how it works or how I can tie it in with my ACL logic, and, what's worse, I cannot estimate the time needed to get it running. Besides, it appears to provide only two user groups: admin and user. In any case, this is the configuration I use:

        handlers:
        - url: /favicon.ico
          static_files: static/favicon.ico
          upload: static/favicon.ico
        - url: /static/*
          static_dir: static
        - url: .*
          script: main.app
          secure: always

    Or am I missing something here, and ACL can be set in the config file? Thanks.
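    Just as a sketch of a third option (current_user and the handler names here are invented; the session lookup is a stand-in for a real auth/ACL check): keep one route per page, but centralize the check in a shared base handler, so every restricted page inherits the logic instead of repeating it or funneling everything through one router:

        import webapp2

        def current_user(request):
            # Hypothetical session lookup; replace with your real auth/ACL check.
            return request.cookies.get('session_user')

        class RestrictedHandler(webapp2.RequestHandler):
            # Base class for every page that requires a logged-in user.
            def dispatch(self):
                if current_user(self.request) is None:
                    return self.redirect('/')  # back to the login view
                return super(RestrictedHandler, self).dispatch()

        class DashboardPage(RestrictedHandler):
            def get(self):
                self.response.write('dashboard')

        app = webapp2.WSGIApplication([
            (r'/dashboard', DashboardPage),
        ], debug=True)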


  • How to reset Wine's settings without uninstalling all the applications in it?

    - by cipricus
    The Foobar2000 volume slider stopped working in Wine: sound is fine, but the volume cannot be changed from the player's slider anymore. Is there a setting in Wine that might have caused this? I have also tested Vineyard and then gave it up, on which occasion some setting in Wine might have been altered, but I cannot see which. Edit: This affects the main installation (v1.1.15) made in Wine, and also portable installations of the same version (as well as portable installations of v1.1.14 and v1.1.17b that I tested), but it does not affect older versions like 1.0.3. After testing more versions, it seems that the newest version without this problem is 1.1 (that is, the version before the one that changed the classic white-on-black Foobar2000 icon to the new white one).


  • Need help connecting to a Bitbucket repository with SourceTree on Windows 8

    - by pythonian29033
    I'm having trouble adding and cloning my repo on Bitbucket in the SourceTree app. We're only starting with this now, and we're a small company, so there's not much knowledge around this. I've gone through the documentation on SourceTree for help, but I've noticed that when I select my repo on Bitbucket, it takes the repo URL I selected and appends .git at the end. Then a notice message says "This is not a valid source path / URL", but when I click "Details..." I get a dialog box with nothing in it and an OK button, and when I'm done entering the details the Clone button remains disabled. Is this Windows 8, or am I actually doing something wrong? I usually use Ubuntu, but we just got these new ASUS ultrabooks at work and it's a pain to install any Linux distro on here, so I'm stuck with Windows 8.


  • Should I manage authentication on my own if the alternative is very low in usability and I am already managing roles?

    - by rumtscho
    As a small in-house dev department, we only have experience with developing applications for our intranet. We use the existing Active Directory for user account management; it contains the accounts of all company employees and many (but not all) of the business partners we cooperate with.

    Now top management wants a technology exchange application, and I am the lead dev on the new project. Basically, it is a database containing our know-how, with a web frontend. Our employees, our cooperating business partners, and people who wish to become cooperating business partners should have access to it and see what technologies we have, so they can trade for them with the department that owns them. The technologies are not patented, but very valuable to competitors, so the department bosses are paranoid about somebody unauthorized gaining access to their technology descriptions. This constraint necessitates a nightmarishly complicated multi-dimensional RBAC-hybrid model. As the Active Directory doesn't even contain all the information needed to infer the roles I use, I will have to manage roles plus per-technology, per-user access exceptions within my system.

    The current plan is to use Active Directory for authentication. This will result in a multi-hour registration process for our business partners, where the database owner has to manually create logins in our Active Directory and send them credentials. If I manage the logins in my own system, we could improve usability a lot, for example by letting people have an active (but unprivileged) account as soon as they register. It seems to me that, since I will have a users table in the DB anyway (and will be managing ugly details like storing historical user IDs so that recycled user IDs within the Active Directory don't unexpectedly get rights to view someone's technologies), the additional complexity of implementing authentication functionality will be minimal. Therefore, I am starting to lean towards doing my own user login management and forgetting the AD altogether.

    On the other hand, I see some reasons to stay with Active Directory. First, the conventional wisdom I have heard from experienced programmers is not to do your own user management if you can avoid it. Second, we have code I can reuse for connecting to the Active Directory, while I would have to code the authentication myself if it is done in-system (and my boss has clearly stated that getting the project delivered on time has much higher priority than delivering a system with high usability). Third, I am not a very experienced developer (this is my first lead position) and have never done user management before, so I am afraid that I am overlooking important reasons to use the AD, or that I am underestimating the amount of work left to do my own authentication.

    I would like to know if there are more reasons to go with the AD authentication mechanism. Specifically, if I want to do my own authentication, what would I have to implement besides a secure connection for the login screen (which I would need anyway, even if I am only transporting the password to the AD), lookup of a password hash, and a mechanism for password recovery (which will probably include manual identity verification, so no need for complex mTAN-like solutions)? And if you have experience with such security-critical systems, which one would you use and why?
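    On the "what would I have to implement" question, the password-storage part at least is small. A minimal sketch of the usual salted, deliberately slow hashing scheme (the iteration count and salt size here are illustrative, not a production recommendation):

        import hashlib
        import hmac
        import os

        ITERATIONS = 100000  # deliberately slow; tune for your hardware

        def hash_password(password):
            # Returns (salt, digest); store both in the users table.
            salt = os.urandom(16)
            digest = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'),
                                         salt, ITERATIONS)
            return salt, digest

        def verify_password(password, salt, stored_digest):
            candidate = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'),
                                            salt, ITERATIONS)
            # Constant-time comparison avoids leaking how many bytes matched.
            return hmac.compare_digest(candidate, stored_digest)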


  • What's the best way to explain branching (of source code) to a client?

    - by Jon Hopkins
    The situation is that a client requested a number of changes about 9 months ago, which they then put on hold with them half done. They've now requested more changes, without having made up their mind to proceed with the first set. The two sets of changes will require alterations to the same code modules. I've been tasked with explaining why their not making a decision about the first set of changes (either finish them or bin them) may incur additional costs: essentially, the new changes would need to be made on a branch, and if the client then proceeds with the first set we'd have to merge them into the trunk (which will be messy) and retest them. The question I have is this: how best to explain branching of code to a non-technical client?


  • Develop in trunk and then branch off, or in release branch and then merge back?

    - by Torben Gundtofte-Bruun
    Say that we've decided on following a "release-based" branching strategy, so we'll have a branch for each release, and we can add maintenance updates as sub-branches from those. Does it matter whether we:

    - develop and stabilize a new release in the trunk and then "save" that state in a new release branch; or
    - first create the release branch and only merge into the trunk once the branch is stable?

    I find the former easier to deal with (less merging necessary), especially when we don't develop on multiple upcoming releases at the same time. Under normal circumstances we would all be working on the trunk, and we'd only work on release branches if there are bugs to fix. What is the trunk actually used for in the latter approach? It seems almost obsolete, because I could create a future release branch based on the most recent release branch rather than on the trunk.

    Details based on a comment below: our product consists of a base platform and a number of modules on top; each is developed and even distributed separately from the others. Most team members work on several of these areas, so there's partial overlap between people. We generally work on only one future release and not at all on existing releases; one or two people might work on a bugfix for an existing release for short periods of time. Our work isn't compiled, and it's a mix of Unix shell scripts, XML configuration files, SQL packages, and more, so there's no way to have push-button builds that can be tested; that's done manually, which is a bit laborious. A release cycle is typically half a year or more for the base platform, and often one month for the modules.


  • Advanced subversion techniques, what am I missing?

    - by Derek Adair
    I started using SVN about 9 months ago and it's been a game changer, to say the least, although I feel I'm still a bit lost. There seems to be a lot more I need to take advantage of to really step up my application development. For example, I would like to be able to quarantine any volatile/major changes into some kind of 'sub-repository' or something; I'm finding that major changes are impeding minor bug fixes that are quite urgent. How can I push one simple update without pushing incomplete or broken code?


  • What is a good way to test demand for a new game platform?

    - by user15256
    I'm working on a game platform that turns your iPhone, Android device or iPad into a steering wheel for racing games (like Need for Speed and DiRT 3) and flight simulators, for example. I'd love to find smart ways to figure out whether gamers would like something like this. I originally asked this question over on the gaming SE, and it was for getflypad.com. A lot of the tech is built and most of it is doable; the question here is how to test demand and know whether gamers actually want this.


  • Revamp application

    - by Rauf
    I am a software developer with 3 years of experience. I always want to play with the latest technologies, but this is not practical. Say I developed a web application in .NET 3.5 and it's now 30% done; after the release of .NET 4.0, my mind keeps going to .NET 4.0. I think: there are a lot of features in the new version, so why shouldn't I use them in my application? Yet when I worked with IT companies, most of them coded against very old versions; some used VB.NET, C, even Classic ASP. So what points should I consider if I revamp an application?


  • TFS SQL Deployment Data Script

    - by Greg
    We are using TFS and SQL 2005 (looking to upgrade to SQL 2012 if that makes a difference). We store our database schema in a Visual Studio Database project (VS 2010). When code is released to live, we currently use the Visual Studio Database project to build a script of all our schema changes. The problem we keep running into is having to alter or add to that script to add or fix data for the deployment. For example, if we add a new non-nullable column to an existing table, we need to populate that column with data during the deployment. Other times we may want to create new records in transactional tables (e.g. assign specific users to a new security access). Do Visual Studio Database projects have a way to store these scripts that only need to be run once, and somehow include them in the build? Does it know which scripts have already been run (for example, if we are inserting default data, we don't want to insert it a second time)? Or is there a better way to manage these scripts?


  • How do I check that my tests were not removed by other developers?

    - by parxier
    I've just come across an interesting collaborative coding issue at work. I wrote some unit/functional/integration tests and implemented new functionality in an application that has ~20 developers working on it. All tests passed and I checked in the code. The next day I updated my project and noticed (by chance) that some of my test methods had been deleted by other developers (merging problems on their end). The new application code was not touched. How can I detect such a problem automatically? I mean, I write tests to automatically check that my code still works (or was not deleted); how do I do the same for the tests themselves? We're using Java, JUnit, Selenium, SVN and Hudson CI, if it matters.
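    One low-tech guard, sketched here in Python (the baseline file name and source layout are invented for the example), is to have CI count the tests and fail the build when the count drops unexpectedly, e.g. by scanning for @Test annotations and comparing against a committed baseline:

        import os
        import re
        import sys

        TEST_RE = re.compile(r'@Test\b')

        def count_tests(root):
            # Walk the test tree and count @Test annotations in .java files.
            total = 0
            for dirpath, _, files in os.walk(root):
                for name in files:
                    if name.endswith('.java'):
                        with open(os.path.join(dirpath, name)) as f:
                            total += len(TEST_RE.findall(f.read()))
            return total

        if __name__ == '__main__':
            baseline = int(open('test_count.baseline').read())
            current = count_tests('src/test/java')
            if current < baseline:
                sys.exit('Test count dropped: %d -> %d' % (baseline, current))
            print('OK: %d tests (baseline %d)' % (current, baseline))

    A Hudson job can run this after checkout and refuse to go green when tests silently disappear; the baseline gets bumped in the same commit that intentionally removes or adds tests.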


  • Is it bad practice for a module to contain more information than it needs?

    - by gekod
    I just wanted to ask for your opinion on a situation that occurs sometimes and for which I don't know the most elegant solution. Here it goes: module A reads an entry from a database and sends a request to module B containing ONLY the information from the entry that module B needs to accomplish its job (to keep things modular, I give it just the information it needs; module B has nothing to do with the rest of the information in the DB entry). After finishing its job, module B has to report to module C whether it succeeded or failed. To do this, module B replies with the information it got from module A plus a flag meaning success or failure. Now here comes the problem: module C needs to find that entry again, BUT the information it got from module B is not enough to uniquely identify the exact same entry. I don't think module A should give module B extra information that B doesn't need for its job but could hand back to module C, because that would mean giving a module information it doesn't really need. What do you think?
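    For what it's worth, one standard way out (a hypothetical sketch; all the names are made up) is to treat the entry's key as an opaque correlation token rather than as "information": module B never interprets it, it only carries it along, so modularity is preserved while C can still find the entry uniquely:

        # Minimal runnable sketch; a dict stands in for the database.
        DB = {17: {'payload': 'data B needs', 'other': 'fields B never sees'}}

        def module_a():
            entry_id = 17
            module_b(token=entry_id, payload=DB[entry_id]['payload'])

        def module_b(token, payload):
            success = payload.startswith('data')    # stand-in for real work
            module_c(token=token, success=success)  # token passed through untouched

        def module_c(token, success):
            entry = DB[token]  # the unique lookup works again
            print('entry %s -> success=%s' % (token, success))

        module_a()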


  • Structure of a Git repository

    - by Luke Puplett
    Sorry if this is a duplicate, I looked. We're moving to Git. In Subversion, I'm used to having \trunk, \branches and \tags folders. With Git, switching between branches will replace the contents of the working directory, so am I right to assume that the way we used to work just doesn't apply with Git? My guess is that I'd have a repo folder with maybe a gitignore and readme.txt, then the folders for the projects that make up the repo, and that's it.


  • Why not commit unresolved changes?

    - by Explosion Pills
    In a traditional VCS, I can understand why you would not commit unresolved files, because you could break the build. However, I don't understand why you shouldn't commit unresolved files in a DVCS (some of them will actually prevent you from committing the files). Instead, I think that your repository should be locked from pushing and pulling, but not from committing. Being able to commit during the merging process has several advantages (as I see it):

    - The actual merge changes are in history.
    - If the merge was very large, you could make periodic commits.
    - If you made a mistake, it would be much easier to roll back (without having to redo the entire merge).
    - The files could remain flagged as unresolved until they were marked as resolved; this would prevent pushing/pulling.
    - You could also potentially have a set of changesets act as the merge instead of just a single one, which would allow you to still use tools such as git rerere.

    So why is committing with unresolved files frowned upon/prevented? Is there any reason other than tradition?


  • Best practices in versioning

    - by Gerenuk
    I develop some scripts for data analysis in a small team. For the moment we use SVN, but not in a very structured way; we haven't even looked at how to use branches, even though we need that functionality. What do you suggest as the best practice for setting up the following system:

    - two code bases (core and plugins);
    - versions that can be incompatible with previous scripts;
    - individual features that are sometimes still in development while other fixes have to be made to the code urgently.

    In the end we don't deliver the code as a package, but rather place the Python scripts in some directory (with version names?). Some other Python script, which serves as a configuration, chooses the desired version, sets the path to these libraries and then starts to import the modules. I saw stable releases named "trunk", so I did the same; however, there are no version numbers yet. Core and plugins are in different repositories, but we have to match versions for compatibility. Can you suggest some best practices or a reference to ease development and reduce the chaos? :) Some have suggested Git; I hadn't heard of it before, but I'm free to change.


  • Linking application build number to svn revision

    - by ahenderson
    I am looking for a strategy to version an application with the following requirements. Given an exe with a version number (major.minor.build-number):

    1) I want to map the version to the SVN source revision that produced the exe.
    2) With the source and the exe, I should be able to attach and debug in VS2010 with no issues.
    3) Once I check out the source code for the exe, I should be able to build it again with the same version number without having to change any file.
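    One common approach, sketched below as a Python build step (the file name version.h and the major/minor values are assumptions for the example), is to make the build number the SVN revision itself and generate it into a header at build time, so requirements 1 and 3 fall out automatically:

        import subprocess

        MAJOR, MINOR = 1, 4  # example values

        def svn_revision(path='.'):
            # svnversion prints e.g. "4168", or "4123:4168M" for mixed or
            # modified working copies; refuse to stamp those builds.
            out = subprocess.check_output(['svnversion', path]).decode().strip()
            if not out.isdigit():
                raise RuntimeError('working copy is mixed/modified: ' + out)
            return int(out)

        def write_version_header(path='version.h'):
            rev = svn_revision()
            with open(path, 'w') as f:
                f.write('#define APP_VERSION "%d.%d.%d"\n' % (MAJOR, MINOR, rev))

        if __name__ == '__main__':
            write_version_header()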


  • Combine auto-syncing cloud and VCS

    - by ComFreek
    This question brought me to another question: is there any VCS, or tool for a VCS, that automatically backs up your source code between the last checkout and the current changes? I lost uncommitted source code changes just one week ago. I did not want to commit yet, because the changes were incomplete, but then an error while moving the data to a USB stick caused the data loss. That's the opposite of what a cloud service (like Google Drive, SkyDrive, Dropbox, ...) does: it tracks every change you make! Have you lost your data? No problem, because you have the latest version online. So what would a combined solution look like? It would offer the full functionality of a VCS, including auto-syncing of any intermediate changes between two commits/checkouts to a temporary online location.


  • What is the term for a really BIG source code commit?

    - by Ida
    Sometimes when we check the commit history of a piece of software, we see a few commits that are really BIG: they may change 10 or 20 files with hundreds of changed source code lines (delta). I remember that there is a commonly used term for such a BIG commit, but I can't recall exactly what that term is. Can anyone help me? What is the term that programmers usually use to refer to such a big, giant commit? BTW, is committing a lot of changes all together a good practice? UPDATE: thank you guys for the inspiring discussion! But I think "code bomb" is the term that I'm looking for.

