Search Results

Search found 23427 results on 938 pages for 'christopher done'.


  • Problem Installing Xubuntu 12.04 Audio-Video codecs

    - by Seib
    I used Crouton to install Xubuntu 12.04.4 LTS with the XFCE environment on my HP Chromebook 11. The installation itself is complete; all I'm doing now is the basic setup, like adding codecs and installing LibreOffice, VLC, Firefox, Ubuntu Software Center, etc. I'm following two guides: http://www.efytimes.com/e1/fullnews.asp?edid=137269 and http://www.binarytides.com/better-xubuntu-14-04/ . I'm currently on the same step in both, which is #6 in the first link and #8 in the second. Per the articles, which both say the same thing, I typed
        sudo apt-get install xubuntu-restricted-extras libavcodec-extra
    and it didn't do anything. It just kept saying the same thing:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package libavcodec-extra
    I've spent the last hour or so scouring the internet for a solution, or even a hint at what is going on. I don't want to do a clean reinstall, for two reasons: 1) it takes about 1.5 hours just to get the chroot fully installed, and 2) everything I've done so far (up to #6 in link 1 and #8 in link 2) works except the audio. I've already installed Flash, so YouTube works fine; it's just that I can't hear any audio. Please help? Thanks in advance. I appreciate all the great help I've been getting from Ask Ubuntu lately. You all are great.
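    A minimal diagnostic sketch, assuming (this is my assumption, not something the question establishes) that the universe/multiverse components simply aren't enabled inside the Crouton chroot, so apt cannot see libavcodec-extra at all; the commands only inspect what apt currently knows about:
        # Hypothetical first check: are universe/multiverse enabled in the chroot,
        # and does apt know about the codec packages at all?
        grep -E '^deb .*(universe|multiverse)' /etc/apt/sources.list
        sudo apt-get update
        apt-cache policy libavcodec-extra xubuntu-restricted-extras
        apt-cache search libavcodec | head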

    Read the article

  • Broken package for libavcodec54 & libx264-123 in Ubuntu 14.04 LTS

    - by Kachavarapu Ajay
    $ sudo apt-get install -f
    [sudo] password for ajay:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... Done
    The following extra packages will be installed:
      libx264-123
    The following NEW packages will be installed:
      libx264-123
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    2 not fully installed or removed.
    Need to get 0 B/345 kB of archives.
    After this operation, 1,005 kB of additional disk space will be used.
    Do you want to continue? [Y/n] y
    (Reading database ... 166965 files and directories currently installed.)
    Preparing to unpack .../libx264-123_0.123.2189+git35cf912-1ubuntu4_amd64.deb ...
    Unpacking libx264-123:amd64 (2:0.123.2189+git35cf912-1ubuntu4) ...
    dpkg-deb (subprocess): decompressing archive member: lzma error: compressed data is corrupt
    dpkg-deb: error: subprocess <decompress> returned error exit status 2
    dpkg: error processing archive /var/cache/apt/archives/libx264-123_0.123.2189+git35cf912-1ubuntu4_amd64.deb (--unpack):
     cannot copy extracted data for './usr/lib/x86_64-linux-gnu/libx264.so.123' to '/usr/lib/x86_64-linux-gnu/libx264.so.123.dpkg-new': unexpected end of file or stream
    Errors were encountered while processing:
     /var/cache/apt/archives/libx264-123_0.123.2189+git35cf912-1ubuntu4_amd64.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
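    The lzma "compressed data is corrupt" message usually points at a damaged download sitting in the apt cache rather than at a real dependency problem. A hedged recovery sketch (an assumption, not part of the question itself): throw away the cached archive and let apt fetch it again.
        # Hypothetical recovery: remove the possibly corrupt cached archive,
        # then re-download it and retry the interrupted installation.
        sudo rm -f /var/cache/apt/archives/libx264-123_*.deb
        sudo apt-get clean
        sudo apt-get update
        sudo apt-get install -f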

    Read the article

  • Who spotted the omission?

    - by olaf.heimburger
    In my entry OFM 11g: Install OAM 10.1.4.3 (32-bit) on 64-bit RedHat AS 5 I explained how to install OAM 10.1.4.3 (32-bit) on 64-bit RedHat. This is great and works. If you seriously want to use OAM 10.1.4.3, you should consider OHS 11g 32-bit. But this installation is a bit tricky. Nearly all the tricks to get it done are described in the entry mentioned above. Today I realized that I had missed a small bit needed to get the installation done successfully. The missing part is in the script that creates a vital piece of the OHS 11g package. This script is called genclientsh and resides in $OHS_HOME/bin. It uses gcc to link binaries. By default the script works fine, but on 64-bit Linux it fails. To get around this, find the variable LD and change its value from gcc to gcc -m32. Done.
    Caveat: On support.oracle.com you will find a Note that suggests building a small shell script named gcc that includes the -m32 switch. I actually consider this dangerous, because we are human and tend to forget things quickly. Building a globally available script that changes things for a single setup has side effects and will lead to unpredictable results.
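    A minimal sketch of that edit, assuming genclientsh sets the linker with a plain LD=gcc assignment (check the file first; the exact syntax may differ between versions):
        # Hypothetical sketch: back up genclientsh, confirm how LD is set,
        # then force 32-bit linking without touching the system gcc.
        cd "$OHS_HOME/bin"
        cp genclientsh genclientsh.orig
        grep -n '^LD=' genclientsh
        sed -i 's/^LD=gcc$/LD="gcc -m32"/' genclientsh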

    Read the article

  • Grouping comma-separated values on common data [closed]

    - by Ankit
    I have a table with col1 (an int id), col2 (a varchar of comma-separated values) and a third column for assigning a group to each row. The table looks like:
        col1   col2    group
        ---------------------
        1      2,3,4
        2      5,6
        3      1,2,5
        4      7,8
        5      11,3
        6      22,8
    This is only a sample of the real data. I now have to assign a group number to each row so that the output looks like:
        col1   col2    group
        ---------------------
        1      2,3,4   1
        2      5,6     1
        3      1,2,5   1
        4      7,8     2
        5      11,3    1
        6      22,8    2
    The logic for assigning the group number is that rows whose comma-separated values in col2 share any value must get the same group number: everywhere '2' appears in col2, those rows get the same group. The complication is that 2,3,4 appear together, so any of those three values found anywhere in col2 must be assigned the same group. The major part: 2,3,4 and 1,2,5 both contain 2, so all of 1,2,3,4,5 must get the same group number. I tried a stored procedure with MATCH ... AGAINST on col2 but am not getting the desired result. Most importantly, I can't use normalization, because I can't afford to build a new table from my original table, which has millions of records; normalization isn't even helpful in my context. This question is also on Stack Overflow with a bounty. Achieved so far: I have set the group column to auto increment and then wrote this procedure:
        BEGIN
            DECLARE col1_new INT;
            DECLARE col2_new VARCHAR(100);
            DECLARE group_new INT;
            DECLARE done TINYINT DEFAULT 0;
            DECLARE cur1 CURSOR FOR SELECT col1, col2, `group` FROM company;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
            OPEN cur1;
            REPEAT
                FETCH cur1 INTO col1_new, col2_new, group_new;
                UPDATE company
                   SET `group` = group_new
                 WHERE MATCH(col2) AGAINST(CONCAT("'", col2_new, "'"));
            UNTIL done END REPEAT;
            CLOSE cur1;
            SELECT * FROM company;
        END
    The procedure runs without syntax errors, but I am not getting exactly the desired result.

    Read the article

  • Contract Work - Lessons Learned

    - by samerpaul
    I thought I would write a post of a different nature today, but still relevant to the tech world. I do a lot of contract jobs myself and really enjoy it. It's nice to keep jumping from project to project, and not having to go to an office or keep regular hours. I have learned a lot in the past few years of doing it (both from experience and from help given to me by others and the internet), so I thought I'd share some of that knowledge and experience today. So here are my own personal "lessons learned" that will hopefully help you if you find yourself doing contract work.

    Should I take the job?
    OK, so this is the first step. Assuming you were given sufficient information about what they want, you should really think about what you're capable of doing and whether or not you should take this job. Personally, my rule is: if I know it's possible, I'll say yes, even if I don't yet know how to do it. That's because the internet is such a great help that it would be rare to run into an issue you can't figure out with some help. So if your clients are asking for something that you don't yet know how to program, but you know you can do it on the platform, then go for it. How else are you going to learn?
    Use this rule with some limitation, however. If you're really lacking the expertise or foundation in something, then unless you have tons of time to complete the project, I wouldn't say yes. For example, I haven't personally done any 3D/OpenGL programming yet, so I wouldn't say yes to a project that uses it extensively.

    OK, so I want the job, but how much do I charge?
    This part can be tricky. There is no set formula, but I have some tips for pricing that will hopefully give you a better idea of how to confidently ask your price and have them accept. Here are some personal guidelines.
    How much time do you have to complete the project? If it's shorter than average, then charge more. You can even make a subtle note about this (or a not-so-subtle one if they still don't get it). If it seems too short a time (i.e. near impossible to complete), be sure to say that. It looks bad to promise a time that you can't keep, and it makes it less likely for them to return to you for work.
    Your hourly rate: How long have you been working in that language? Do you have existing projects to back you up, or previous contacts who can vouch for your work? Are there very few people with your particular skill set? All of these things feed into setting an hourly rate. I'd also try a quick Google search for your line of work to see what the industry standard is at that point in time. I wouldn't price too low, because you want to make your time worth it. You also want them to feel like they're paying for quality work (assuming you can deliver it). Finally, think about your client. If it's a small business, don't price too high if you want the job. If it's an enterprise (like a Fortune company), don't be afraid to price higher. They have the budget for it.
    Fixed price: If they want a fixed-price project, then you need to think about how many hours it will take you to complete it and multiply that by the hourly rate you set for yourself. Then, honestly, I would add 10-20% on top of that. Why? Because nothing ever works exactly how you want it to. There are lots of times when something "trivial" is way harder than it should be, or something that "should work" doesn't for hours, and it eats away at your hourly rate.
    I can't count the number of times I encountered a logical bug that took away an entire day's work, because debuggers don't help in those cases. By adding that padding in, it's still OK to have those days where you don't get as much done as you want. Another useful tip: depending on your client and the scope, you most likely want to agree that you both sign off on a specification sheet before doing any work, and that any changes will result in a re-evaluation of the price. This helps protect you from being handed a huge new addition to the project halfway in, without any extra payment.
    Scope of project: Finally, is it a huge project? Is it really small and fast? This affects how much your client will be willing to pay. If it sounds big, they will be willing to pay more for it. If it seems really small, you won't be able to get away with a large asking price (as easily).

    OK, I priced it, now what?
    Now that you have the price, you want to make sure it feels justified to your client. I never set a price before I can really think about everything. For example, if you're still in your introduction phase and they want a price, don't give one! Just comment that you will send them a proposal sheet with all the features outlined and a price for everything. You don't want to shout out a low number and then deliver something that is way higher. You also don't want to shock them with a big number before they feel like they are getting a great product.
    Make up a proposal document in a word processor. Personally, I leave the price until the very end. Why? Because by the time they reach the end, you've already discussed all the great features you plan to implement, and how it's the best product they'll ever use, so your price comes off as a steal! If you hit them up front with a price, they will read through the document with a negative bias. Think about those commercials on TV. They always go on about their product, and then at the end ask, "What would you pay for something like this? $100? $50? How about $20!" This is not by accident.

    Scenario: I finished the job way earlier than expected
    You have two options: you can either polish the hell out of the application, and even throw in a few bonus features (assuming they are in line with the customer's needs), or you can sit on it until you near your deadline. Why don't you want to turn it in too early? Because you should treat that extra time as a surplus. If you said it was going to take you 3 weeks, and it took you only 1, you have a surplus of 2 weeks. I personally don't want to let them know that I can do a 3-week project in 1 week. Why not? Because that may not always be the case! I may later have a 3-week project that takes all 3 weeks, but if I set a precedent of delivering super early, then the pressure is on for that longer project. It also makes it harder to quote longer times if you keep delivering too early.
    Feel free to deliver early, but again, don't do it too early. They may also wonder why they paid you for 3 weeks of work if you're done in 1. They may further wonder whether the product sucks, or what is wrong with it, if it's done so early. I would just polish the application. Everyone loves polish in their applications. The smallest details are what take an application from "functional" to "fantastic". And since you are still delivering on time, they are still going to be very happy with you.

    Scenario: It's taking way too long to finish this, and the deadline is nearing/here!
    This is not a fun scenario to be in, but it will happen. Sometimes the scope of the project gets out of hand. The best policy here is openness and honesty. Tell them that the project is taking longer than expected, and give a reasonable time for when you think you'll have it done. I typically explain it in a way that makes it sound like it isn't something I did wrong, but just something about the nature of the project. This really goes for any scenario, to be honest. Just continue to stay open and communicative about your progress. This doesn't mean you should email them every five minutes (unless they want you to), but it does mean that maybe every few days or once a week you give them an update on where you're at and what's next. They'll be happy to know they are paying for progress, and it will make it easier to ask for an extension when something goes wrong, because they know you've been working on it all along.

    Final tips and thoughts
    In general, contract work is really fun and rewarding. It's nice to learn new things all the time, as mandated by the project, and to challenge yourself to do things you may not have done before. The key is to build a great relationship with your clients for future work and for recommendations. I am always very honest with them and I never promise something I can't deliver. Again: under promise, over deliver!
    I hope this has proved helpful!
    Cheers,
    samerpaul

    Read the article

  • Run Bash Script on Another Server

    - by psce
    I want to run commands one by one to change the names of directories on several servers. When I run the script, the directories get renamed on server 1, but on server 2 the directories are not found. What could the error be in the script? The script:
        #!/bin/bash
        mach_directory=/home/user/example
        erase_dir1=cache
        erase_dir2=tmp
        for i in {0..10}
        do
            user=user
            server=$(ssh $user@server$i hostname)
            ssh $user@$server find $mach_directory -type d -name $erase_dir1 ! -path "*Admin/$erase_dir1*" -print0 | while IFS= read -r -d '' file ; do mv "$file" "${file}_$(date +%d%m%Y)"; done
            ssh $user@$server find $mach_directory -type d -name $erase_dir2 ! -path "*Admin/$erase_dir2*" -print0 | while IFS= read -r -d '' file ; do mv "$file" "${file}_$(date +%d%m%Y)"; done
        done
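    One hedged reading of the symptom (my assumption, not something the question states): only the find runs over ssh; the pipe and the mv run on the machine where the script is started, so renames only ever happen locally. A rough sketch that pushes the whole find/rename loop to the remote side, assuming the remote login shells are bash and the same paths/host names as above:
        #!/bin/bash
        # Hypothetical sketch: run both the find AND the mv on the remote host by
        # sending the whole pipeline through ssh as one quoted command, instead of
        # piping the remote find's output into a mv loop that runs locally.
        # Assumes the remote login shell is bash (read -d '' is a bashism).
        mach_directory=/home/user/example
        user=user
        for i in {0..10}; do
            for dir_name in cache tmp; do
                ssh "$user@server$i" "find $mach_directory -type d -name $dir_name \
                    ! -path '*Admin/$dir_name*' -print0 |
                    while IFS= read -r -d '' d; do mv \"\$d\" \"\${d}_\$(date +%d%m%Y)\"; done"
            done
        done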

    Read the article

  • Browser privacy improvement implications for websites

    - by phq
    On https://panopticlick.eff.org/ the EFF lets you test the number of uniquely identifying bits that the browser gives a website. Among these are HTTP header fields such as User-Agent, Accept, Accept-Language and, perhaps later, ETag and If-Modified-Since. There is also a lot of information that JavaScript can get from the browser, such as the time zone, screen resolution, and the complete list of available fonts and plugins. My first impression is: is all this information really usable/used by a majority of websites? For example, how many sites really send different content types depending on the HTTP Accept header, or care which fonts are available (I thought CSS had taken care of this)? Let's say these headers or pieces of JavaScript functionality were gone one day. Which ones would:
        - never be noticed to be gone?
        - impact user experience?
        - impact server performance?
        - be immediately reimplemented because the Internet cannot work without them?
    Extra credit for differentiating between what can be done, what should be done and what is done in most situations.
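    As a small illustration of the Accept-header point (a hedged sketch, not from the question; the URL is a placeholder): you can ask the same URL for different content types and compare what comes back, which gives a rough sense of whether a given server does any content negotiation at all:
        # Hypothetical probe: does this server actually vary its response
        # on the Accept header? Compare the Content-Type of two requests.
        url=http://example.org/
        curl -sI -H 'Accept: text/html' "$url" | grep -i '^content-type'
        curl -sI -H 'Accept: application/xml' "$url" | grep -i '^content-type'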

    Read the article

  • What is the best type of C# timer to use with a Unity game that uses many timers simultaneously?

    - by Kyle Seidlitz
    I am developing a stand-alone 3D game in Unity that will have anywhere from 1 to 200 timers running simultaneously. For this game, timer durations will range from 5 minutes to 4 days. There will not be any countdown displays or any UI for the timers. An object will be selected, a menu choice will be made, and the timer will start. Several events will occur at different intervals during the timer's duration. The events are confined to changing the material of the selected object and playing a one-second sound effect like a chime or a bell. If the user wants to save or end the game before all the timers are done, the start times of the still-running timers are to be saved to an XML file, so that when the game is started again a calculation can be done for each still-running timer to see whether it has since finished, in which case the game changes the materials appropriately. I am still trying to figure out what type of timer to use, and I would also welcome suggestions for saving and calculating times that span several days. What class(es) of timers should I use? Are there any special issues I should look out for in terms of performance?

    Read the article

  • Ubuntu 12.04 upgrade and Thunderbird

    - by Dcm1405
    After applying the suggested updates (179), an error message at the very end of the process suggested that I run apt-get install -f. Since it is a fairly new Ubuntu install (x86), I haven't set up anything in Thunderbird yet. The -f run produced the following errors (see details):
        ~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          thunderbird
        Suggested packages:
          latex-xft-fonts
        The following packages will be upgraded:
          thunderbird
        1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        2 not fully installed or removed.
        Need to get 0 B/20.8 MB of archives.
        After this operation, 594 kB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        (Reading database ... 170457 files and directories currently installed.)
        Preparing to replace thunderbird 11.0.1+build1-0ubuntu2 (using .../thunderbird_12.0.1+build1-0ubuntu0.12.04.1_i386.deb) ...
        Unpacking replacement thunderbird ...
        dpkg-deb (subprocess): data: internal gzip read error: '<fd:4>: invalid code lengths set'
        dpkg-deb: error: subprocess <decompress> returned error exit status 2
        dpkg: error processing /var/cache/apt/archives/thunderbird_12.0.1+build1-0ubuntu0.12.04.1_i386.deb (--unpack):
         short read on buffer copy for backend dpkg-deb during `./usr/lib/thunderbird/libxul.so'
        Errors were encountered while processing:
         /var/cache/apt/archives/thunderbird_12.0.1+build1-0ubuntu0.12.04.1_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
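    The "internal gzip read error" during unpack points at a damaged download of that one package rather than at anything Thunderbird-specific. A hedged recovery sketch (an assumption, not part of the question): drop the cached archive, make sure the cache partition isn't simply full, and let apt fetch the package again:
        # Hypothetical recovery: discard only the suspect cached archive, check
        # that /var has free space, then re-download and finish the upgrade.
        df -h /var
        sudo rm -f /var/cache/apt/archives/thunderbird_12.0.1*_i386.deb
        sudo apt-get update
        sudo apt-get install -f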

    Read the article

  • Is there such a thing as a "theory of system integration"?

    - by Jeff
    There is a plethora of different programs, servers, and technologies in general in use in organizations today. We programmers have lots of different tools at our disposal to help solve various data and communication challenges in an organization. Does anyone know whether anyone has done any serious thinking about how systems are integrated? Let me give an example: hypothetically, let's say I own a company that makes specialized suits a la Iron Man. In the area of production, I have CAD tools, machining tools, payroll, project management, and asset management tools, to name a few. I also have a nice design space, where designers show off their designs on big displays, some touch, some traditional. Oh, and I also have one of these new-fangled LEED Platinum buildings with a number of computer-controlled systems, like smart window shutters that close when people are in the room, an HVAC system that adjusts depending on the number of people in the building, etc. What I want to know is whether anyone has done any scientific work on figuring out how to hook all these pieces together, so that, say, my access control system is hooked to my payroll system and my phone system, allowing me never to swipe a time card and letting my phone follow me throughout the building. This problem is also more than a technology challenge: every technology implementation enables certain human behaviours, so the human must also be considered a part of the system. Has anyone done any work on how to effectively weave these components together? FYI: I am not trying to build a system. I want to know whether anyone has thoroughly studied the process of doing a large integration project: how they developed their requirements, how they studied the human behaviors, and so on.

    Read the article

  • Functional testing in verification

    - by user970696
    Yesterday my question "How come verification does not include actual testing?" created a lot of controversy, yet did not reveal the answer to a related and very important question: does black-box functional testing done by testers belong to verification or validation? ISO 12207 mentions testing explicitly only as a validation activity; however, it speaks about validating the requirements of the intended use. To me that is more high-level, like UAT test cases written by business users. The same ISO standard does not mention any specific verification (7.2.4.3.2) except requirements verification, design verification, document verification, and code & integration verification. The last two can probably be thought of as unit and integration testing. But where, then, is the regular testing done by testers at the end of the phase? The book I mentioned in the original question says that verification is done by static techniques, yet on its V-model graph it describes system testing against the high-level description as verification, mentioning that it includes all kinds of testing, such as functional and load testing. In the IEEE standard for V&V you can read this: "Even though the tests and evaluations are not part of the V&V processes, the techniques described in this standard may be useful in performing them." So that is different from the ISO standard, where validation mentions testing as the activity. Not to mention a lot of contradicting information on the net. I would really appreciate a reference to e.g. a standard in the answer, or an explanation of what I missed in the ISO text. As it stands, I am unable to tell where the testers' work belongs.

    Read the article

  • Compile error in Ubuntu 10

    - by yozloy
    Hey guys, I have a VPS which runs SolusVM. I'm now trying to install Ruby 1.9.2 on it. I'm following a guide; after I run these commands:
        apt-get update
        apt-get -y install build-essential zlib1g zlib1g-dev libxml2 libxml2-dev libxslt-dev
    I get the error below:
        root@makserver:/usr/local/src/ruby-1.9.2-p0# apt-get -f install
        Reading package lists... Done
        Building dependency tree... Done
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          libc6
        Suggested packages:
          glibc-doc
        The following packages will be upgraded:
          libc6
        1 upgraded, 0 newly installed, 0 to remove and 80 not upgraded.
        Need to get 0B/4252kB of archives.
        After this operation, 4096B disk space will be freed.
        Do you want to continue [Y/n]? y
        debconf: apt-extracttemplates failed: Bad file descriptor
        (Reading database ... 21594 files and directories currently installed.)
        Preparing to replace libc6 2.11.1-0ubuntu7.2 (using .../libc6_2.11.1-0ubuntu7.8_amd64.deb) ...
        open2: fork failed: Cannot allocate memory at /usr/share/perl5/Debconf/ConfModule.pm line 59
        dpkg: error processing /var/cache/apt/archives/libc6_2.11.1-0ubuntu7.8_amd64.deb (--unpack):
         subprocess new pre-installation script returned error exit status 12
        Errors were encountered while processing:
         /var/cache/apt/archives/libc6_2.11.1-0ubuntu7.8_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
    Can anybody tell me how I can correct this? Thanks.
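    The key line here is "open2: fork failed: Cannot allocate memory", which suggests the VPS is running out of memory rather than the package being broken. A hedged sketch of the usual first checks (an assumption about the cause, not part of the question):
        # Hypothetical checks: how much memory is free, and what is using it?
        free -m
        ps aux --sort=-rss | head -n 10
        # If memory really is exhausted, stop heavy services (or add swap, where
        # the VPS virtualization allows it) before retrying:
        apt-get -f install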

    Read the article

  • Can I upgrade a Windows 2000 domain to 2008 and demote the 2000 server without clients attached?

    - by techie007
    Hi all, we're planning to replace a Windows 2000 domain controller with a new 2008 DC (new hardware). We've elected to take the route of getting the 2000 domain schema up to snuff, joining the 2008 server, upgrading it to a DC, and after replication demoting the 2000 server (eventually to be taken offline). The goal is to not have to visit all the workstations, and to limit domain downtime. :) We want to bring the old server here and do all the backups, domain prep, migration and role transfers here, and then (hopefully) just plop the new 2008 box back in place after it's done, and join the 2000 server back as a member server (so we can then do folder migrations, etc.). Can this server work be done off-site, without the workstations attached? If we do this, will anything need to be done to the clients once the new DC is physically in place so that they contact the new 2008 DC, or will they just 'know' and continue on using the existing domain settings, user profiles, etc.? Thanks in advance! :)

    Read the article

  • How to associate all file types within Wine with its corresponding native application?

    - by MestreLion
    This is easily done for a single file type, as answered in "How to associate a file type within Wine with a native application?", by creating a .reg file for the desired file type. But that covers AVI only. I use some Wine apps (uTorrent, Soulseek, Eudora, to name a few) that can launch a wide range of files. Email attachments, for example, can be JPG, DOC, PDF, PPS... it is impossible (and not desirable) to track down every possible file type that one may receive in an email or download in a torrent. So I need a solution that is more generic and broad. I need the file association to honor whatever native app is currently configured, and I want this done for all file types configured on my system. I've already figured out how to make the solution generic: simply replace the launched app in the .reg with winebrowser, like this:
        [HKEY_CLASSES_ROOT\.pdf]
        @="PDFfile"
        "Content Type"="application/pdf"

        [HKEY_CLASSES_ROOT\PDFfile\Shell\Open\command]
        @="C:\\windows\\system32\\winebrowser.exe \"%1\""
    I've tested this and it works correctly. Since winebrowser uses xdg-open as a backend and converts my Windows path to a Unix one, the correct (Linux) app is launched. So I need a "batch" updater for Wine's registry, a sort of wine-update-associations script that I can run whenever a new app is installed. Maybe a tool that can:
        - list all MIME types on my system that have a default, installed app associated;
        - extract all the needed info (glob, MIME type, etc.);
        - generate the .reg file in the above format.
    The tricky part is this: I've searched a LOT for information on how association is done in Ubuntu 10.10 onwards, and documentation is scarce and confusing, to say the least. Freedesktop.org has no complete spec, and even the GNOME docs are obsolete. So far I've gathered four files that contain association info, but I'm clueless about which to use (or why), or how to use them to generate the .reg file:
        ~/.local/share/applications/mimeapps.list
        ~/.local/share/applications/mimeinfo.cache
        /usr/share/applications/mimeinfo.cache
        /etc/gnome/defaults.list
    Any help, script or explanation would be greatly appreciated! Thanks!
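    A rough sketch of the "batch updater" idea, with heavy assumptions (the extension list, the output file name, and the premise that routing everything through winebrowser is enough for your apps are all mine, not from the question): generate one .reg entry per extension in the same format as the PDF example above, then import it into the prefix:
        #!/bin/bash
        # Hypothetical generator: write a wine-associations.reg that points a list
        # of extensions at winebrowser, which hands the file to xdg-open and thus
        # to whatever native application is currently the system default.
        exts="pdf doc docx xls ppt jpg png gif mp3 avi mkv"   # illustrative, not exhaustive
        out=wine-associations.reg
        printf 'REGEDIT4\n\n' > "$out"
        for ext in $exts; do
            printf '[HKEY_CLASSES_ROOT\\.%s]\n@="%sfile"\n\n' "$ext" "$ext" >> "$out"
            printf '[HKEY_CLASSES_ROOT\\%sfile\\Shell\\Open\\command]\n' "$ext" >> "$out"
            printf '@="C:\\\\windows\\\\system32\\\\winebrowser.exe \\"%%1\\""\n\n' >> "$out"
        done
        echo "Import into the prefix with: wine regedit $out"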

    Read the article

  • Does Agile force developers to work more?

    - by Shooshpanchick
    Looking at common Agile practices, it seems to me that they (intentionally or unintentionally?) force developers to spend more time actually working, as opposed to reading blogs/articles, chatting, taking coffee breaks and just plain procrastinating. In particular:
        1) Pair programming - the biggest work-forcer, simply because it is inconvenient to do all that procrastinating when there are two of you sitting together.
        2) Short stories - when you have a HUGE chunk of work that must be done in, say, a month, it is pretty common to slack off for the first three weeks and switch to OMG DEADLINE mode for the last one. With little chunks (that must be done in a day or less) it is the exact opposite - you feel that time is tight, there is no room for maneuvering, and you will be held accountable for the task pretty soon, so you start working immediately.
        3) Team communication and cohesion - when you underperform in a slow, distant and silent environment it may feel OK, but when at the end of the day at the Scrum meeting everyone boasts about what they have accomplished and you have nothing to say, you may actually feel ashamed.
        4) Testing and feedback - again, it prevents you from keeping tasks "99% ready" (when they're actually around 20%) until the deadline suddenly arrives.
    Do you feel that under Agile you work more than under "conventional" methodologies? Is this pressure compensated for by the more comfortable environment and by the feeling of actually getting the right things done quickly?

    Read the article

  • New Mac OS X Server setup: when I send mail to Gmail it goes straight to spam. Why is that?

    - by basilmir
    New Mac OS X Server setup: when I send mail to Gmail it goes straight to spam. Why is that? My setup:
        DNS - done (A and PTR records are OK)
        Mail setup - done
        Webmail - done
    There also seems to be a naming problem: messages all come from [email protected] instead of [email protected]. I must be missing an alias somewhere. I've read an entire book on setting this up, so don't throw stones :) The GUI hides a lot of this from me, so explanations in terms of the GUI are appreciated.
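    A hedged sketch of the DNS checks large providers tend to weigh (the domain and IP below are placeholders, not values from the post): confirm that the sending IP's reverse DNS, the mail host's A record, and an SPF record all line up:
        # Hypothetical DNS sanity checks for outgoing mail (replace the
        # placeholder domain/IP with your own values).
        dig +short mydomain.example            # A record of the mail host
        dig +short -x 203.0.113.10             # PTR record of the sending IP
        dig +short TXT mydomain.example        # look for an SPF record (v=spf1 ...)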

    Read the article

  • Automated testing tool development challenges (for embedded software)

    - by Karthi prime
    My boss wants to come up with a proposal for the following tool:
        - An IDE able to build, compile and debug, and to program the micro-controller via JTAG.
        - A test suite that reads the code in the IDE, auto-generates test cases, and reports in-target unit testing results (which it obtains by controlling code execution on the micro-controller via the IDE).
        - A no-overhead code coverage tool that interacts with the test suite and the IDE.
    My job is to come up with the high-level architecture of this tool so we can proceed further. My current understanding: there are tool-chains available from the chip manufacturers which can be used together with an open-source IDE like Eclipse and an open-source flashing tool, so a complete IDE for a micro-controller can be put together. Test cases can be auto-generated by reading the source file through parsing and scripting, based on keywords. The test suite must be able to command the IDE to control execution through breakpoints and to read register contents from the micro-controller - this enables the in-target unit testing. No-overhead code coverage should be achieved through no-overhead code instrumentation, so that it can run in the resource-constrained environment of the micro-controller. I have the following questions:
        - Any advice on the validity of my understanding?
        - What challenges will I face during development?
        - Which open-source tools would help with this?
        - What is a realistic development time for this software?
    Thanks

    Read the article

  • Is real-time validation of usernames good or bad?

    - by iamserious
    I have a simple form for the user to sign up to my site, with email, username and password fields. We are now trying to implement AJAX validation so the user doesn't have to post the form to find out whether the username is already taken. I can do this either on the keyup event or on the text field's blur event. My question is: which of these is really the better way to do it?
    Keyup: From the user's point of view, it would be good if the validation happens as they type (on keyup) - of course, I wait half a second to see whether the user has stopped typing before firing off the request, and the user can make adjustments immediately. But this means I am sending far more requests than if I validated the username on the blur event.
    Blur: The number of requests is much lower when the validation is done on the blur event, but this means the user has to actually leave the textbox, look at the validation result, and if necessary go back to it to make changes, repeating the whole process until they get it right.
    I had a quick look at Google, Tumblr and Twitter, and none of them actually validate usernames on keyup (heck, Tumblr waits for the form to be posted), but I can swear I have seen keyup validation in a lot of places too. So, coming back to the question: will keyup validation be too much for the server, an unnecessary overhead? Or is it worth taking these hits to give the user a better experience?
    PS: All my regex validation etc. is already done in JavaScript, and only when a username passes all the other criteria does the page send a request to the server to check whether the username already exists. (The server is doing a select count(1) from user where username = '' - nothing substantial, but still enough to occupy some resources.)
    PPS: I'm on the ASP.NET / MS SQL stack, if that matters.

    Read the article

  • Programming habits, patterns, and standards that have developed out of appeal to tradition/by mistake? [closed]

    - by user828584
    Being self-taught, the vast majority of what I know about programming has come from reading other people's code on websites like this. I'm starting to wonder whether I've developed bad or otherwise pointless habits from other people, or even just made invalid assumptions. For example, in JavaScript, void 0 is used in a lot of places, and until I saw this I just assumed it was necessary and that the 0 had some significance. Also, the HTTP header Referer is misspelled but hasn't been changed because doing so would break a lot of applications. This sort of thing is also mentioned in Code Complete 2:
        The architecture should describe the motivations for all major decisions. Be wary of "we've always done it that way" justifications. One story goes that Beth wanted to cook a pot roast according to an award-winning pot roast recipe handed down in her husband's family. Her husband, Abdul, said that his mother had taught him to sprinkle it with salt and pepper, cut both ends off, put it in the pan, cover it, and cook it. Beth asked, "Why do you cut both ends off?" Abdul said, "I don't know. I've always done it that way. Let me ask my mother." He called her, and she said, "I don't know. I've always done it that way. Let me ask your grandmother." She called his grandmother, who said, "I don't know why you do it that way. I did it that way because it was too big to fit in my pan."
    What are some other examples of this?

    Read the article

  • Problem with dpkg-preconfigure, how to correct?

    - by Eric Wilson
    I was trying to install TeamViewer, and I followed the instructions here even though they specify 11.10 instead of 12.04 (what I'm running). In particular, I executed:
        $ wget http://www.teamviewer.com/download/teamviewer_linux.deb
        $ sudo dpkg -i teamviewer_linux.deb
    The dpkg command failed, and since then my packaging system has been broken. The Software Center instructs me to try:
        $ sudo apt-get -f install
    which leads to:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following packages will be REMOVED:
          teamviewer7:i386
        0 upgraded, 0 newly installed, 1 to remove and 17 not upgraded.
        9 not fully installed or removed.
        Need to get 89.0 kB of archives.
        After this operation, 81.9 MB disk space will be freed.
        Do you want to continue [Y/n]? y
        Get:1 http://us.archive.ubuntu.com/ubuntu/ precise/main dash amd64 0.5.7-2ubuntu2 [89.0 kB]
        Fetched 89.0 kB in 1s (83.9 kB/s)
        E: Sub-process /usr/sbin/dpkg-preconfigure --apt || true returned an error code (100)
        E: Failure running script /usr/sbin/dpkg-preconfigure --apt || true
    At this point I'm stumped.
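    One hedged line of attack (an assumption about the cause, not something established in the question) is that the half-installed 32-bit TeamViewer package is what keeps tripping dpkg, so letting dpkg finish or removing that package by hand is a common first step:
        # Hypothetical cleanup: let dpkg finish whatever is half-configured,
        # then remove the package that dragged the system into this state.
        sudo dpkg --configure -a
        sudo dpkg --remove teamviewer7:i386
        sudo apt-get -f install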

    Read the article

  • Ubuntu 12.04 server froze during the first boot after it was installed.

    - by user69021
    I installed Ubuntu Server 12.04 on my new server and it hangs on the first boot. It just stops, and I can't proceed any further. The server's specifications:
        Dell PowerEdge T620
        CPU: Xeon E5-2665 2.4 GHz x 2
        RAM: 8 GB RDIMM, 1333 MHz x 12
        HDD: 3 TB Near Line SAS 7.2K x 8
        RAID controller: PERC H710
        GPU: NVIDIA Tesla C2075 x 4
    I have a screenshot of the screen it stopped on, but I cannot attach it because my privilege level is currently too low. Here are the last messages shown while booting:
        [5.048743] Freeing unused kernel memory : 920k freed
        [5.049046] Write protecting the kernel read-only data : 12288k
        [5.052973] Freeing unused kernel memory : 1608k freed
        [5.056132] Freeing unused kernel memory : 1196k freed
        Loading, please wait...
        [5.070236] udevd[218]: starting version 175
        Begin: Loading essential drivers ... done.
        Begin: Running /scripts/init-premount ... done.
        Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done.
        [5.089030] megasas: 00.00.06.12-rc1 Wed. Oct. 5 17:00:00 PDT 2011
        [5.089518] megasas: 0x1000:0x005b:0x1028:0x1f35: bus 1:slot 0:func 0
        [5.089739] megaraid_sas 0000:01:00.0: PCI INT A -> GSI 34 (level, low) -> IRQ 34
        [5.089937] megasas: FW now in Ready state
        [5.090427] dca service started, version 1.12.1
        [5.091463] Intel(R) Gigabit Ethernet Network Driver - version 3.2.10-k
        [5.091578] Copyright (c) 2007-2011 Intel Corporation.
        [5.091712] igb 0000:06:00.0: PCI INT A -> GSI 16 (level,low) -> IRQ 16
        [5.111090] megasas: IOC Init cmd success
        [5.123124] usb 1-1: new high-speed USB device number 2 using ehci_hcd
    What can I do about this?
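    Given the four Tesla cards, one hedged guess (an assumption, not something the post establishes) is that the hang is in the graphics/console driver rather than the RAID or network stack; a common first test is booting once with nomodeset and, if that gets you to a login, making it permanent:
        # Hypothetical workaround: disable kernel mode setting at boot.
        # One-off test: at the GRUB menu, edit the boot entry and append
        # "nomodeset" to the line that starts with "linux".
        # To make it permanent afterwards:
        sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="[^"]*/& nomodeset/' /etc/default/grub
        sudo update-grub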

    Read the article

  • When I type apt-get -f install, I get an error message

    - by gene
    xserver-xorg-core (2:1.11.4-0ubuntu10.8) breaks xserver-xorg-video-5 and is installed. I also cannot upgrade my software; it says that the package system is broken, with this detail:
        The following packages have unmet dependencies:
          xserver-xorg-core: Depends: xserver-common (>= 2:1.11.4-0ubuntu10.8) but 2:1.11.4-0ubuntu10.8 is installed
    When I issue sudo apt-get update, the output seems fine (sorry, it contains too many links for me to post); the source is http://archive.ubuntu.com and it ends with:
        Reading package lists... Done
    When I issue sudo apt-get dist-upgrade, the output is:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
         xserver-xorg-core : Breaks: xserver-xorg-video-5
        E: Unmet dependencies. Try using -f.
    When I issue sudo apt-get -f install, the output is:
        dpkg: dependency problems prevent configuration of xserver-xorg-video-radeon:
         xserver-xorg-core (2:1.11.4-0ubuntu10.8) breaks xserver-xorg-video-5 and is installed.
         xserver-xorg-video-radeon (1:6.12.1-0ubuntu2) provides xserver-xorg-video-5.
        dpkg: error processing xserver-xorg-video-radeon (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates it's a followup error from a previous failure.
        Errors were encountered while processing:
         xserver-xorg-video-radeon
        E: Sub-process /usr/bin/dpkg returned an error code (1)
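    A hedged diagnostic sketch (not from the question): before forcing anything, look at which versions and archives the two conflicting packages actually come from, and whether a newer radeon driver providing the current video ABI is available at all:
        # Hypothetical diagnostic: where do the conflicting packages come from,
        # and what does the available radeon driver claim to provide?
        apt-cache policy xserver-xorg-core xserver-xorg-video-radeon
        apt-cache show xserver-xorg-video-radeon | grep -E '^(Version|Provides)'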

    Read the article

  • What went wrong with my curl install?

    - by Danjah
    I'm fresher than the Prince to Linux. I've been following the instructions here: http://chrisfulstow.com/running-node-js-on-windows-with-virtualbox-and-ubuntu (the link explains what I am generally trying to do). I'm all up and running in VirtualBox, and am at the curl install part (I may have already done the curl part a week ago, I forget). So I ran this command anyway:
        danjah@danjah-VirtualBox:~$ sudo apt-get install curl
    Result:
        [sudo] password for danjah:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        curl is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    Then:
        $ curl http://npmjs.org/install.sh | sudo npm_install=rc sh
    Result:
        fetching: {
        gzip: stdin: unexpected end of file
        /bin/tar: Child returned status 1
        /bin/tar: Error is not recoverable: exiting now
        It failed
    Should I be concerned? How can I test curl? How can I avoid these situations? Perhaps there's a generic way of checking whether I've already installed packages, etc.? Case-specific answers and general advice are most appreciated. Cheers, d
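    For the "how can I test curl" part, a hedged sketch (my assumption, not something from the guide): look at what that URL actually returns before piping it into a shell, since a redirect or an HTML error page fed into tar/gzip can produce exactly this kind of "unexpected end of file":
        # Hypothetical checks: is curl itself fine, and what does that URL return?
        curl --version                               # confirms the binary works
        curl -sI http://npmjs.org/install.sh         # status code and Content-Type
        curl -sIL http://npmjs.org/install.sh        # same, but following redirects
        curl -sL http://npmjs.org/install.sh | head  # peek at the body itself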

    Read the article

  • Dependencies not met on 12.04?

    - by Mochan
    Now, I'm very aware that there are many questions out there quite similar to mine, but I have looked through many of them and have not found a suitable answer. You are welcome to suggest similar questions, but I doubt it will help. Getting to the issue at hand: whenever I do anything that involves installation, whether it be codecs for videos, new programs or whatever else, I always get the 'Dependencies not met' error. In addition, I get a notification in the panel. When clicked, its menu says: "An error occurred. Please run Package Manager from the right-click menu or run apt-get in a terminal to see what is wrong. The error message was: 'Error: Broken Count 0'. This usually means your installed packages have unmet dependencies." It gives me three items to click: Show Updates, Install all Updates, Check for Updates; and then finally Show Notifications (with a tick) and Preferences. When I try 'Install all Updates' (and also Check for Updates then Install), it fails, together with 'Ubuntu has experienced an internal error' and 'Did this error occur when moving from one version of Ubuntu to another?' (I clicked No, because it didn't). So I took its advice and ran:
        sudo apt-get install -f
    This is what results:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          libapt-pkg4.12:i386
        The following packages will be upgraded:
          libapt-pkg4.12:i386
        1 upgraded, 0 newly installed, 0 to remove and 87 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/941 kB of archives.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        E: Internal Error, No file name for libapt-pkg4.12
    When running sudo apt-get update it's all fine, but running sudo apt-get install -f still results in the same thing. I really have no idea what to do... can anyone help me?
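    The closing "E: Internal Error, No file name for libapt-pkg4.12" usually means apt wants to upgrade that library but no longer has a file to install it from. A hedged sketch of one way around it (an assumption, not from the question): fetch the .deb for that one package yourself, hand it to dpkg directly, then let apt finish:
        # Hypothetical workaround: fetch the pending libapt-pkg4.12 package
        # yourself, install it directly, then retry the normal fix-up.
        # (Add :i386 to the package name if the failing package is the i386 one.)
        sudo apt-get clean
        sudo apt-get update
        apt-get download libapt-pkg4.12
        sudo dpkg -i libapt-pkg4.12_*.deb
        sudo apt-get -f install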

    Read the article

  • Project frozen - what should I leave to the people after me?

    - by Maistora
    So the project I've been working on is now going to be frozen indefinitely. It is possible that if and when the project is unfrozen, it won't be assigned to me or anybody from the current team. We actually inherited the project after it had been frozen once before, and there was nothing left by the prior team to help us understand even the basic needs of the project, so we wasted a lot of time getting to know it well. My question is: what do you think we should do to help the people after us best understand the needs of the project, what we have done, why we've done it, and so on? I am also open to other ideas on why we should leave some tracks for those who will work on this project later. Some steps we have already taken:
        - technical documentation (not complete, but at least there is some);
        - source-control system history;
        - estimates of which parts of the project need improvement and why we think so;
        - a bunch of unit tests;
        - an issue tracker with all the tickets we've done. (EDIT)
    What do you think of what we've already prepared, and what else can we do?

    Read the article
