Search Results

Search found 21053 results on 843 pages for 'out of process'.


  • Why can't I record 16khz sampling audio using my laptop?

    - by KayKay
    I want to know why my laptop can't record audio at a 16 kHz sampling rate. The sampling rates available on my laptop are all higher than 16 kHz, e.g. 44 kHz, 48 kHz, 192 kHz, and so on. I need to record 16 kHz audio with this laptop. The sound card is a Conexant 20671 SmartAudio HD. Although I can record at 16 kHz in Sound Forge 8.0, I doubt whether the recorded audio is really sampled at 16 kHz; since the sound card can't record at 16 kHz, I think there may be a problem somewhere in the recording process. Could you give me any hint as to why the sound card can't record at 16 kHz, and any method to verify whether the audio recorded by Sound Forge 8.0 is really 16 kHz? Thanks.
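
    One way to answer the second part, assuming sox or ffmpeg is installed (both are free and run on Windows as well as Linux), is to read the recorded file's header back and see what sample rate it actually carries; the file name below is only a placeholder.

      soxi recorded.wav                    # prints a "Sample Rate : ..." line (16000 if the file really is 16 kHz)
      ffprobe -hide_banner recorded.wav    # the audio stream line shows e.g. "pcm_s16le, 16000 Hz, mono"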

    Read the article

  • A Web Service to collect data from local servers every hour

    - by anilerduran
    I'm trying to find a way to collect data from different servers around the world. Here are the details: there is only a single PowerShell script on the servers that encrypts data (a simple CSV file) and sends it by whatever method we prefer (HTTP/HTTPS POST would work). There is no other control over those servers; I can't install any service, process, etc. All I can do is configure the script to execute every hour. The script will also carry an encrypted username/password/license key for each server, compress the data, and send it to me along with that information. So I need a service (I'm not sure a Web Service is the right solution) in the cloud that will: receive the data sent from the servers; authenticate each request to recognize the sender using the license key/username/password; and, most importantly, forward this file to my SQL Server in the cloud (Azure). It should also separate the data according to the customer information in the license key, so that each customer's data is stored in a dedicated DB/tables on my SQL Server. All the processes above should be completed automatically, with no manual steps. Question: is a Web Service (SOAP or RESTful) the right solution for this?
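
    A rough sketch of the kind of authenticated upload each server would make (shown here with curl for brevity; the real client would be the PowerShell script, and the endpoint URL, header and field names are invented for illustration):

      curl -X POST "https://collector.example.com/upload" \
           -H "X-License-Key: LICENSE-KEY-HERE" \
           -u "username:password" \
           -F "data=@export.csv.gz"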

    Read the article

  • Join Us!! Live Webinar: Using UPK for Testing

    - by Di Seghposs
    Create Manual Test Scripts 50% Faster with Oracle User Productivity Kit  Thursday, March 29, 2012 11:00 am – 12:00 pm ET Click here to register now for this informative webinar. Oracle UPK enhances the testing phase of the implementation lifecycle by reducing test plan creation time, improving accuracy, and providing the foundation for reusable training documentation, application simulations, and end-user performance support—all critical assets to support an enterprise application implementation. With Oracle UPK: Reduce manual test plan development time - Accelerate the testing cycle by significantly reducing the time required to create the test plan. Improve test plan accuracy - Capture test steps automatically using Oracle UPK and import those steps directly to any of these testing suites eliminating many of the errors that occur when writing manual tests. Create the foundation for reusable assets - Recorded simulations can be used for other lifecycle phases of the project, such as knowledge transfer for training and support. With its integration to Oracle Application Testing Suite, IBM Rational, and HP Quality Center, Oracle UPK allows you to deploy high-quality applications quickly and effectively by providing a consistent, repeatable process for gathering requirements, planning and scheduling tests, analyzing results, and managing  issues. Join this live webinar and learn how to decrease your time to deployment and enhance your testing plans today! 

    Read the article

  • How can I access my mini-pc (RaspberryPi / MK802 / Mele A1000 / VIA APC) via ethernet/wifi without having Monitor?

    - by sky770
    Soon I will be getting my own mini-PC (RaspberryPi / MK802 / Mele A1000 / VIA APC). I was wondering whether there is any possibility that I can just power it up, connect it to a wifi/ethernet link, and access its OS remotely over the LAN without ever needing a monitor throughout the process. I currently own a laptop and need a download box, and later I will be getting an HDTV to turn it into an HTPC :D So I don't own a spare monitor right now, but I do have an extra keyboard and mouse. Is there any Linux distro for this, which I can use to fire up the mini-PC directly and hook it up to my LAN for remote access from my laptop? Any suggestions appreciated :) Regards, sky770
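
    A minimal sketch of the usual headless workflow, assuming the distro image ships with an SSH server enabled and a known default account (as Raspbian does with pi/raspberry); the subnet and IP below are placeholders:

      nmap -sn 192.168.1.0/24    # ping-scan the LAN to find the new device's address
      ssh pi@192.168.1.42        # then log in with the image's default credentials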

    Read the article

  • HAProxy being killed with more than 54,000 connections

    - by Olly
    I am trying to run HAProxy (1.4.8) on an EC2 machine running Ubuntu 10.04. I need HAProxy to handle many thousands of long-running persistent connections (websockets). With the current setup HAProxy gets killed at roughly 54,300 connections. If I run HAProxy in the foreground, the only output is "Killed". Am I right in thinking this is the kernel killing the process? Is it because the process is out of some resource, and can I increase that resource? CPU and memory consumption are low at 50,000 connections, so I don't suspect either of those. How can I prevent this from happening?
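
    A short checklist sketch, assuming the cause is either the kernel's OOM killer or a file-descriptor limit (each proxied connection needs at least two descriptors):

      dmesg | grep -i -E 'out of memory|killed process'   # confirms whether the OOM killer fired
      ulimit -n                                           # fd limit of the shell/user that starts haproxy
      cat /proc/sys/fs/file-max                           # system-wide fd limit
      ulimit -n 200000                                    # if it is fd exhaustion, raise the limit (and haproxy's own maxconn / ulimit-n settings) before starting haproxy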

    Read the article

  • Ubuntu apt-get install (--download-only) executed from another machine on behalf of mine

    - by Maroloccio
    I have a server on a network segment with no direct or indirect access to the Internet. I want to perform an: apt-get install <package_name> Is there a way to somehow delegate the process of downloading the required files to another machine by exporting the server configuration so as to satisfy all dependencies while running: apt-get install --download-only <package_name> Can, in effect, apt-get install read a configuration from an exported archive rather than from the local package database? Can the list of packages to be downloaded be retrieved, along with an installation script to perform the installation, instead of the actual packages? (a further level of indirection which would help me schedule this with wget at appropriate times...)
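
    A sketch of one common offline pattern, assuming the server's package lists are current enough to resolve dependencies; apt-get can print the download URIs without fetching anything:

      # on the offline server: list everything the install would download
      apt-get install --print-uris --yes <package_name> | grep -oP "(?<=')http[^']+" > uris.txt
      # on an internet-connected machine: fetch the .debs
      wget --input-file=uris.txt
      # back on the server: install the copied .debs
      sudo dpkg -i *.deb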

    Read the article

  • Replicate portion of an LDAP directory to external server

    - by colemanm
    We're in the process of setting up a Jabber server on Amazon EC2 right now, and we'd like to have our internal users authenticate via LDAP so we don't have to create/manage a separate set of user accounts than the master directory in the office. My question is: is there a way to copy, unidirectionally, a segment of our internal LDAP directory (the user accounts OU) to an external LDAP server and authenticate Jabber against that? We're trying to work around having our externally hosted machines out in the cloud accessing our internal network directly... If we can replicate in one direction only a subset of the user accounts, then if that gets compromised we don't necessarily have a critical security breach into our internal network.
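
    A sketch of a one-shot, one-way copy with the standard OpenLDAP client tools (the DNs and hostnames are placeholders, and the replication account must be able to read the attributes Jabber needs); for continuous one-way sync, OpenLDAP's syncrepl scoped to that OU is the usual route:

      ldapsearch -x -LLL -H ldap://internal.example.com \
          -D "cn=replicator,dc=example,dc=com" -W \
          -b "ou=users,dc=example,dc=com" > users.ldif
      ldapadd -x -H ldaps://jabber.example.com \
          -D "cn=admin,dc=example,dc=com" -W -f users.ldif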

    Read the article

  • Improved Customer Experience, but at what Cost? See the DELL Computer experience with RTD

    - by Richard Lefebvre
    We can all probably agree that improving your customers' experience is a good thing. But a key question many people are asking is will it help your organization and, in particular, what are the financial benefits? That's a good question, especially when companies ARE experiencing phenomenal return on investment (ROI). Of course, there are many factors that impact ROI or other measures of success, but we'd like to share some success stories as examples of customer experience in action and delivering positive results. If you would like to learn more about the economics of customer experience, see Brian Curran's presentation at the Oracle Customer Experience Summit last month. In this series of blog posts, we'll share actual customer stories. Today's example is Dell, which uses Oracle Real-Time Decisions (RTD) and Siebel CRM as part of their customer experience portfolio to better understand their customers' needs and wants and provide consistent interactions. Regular readers of this blog are probably familiar with Siebel, but RTD may be new to many of you. RTD is a complete decision management solution that delivers real-time decisions and recommendations and automatically renders decisions within a business process to create tailored messaging for every customer interaction. What does that mean? In the video below, Dell describes how customer experience is important not just for one interaction channel, but across all "vehicles." RTD is helping Dell understand customer behavior and communicate with the customer in a more relevant manner, across all communication  or interaction channels including sales and service call centers, email marketing and online. Dell continues to expand use of RTD because the benefits are showing up in sales, service and marketing results including 19% increase in close rates, faster issue resolution and 40% improvement in revenue per click in email marketing. Video link By Tony Berk on Nov 15, 2012

    Read the article

  • How to use psexec without admin privileges on target machine?

    - by HighCommander4
    Is it possible to use psexec to execute a command on a remote machine without having admin privileges on the remote machine? I tried running psexec \\<machine> -u <username> -p <password>, where <username> and <password> are non-admin credentials, but I get an "access denied" error. I can remote desktop into the remote machine with the same credentials without any problems. My local machine is running Windows 7 Enterprise 64-bit, and the remote machine is running Windows Server 2008 64-bit. I do have admin privileges on the local machine. EDIT: To all the people who are downvoting this question: I am not trying to circumvent any sort of security measure. I can already run the process on the remote machine by remote desktoping into it and running it there. I'm simply looking for a command-line way to do something I can already do through a GUI.

    Read the article

  • Learn More About the PO Approvals Analyzer

    - by LuciaC
    You may think that the PO Approvals Analyzer for Release 12 is only for diagnosing problems when you have a single Purchase Order or Requisition stuck in process, but it offers valuable information to keep your Procurement environment healthy. Consider this: the analyzer will list all Procurement critical patches that have not been applied; it will report invalid Procurement objects with their error messages and provide solutions; and it validates setup and database conditions, for example max extents and space issues. The analyzer can also be run on all Purchasing documents starting from a date you enter. This multiple-document check validates data corruption issues, workflow errors with generic messages (i.e. document manager errors), and documents with workflows in error that cannot be progressed via the application. And, unlike other diagnostics, the analyzer provides known solutions to the problems indicated! So access the Analyzer today and run it on your instance! Access it now via Doc ID 1525670.1.

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB-Disk: kaefert@blechmobil:~$ lsusb -s 2:3 Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC As can be seen in this dmesg output, there are some problems that prevents that disk from beeing mounted: kaefert@blechmobil:~$ dmesg | grep sdb [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.501649] sdb: sdb1 [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519) [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted! So I went and fired up my favorite partition manager - gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)) e2fsck -f -y -v /dev/sdb1 Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bugreport https://bugzilla.gnome.org/show_bug.cgi?id=467925 ) I started this whole thing on Sunday (2012-11-04_2200) evening, so about 48 hours ago, this is what htop says about it now (2012-11-06-1900): PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1 Now I found a few posts on the internet that discuss e2fsck running slow, for example: http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that its a good idea to see if the disk is just that slow because maybe its damaged, and I think these outputs tell me that this is not the case in my case: kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec kaefert@blechmobil:~$ sudo hdparm /dev/sdb /dev/sdb: multcount = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 364801/255/63, sectors = 5860533160, start = 0 However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this: kaefert@blechmobil:~$ iostat -x Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 14,24 47,81 14,63 0,95 0,00 22,37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99 sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04 Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this: kaefert@blechmobil:~$ sudo strace -p3704 lseek(4, 41026998272, SEEK_SET) = 41026998272 write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096 lseek(4, 48404766720, SEEK_SET) = 48404766720 read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096 lseek(4, 41027002368, SEEK_SET) = 41027002368 write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096 lseek(4, 
48404770816, SEEK_SET) = 48404770816 read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096 lseek(4, 41027006464, SEEK_SET) = 41027006464 write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096 lseek(4, 48404774912, SEEK_SET) = 48404774912 read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096 ^CProcess 3704 detached around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot.. And finally, my question: Will this process ever finish? If those numbers from fseek (48404774912) represent bytes, that would be something like 45 gigabytes, with this beeing a 3 terrabyte disk, which would give me 134 days to go, if the speed stays constant, and he scans the disk like this completly and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it to this disk, so I would prefer to getting this disk up and running again, without formatting it anew. I don't think that the hardware is damaged since the disk is only a few months and since I can't see any I/O errors in the dmesg output. UPDATE: I just looked at the strace output again (2012-11-06_2300), now it looks like this: lseek(4, 1419860611072, SEEK_SET) = 1419860611072 read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096 lseek(4, 43018145792, SEEK_SET) = 43018145792 write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096 lseek(4, 1419860615168, SEEK_SET) = 1419860615168 read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096 lseek(4, 43018149888, SEEK_SET) = 43018149888 write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096 lseek(4, 1419860619264, SEEK_SET) = 1419860619264 read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096 lseek(4, 43018153984, SEEK_SET) = 43018153984 write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096 So this number of the lseeks before the reads, like 1419860619264 are already a lot bigger, standing for 1.29 terabytes if the numbers are bytes, so it doesn't seem to be a linear progress on a big scale, maybe there are only some areas that need work, that have big gaps in between them. (times are in CET)
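
    Two small checks worth considering, assuming it is acceptable to restart the fsck (restarting loses the progress of the current pass): -C 0 makes e2fsck draw a completion bar so progress is at least visible, and smartctl shows whether the drive itself is reporting bad sectors:

      e2fsck -f -y -C 0 /dev/sdb1
      smartctl -a /dev/sdb | grep -i -E 'reallocated|pending|uncorrect'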

    Read the article

  • Kubuntu apt-get -f install error

    - by ShaggyInjun
    I am seeing an error while running apt-get -f install. Can somebody help me out .. venkat@ubuntu:~/Downloads$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libjack-jackd2-0 Suggested packages: jackd2 The following packages will be upgraded: libjack-jackd2-0 1 upgraded, 0 newly installed, 0 to remove and 256 not upgraded. 109 not fully installed or removed. Need to get 0 B/197 kB of archives. After this operation, 3,072 B of additional disk space will be used. Do you want to continue [Y/n]? Y (Reading database ... 274641 files and directories currently installed.) Preparing to replace libjack-jackd2-0 1.9.8~dfsg.1-1ubuntu1 (using .../libjack-jackd2- 0_1.9.8~dfsg.2-1precise1_amd64.deb) ... Unpacking replacement libjack-jackd2-0 ... dpkg: error processing /var/cache/apt/archives/libjack-jackd2-0_1.9.8~dfsg.2- 1precise1_amd64.deb (--unpack): './usr/share/doc/libjack-jackd2-0/buildinfo.gz' is different from the same file on the system dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Errors were encountered while processing: /var/cache/apt/archives/libjack-jackd2-0_1.9.8~dfsg.2-1precise1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1)
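
    Two things commonly tried for this class of dpkg error, offered as a sketch rather than a verified fix for this exact system: first drop the cached .deb (in case it is corrupted) and let apt re-download it; if the "is different from the same file on the system" error persists, force the overwrite for that one package:

      sudo apt-get clean
      sudo apt-get -f install
      # if the same error returns:
      sudo dpkg -i --force-overwrite /var/cache/apt/archives/libjack-jackd2-0_1.9.8~dfsg.2-1precise1_amd64.deb
      sudo apt-get -f install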

    Read the article

  • Is version history really sacred or is it better to rebase?

    - by dukeofgaming
    I've always agreed with Mercurial's mantra; however, now that Mercurial comes bundled with the rebase extension and rebasing is a popular practice in git, I'm wondering if it can really be regarded as a "bad practice", or at least bad enough to avoid using. In any case, I'm aware that rebasing is dangerous after pushing. On the other hand, I see the point of packaging 5 commits into a single one to make it look niftier (especially in a production branch); however, personally I think it would be better to be able to see the partial commits to a feature where some experimentation was done, even if that is not as nifty. Seeing something like "Tried to do it way X but it is not as optimal as Y after all, doing it Z taking Y as base" would IMHO have real value to those studying the codebase and following the developers' train of thought. My very opinionated (as in dumb, visceral, biased) point of view is that programmers like rebase to hide mistakes, and I don't think that is good for the project at all. So my question is: have you really found it valuable to have such "organic commits" (i.e. untampered history) in practice? Or, conversely, do you prefer to find nifty well-packed commits and disregard the programmers' experimentation process? Whichever one you chose, why does it work for you? (Having other team members keep history, or alternatively rebasing it.)
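
    For reference, the "well-packed commit" side of the trade-off usually looks like this before pushing (the Mercurial equivalents, with the relevant bundled extensions enabled, are hg histedit or hg rebase --collapse):

      git rebase -i HEAD~5    # mark the follow-up commits as "squash" in the editor to fold five commits into one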

    Read the article

  • Is knowledge of what happens 'behind the scenes' (in the compiler, external DLLs, etc.) important?

    - by I_Question_Things_Deeply
    I have been a computer fanatic for almost a decade now. I've always loved and wondered how computers work, from the purest, lowest hardware level to the very smallest pixel on the screen, and all the software around that. That seems to be my problem though: as I try to write code (I'm pretty fluent in C++) I sit for enormous amounts of time in front of a text editor wondering how every line, statement, datum, function, etc. will correspond to every assembly and machine instruction performed to do absolutely everything necessary for the kernel to allocate memory to run my compiled program, and to drive all of the other hardware being used as well. For example, I would write cout << "Before memory changed" << endl; and run the debugger to get the assembly for this, then try to disassemble the assembly to machine code based on my ISA, and then research every .dll, library file, linked library, the linking process, the linker source code of the program, the makefile, the steps the kernel I'm using takes to process this compilation, and the hardware's part aside from the processor (e.g. video card, sound card, chipset, cache latency, byte-sized registers, calling conventions, DDR3 RAM and disk drive, filesystem functioning, and so many other things). Am I going about programming wrong? I feel I should know everything that goes on underneath the English-like syntax of a program. But the problem is that the more I research every little thing, the less I actually accomplish. I can never finish anything because of this mentality, yet I feel compelled to know everything. What should I do?

    Read the article

  • Android - big game universe

    - by user1641923
    I am new to Android development, though I have a lot of experience with Java, C++ and PHP programming, and a bit of experience with vector graphics too (basic 3D Studio Max, Flash, etc). I am starting to work on an Android game. It is going to be a 2D space shooter/RPG, and I am not going to use any game engine or any third-party libs. I really want to create a very large game universe, or even a pseudo-infinite one (without visible borders, as if it were a 2D projection of a sphere). It should include 10-12 clusters of 7-8 planets/other space objects, a random number of single asteroids/comets which the player can interact with, and also a non-interactive background. I am looking for the least complicated approach to create such a universe. My current ideas are: simply create bitmaps with space scenery backgrounds that can be tiled seamlessly, construct my 2D universe out of these tiles, then place interactive objects (planets, other spaceships) on it; or use vector graphics, with a solid color background and some random background objects and gradients here and there. My problems here: lack of knowledge of how well vector graphics is integrated in Android. Performance? Memory usage? Does Android handle big bitmaps well? Do all of the bitmaps have to stay in memory for the whole game? I am interested in technical details regarding each of the ideas and a suggestion as to which I should go with.

    Read the article

  • Corporate Efficiency

    - by AndyScott
    Thoughts on streamlining the process of getting someone up to speed when they join a project as a new hire; or as is common in some companies, switch from one project to another: Has anyone heard of a strategy (including emphasis towards consistent, ongoing documentation) that would bring a user up to speed quickly? Has there been any thought given to focused documentation, specific to a role within a project? Or formalized mentoring within a project, that goes beyond a “system walkthrough”?   Often it's overlooked what time is wasted when a senior level worker is brought on board.  It's assumed that they will know the right questions to ask. They are the type of people that normally learn quickly, and in their own ways, so let them get by with what's out there.   Having a user without a computer will cost you measurable worker hours, making it an easy target to shoot at (and rightly so). Not getting them up to speed as quickly as possible is an efficiency issue, that seems to have become an industry standard as an accepted loss. Given the complexity of the projects within most companies, and the frequency with which users are shifted from one project to another based on need; I think this is an area that bears consideration.

    Read the article

  • What is a good web interface for remote linux load monitoring?

    - by Jakobud
    I'm looking for some type of remote Linux monitoring software that you can view using a web interface, and I'm not just looking for basic load information. I'm also looking for process information, similar to what you get from top. I'd just like to be able to pop open a webpage and see what's going on with the server at a moment's notice. For example, perhaps just a basic PHP page on the server that uses AJAX to display and refresh the output of the top command in the page. I was thinking about writing something like this, but I don't want to reinvent the wheel.
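
    A minimal sketch of that "PHP page refreshing top" idea, here as a tiny CGI script the web server could expose and a page could poll every few seconds; the path and CGI setup are assumptions:

      #!/bin/sh
      # emits a plain-text snapshot of top in batch mode
      echo "Content-Type: text/plain"
      echo ""
      top -b -n 1 | head -n 30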

    Read the article

  • scrape data from a website and post it on the blog (wordpress)

    - by Pennf0lio
    This could belong in DocType, but I'm looking for software or just a plugin for WordPress. I want to fetch data from a website and automatically post it on my blog (WordPress powered). The site doesn't have an RSS feed or API to get the data from, so I currently have to copy and paste it one item at a time and post it on WordPress. Do you know any alternative options for my process, or software or a plugin that does the job? Thanks!

    Read the article

  • ISO 12207 - testing being only validation activity? [closed]

    - by user970696
    Possible Duplicate: How come verification does not include actual testing? The ISO 12207 standard states that testing is only a validation activity, while all static inspections are verification (checking that a requirement, the code, etc. is complete and correct). I did find some articles saying this is not correct, but they are not "official". I would like to understand this, because there are two different concepts in books and articles: 1) verification is all testing except UAT (because only the user can really validate the use), e.g. here; OR 2) verification is everything except testing, and all testing is validation, e.g. here. The definitions are mostly the same as Sommerville's: "The aim of verification is to check that the software meets its stated functional and non-functional requirements. Validation, however, is a more general process. The aim of validation is to ensure that the software meets the customer's expectations. It goes beyond simply checking conformance with the specification to demonstrating that the software does what the customer expects it to do." It is really bugging me, because I tend to agree that functional testing done on a product (SIT) is still verification, since I am just following the requirements. But ISO does not agree.

    Read the article

  • Installing Color-Theme with GNU Emacs 23.2 on OS X Snow Leopard

    - by idclark
    Hi all, I started using Emacs a week ago and I've been unsuccessful in installing color-theme with GNU Emacs 23.2 on OS X. On Ubuntu the whole process took maybe a few minutes with the package manager, but I'm completely at a loss on OS X. What the heck is a "tarball"? I don't have any experience compiling source code. I know Carbon Emacs comes with color-theme packaged; what would I lose by reverting to Emacs 22? I'd prefer staying with GNU Emacs 23 across both systems. Any input is greatly appreciated!!
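
    A sketch for the tarball route, assuming the color-theme 6.6.0 release archive: a "tarball" is just a compressed archive, and this package needs no compiling, only unpacking it and adding a few lines to ~/.emacs telling Emacs where it lives.

      # unpack the archive somewhere Emacs can find it
      mkdir -p ~/.emacs.d
      tar xzf color-theme-6.6.0.tar.gz -C ~/.emacs.d
      # then add these lines to ~/.emacs (elisp, not shell):
      #   (add-to-list 'load-path "~/.emacs.d/color-theme-6.6.0")
      #   (require 'color-theme)
      #   (color-theme-initialize)
      #   (color-theme-hober)   ; or any other theme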

    Read the article

  • Why does my iTunes use so much CPU time?

    - by bikesandcode
    I have a roughly 2 year old Macbook (10.5). I have iTunes 10. When iTunes is playing MP3s, I see CPU usage of the iTunes process in the system monitor ranging from 65%-75%. When I pause the music, I see CPU usage of about 65%-75%. I do not have any visualisations going, to my knowledge I have not turned on any CPU destroying features, my music library isn't tiny, but it's hardly huge (3GB). This is mildly annoying when I'm plugged into the wall as I only have slightly longer compile times, but if I am out and about, this is a major drain on the battery. Using VLC I see CPU loads of ~= 10% at the most when listening to music and generally lower. What the heck is iTunes doing?

    Read the article

  • Catalyst Control Center removal

    - by Allan
    I recently tested a graphics card on a machine that required different drivers than my onboard integrated graphics, so I had Catalyst Control Center installed. After I pulled out the graphics card and went back to using the onboard graphics, I removed Catalyst Control Center through Control Panel > Programs, which seemed fine. But apparently it didn't really remove it: it doesn't show up under Programs anymore, but when I reboot Windows I get an error saying "The Catalyst Control Center is not supported by the driver version of your enabled graphics adapter" and it still shows up in the process bar... how do I permanently remove it?

    Read the article

  • My computer won't go into standby or hibernate

    - by Thomas B.
    Hi. I have a problem that I first noticed yesterday. Whenever I would press the half moon standby button on my keyboard, my computer would go to sleep. I also have a shortcut on my desktop configured to put my computer into hibernate. But now whenever I try to put my pc in sleep or hibernate mode, my monitor goes black for a few seconds but then comes back on at the login screen. I haven't installed or changed anything other than create a couple logical partitions in the hfs+ filesystem. (still in the process of trying to triple-boot) Any help would be great, but for now I'm going to bed. Will check back in the morning.

    Read the article

  • Auto-scaling EC2 Servers and Updating Code

    - by jstats
    We've come to the point where we need to set up autoscaling for our web server, and I'm unsure how to go about scaling servers and updating the existing code without building a new AMI and changing the autoscale config to use it every time. I've read a bit about people bundling the new code, uploading it to S3 and having new servers grab the bundle on boot, but that doesn't seem all that pleasant either. Currently the web app's files live in a git repo; when we update the code, we push it to GitHub, SSH into the web app server and run a hook to bring down the latest code. So I was thinking another option could be to just run that hook as an hourly or daily cron task. Unfortunately that doesn't cover every type of update (for example new blog posts' images and such, which aren't included in the git repo), but it's something. Could anyone provide some advice on what a common solution is, or on why my proposed solution is a bad idea? Thanks all
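
    A sketch of the "grab the latest code on boot" variant as an EC2 user-data script, so each freshly launched instance pulls the current code itself; the repository URL and paths are placeholders, and anything outside the repo (uploaded images etc.) would still need a separate sync:

      #!/bin/bash
      # runs once at instance boot via user-data
      if [ ! -d /var/www/app ]; then
          git clone git@github.com:example/app.git /var/www/app
      fi
      cd /var/www/app
      git pull origin master
      # assets that are not in git could be synced from S3, e.g.:
      # s3cmd sync s3://example-app-assets/ /var/www/app/public/uploads/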

    Read the article

  • Is it appropriate to run a complex enterprise-system configuration and migration project in a similar way to a Scrum development project?

    - by AndyM
    I'm just starting out on the implementation of a large enterprise-wide system, which has complex requirements and many stakeholders. The company has been through a high-level evaluation and tender process and decided to purchase a highly configurable "off-the-shelf" product rather than building an entirely bespoke system. The system will replace several existing systems and will require a significant amount of data migration. I'm thinking that the implementation of this system (which is expected to take over 2 years) could be run in a similar way to a Scrum software development project: the first sprints targeted at building the minimal possible functionality needed (across all functional areas), and then iteratively deepening the level of functionality according to stakeholder feedback. I think this would de-risk the project and help ensure a balance of stakeholder needs within the available time. The user stories are still the same; it's just that to implement them we have to work within the constraints of the pre-purchased system. When it comes to 'building stuff', instead of writing custom code the team will be configuring the off-the-shelf package, writing data conversion scripts and the like (and it should be a lot quicker!). Does this sound like a sensible approach? Does the Agile approach make sense here?

    Read the article
