Search Results

Search found 592 results on 24 pages for 'rolling stones'.

Page 18 of 24

  • No sound through headset - only mic is working

    - by Kristis
    I noticed that no sound is being played through my headphones. The laptop has a Conexant sound card together with the sound apps it shipped with. I also noticed that instead of one playback device, two are now presented: speakers and headphones. While the speakers play sound nicely, not even test sounds are played through the headphone output. The headphone output also has no jack assigned to it, while the speakers show "L R Rear Panel Analog Jack" (my laptop does not have a jack on the back, only on the right side).

    My headphones have a mic as well, and when I plug the headset in, the mic works (using the top panel digital jack), but the headphones themselves do not. The laptop does recognize when an audio device is plugged in, and I have checked the headset on other devices: the headphones work. I have tried updating the drivers, rolling back the drivers, and completely uninstalling the drivers and then restarting; nothing helped. I imagine I somehow need to reconfigure the jack assignments, I just have no idea how or where. Any suggestions? Thanks

  • rsnapshot - not correctly archiving mysql databases

    - by Tiffany Walker
    My rsnapshot configuration:

        snapshot_root   /.snapshots/
        backup          /home/user                    localhost/
        backup_script   /usr/local/backup_mysql.sh    localhost/mysql/

    Using this file:

        NOW=$(date +"%m-%d-%Y") # mm-dd-yyyy format
        FILE="" # used in a loop
        ### Server Setup ###
        #* MySQL login user name *#
        MUSER="root"
        #* MySQL login PASSWORD name *#
        MPASS="YOUR-PASSWORD"
        #* MySQL login HOST name *#
        MHOST="127.0.0.1"
        #* MySQL binaries *#
        MYSQL="$(which mysql)"
        MYSQLDUMP="$(which mysqldump)"
        GZIP="$(which gzip)"
        # get all database listing
        DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"
        # start to dump database one by one
        for db in $DBS
        do
            FILE=$BAK/mysql-$db.$NOW-$(date +"%T").gz
            # gzip compression for each backup file
            $MYSQLDUMP --single-transaction -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE
        done

    It dumps the databases under /. I then tried the script from http://bash.cyberciti.biz/backup/rsnapshot-remote-mysql-backup-shell-script/ and got:

        rsnapshot hourly
        ----------------------------------------------------------------------------
        rsnapshot encountered an error! The program was invoked with these options:
        /usr/bin/rsnapshot hourly
        ----------------------------------------------------------------------------
        ERROR: backup_script /usr/local/backup_mysql.sh returned 1
        WARNING: Rolling back "localhost/mysql/"

        ls -la /.snapshots/hourly.0/localhost/mysql
        total 8
        drwxr-xr-x 2 root root 4096 Nov 23 17:43 ./
        drwxr-xr-x 4 root root 4096 Nov 23 18:20 ../

    What exactly am I doing wrong?

    EDIT:

        # /usr/local/backup_mysql.sh
        *** Dumping MySQL Database ***
        Database> information_schema..cphulkd..eximstats..horde..leechprotect..logaholicDB_ns1..modsec..mysql..performance_schema..roundcube..test..
        *** Backup done [ files wrote to /.snapshots/tmp/mysql] ***

        root@ns1 [~]# ls -la /.snapshots/tmp/mysql
        total 8040
        drwxr-xr-x 2 root root   4096 Nov 23 18:41 ./
        drwxr-xr-x 3 root root   4096 Nov 23 18:41 ../
        -rw-r--r-- 1 root root   1409 Nov 23 18:41 cphulkd.18_41_45pm.gz
        -rw-r--r-- 1 root root 113522 Nov 23 18:41 eximstats.18_41_45pm.gz
        -rw-r--r-- 1 root root   4583 Nov 23 18:41 horde.18_41_45pm.gz
        -rw-r--r-- 1 root root  71757 Nov 23 18:41 information_schema.18_41_45pm.gz
        -rw-r--r-- 1 root root    692 Nov 23 18:41 leechprotect.18_41_45pm.gz
        -rw-r--r-- 1 root root   2603 Nov 23 18:41 logaholicDB_ns1.18_41_45pm.gz
        -rw-r--r-- 1 root root    745 Nov 23 18:41 modsec.18_41_45pm.gz
        -rw-r--r-- 1 root root 138928 Nov 23 18:41 mysql.18_41_45pm.gz
        -rw-r--r-- 1 root root   1831 Nov 23 18:41 performance_schema.18_41_45pm.gz
        -rw-r--r-- 1 root root   3610 Nov 23 18:41 roundcube.18_41_45pm.gz
        -rw-r--r-- 1 root root    436 Nov 23 18:41 test.18_41_47pm.gz

    MySQL Backup seems fine.
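
    For context on both failures: rsnapshot runs a backup_script in a temporary working directory and then moves whatever the script leaves there into localhost/mysql/, so a script that writes to an absolute path (or, as in the first script, to an unset $BAK, which is why the dumps land under /) gives rsnapshot nothing to collect, and any non-zero exit produces exactly the rollback shown above. A minimal conforming script, sketched in Python for illustration (credentials mirror the shell version; mysql and mysqldump are assumed to be on PATH):

        #!/usr/bin/env python3
        # Sketch of an rsnapshot backup_script: dump every database into the
        # current working directory (rsnapshot provides a temp dir and moves
        # its contents into the snapshot) and exit non-zero only on failure.
        import gzip
        import subprocess
        import sys
        from datetime import datetime

        MUSER, MPASS, MHOST = "root", "YOUR-PASSWORD", "127.0.0.1"

        def main():
            stamp = datetime.now().strftime("%m-%d-%Y.%H_%M_%S")
            listing = subprocess.run(
                ["mysql", "-u", MUSER, "-h", MHOST, "-p" + MPASS,
                 "-Bse", "show databases"],
                check=True, capture_output=True, text=True)
            for db in listing.stdout.split():
                dump = subprocess.run(
                    ["mysqldump", "--single-transaction", "-u", MUSER,
                     "-h", MHOST, "-p" + MPASS, db],
                    check=True, capture_output=True)
                # Relative path: the file must end up in the cwd.
                with gzip.open("mysql-%s.%s.gz" % (db, stamp), "wb") as out:
                    out.write(dump.stdout)
            return 0

        if __name__ == "__main__":
            sys.exit(main())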

  • error: unexplained error (code 130) at rsync.c(541) [sender=3.0.7]

    - by brazorf
    This error:

        rsync: unexplained error (code 130) at rsync.c(541) [sender=3.0.7]

    started happening after I changed routers. Actually, I found out that the error only appears on a Ctrl+C, so it may say nothing about the underlying problem: exit code 130 is just the usual 128 + SIGINT(2) convention for a process interrupted from the keyboard. The command I run is very basic:

        rsync -avz --delete /local/path/ username@host:/path/to/remote/directory

    Basically, rsync just gets stuck and nothing happens until I press Ctrl+C. After interrupting the process I get the error above. I pasted the whole verbose run here:

        rsync -avvvvz --delete /source/path/ username@host:/path/to/direectory
        cmd=<NULL> machine=HOSTNAME user=username path=/path/to/direectory
        cmd[0]=ssh cmd[1]=-l cmd[2]=username cmd[3]=HOSTNAME cmd[4]=rsync cmd[5]=--server
        cmd[6]=-vvvvlogDtprze.iLsf cmd[7]=--delete cmd[8]=. cmd[9]=/path/to/direectory
        opening connection using: ssh -l username HOSTNAME rsync --server -vvvvlogDtprze.iLsf --delete . /path/to/direectory
        note: iconv_open("UTF-8", "UTF-8") succeeded.
        ^C[sender] _exit_cleanup(code=20, file=rsync.c, line=541): entered
        rsync error: unexplained error (code 130) at rsync.c(541) [sender=3.0.7]
        [sender] _exit_cleanup(code=20, file=rsync.c, line=541): about to call exit(130)

    Authentication runs over SSH with an RSA key. I tried basic troubleshooting:

    - ping the remote host
    - ssh -l username remote.host
    - check the software firewall logs

    I asked the remote host's sysadmin to check the logs; when I run that command, an SSH connection is actually established, so I can state there is no communication, authentication, or name-resolution issue here. Rolling back to the old router makes everything work again. Both client and server are running Ubuntu 10.04. I also took a look at my router's configuration, where I am not experienced at all; what I was looking for was a firewall rule blocking something, but I didn't see any suspect setting. The router is a D-Link DVA-G3670B. Any suggestions? Thank you, F.
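
    A quick aside on the 130, purely to illustrate the exit-code convention mentioned above (a side note, not a fix for the stall); a few lines of Python on any Unix box show the same encoding:

        import signal
        import subprocess

        # Start a long-running child and interrupt it the way Ctrl+C would.
        proc = subprocess.Popen(["sleep", "60"])
        proc.send_signal(signal.SIGINT)
        status = proc.wait()                # Popen reports -2: killed by signal 2
        print(status, 128 + signal.SIGINT)  # a shell reports 130 = 128 + 2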

  • Win Svr 2003 DHCP Bad Addresses

    - by VinceM
    After looking at other posts I still can't figure this out. I'll start at the beginning: I inherited this network, and I'm not the most knowledgeable about networking. We have an AD DHCP server that is also our DNS server. We were having some VPN issues (on the same server) and my boss decided to disable Routing and Remote Access, which cleared its settings. We couldn't get it set back up correctly, so we rolled back to a backup drive they created a number of months ago.

    Since rolling back, I've had BAD_ADDRESS listings in DHCP and a number of duplicate records in the DNS forward lookup zones. We have fewer than 50 devices on the network, but I have over 90 bad addresses showing. The server is currently running, but we get IP address conflicts all the time on pretty much all the computers. I have had people do a release and renew, but it didn't help. I have also deleted and re-added the scope, to no avail. Any help or ideas would be greatly appreciated, and I apologize if I missed another post that has information to help. Thanks, Vince

  • What is the quickest and safest way to test new software and revert all changes, if needed?

    - by calbar
    I'm looking for Windows software that will allow me to quickly create a "checkpoint", do whatever I might need to do to my computer (install programs/drivers/updates, create/delete personal files, reboot the system multiple times, open questionable attachments) and then revert the entire system back to when the checkpoint was created. Essentially I want Windows Restore Points that save my personal files and partitions, too.

    It sounds like disk imaging might be the ticket, but creating images is much too slow and the restore process too involved; I'm hoping to sacrifice full disaster recovery for speed. Creating a checkpoint should be as close to one-click as possible, and rolling back should be a matter of selecting a restore point and rebooting. Ding! I'm familiar with Sandboxie, True Image Home "Try and Decide", Returnil, and a number of other "virtual system" apps that actively "catch" changes and allow you to commit or reject them. I'm not interested in these for a number of reasons; I prefer the "cut and dry" restore point approach.

    Finally, I'll note that I've just recently become aware of Comodo Time Machine. It sounds absolutely perfect; however, a quick skim through the user forums shows more than a few horror stories of corrupted, unbootable systems. Any positive personal experience with the software to suppress my superstitions, or suggestions for more established alternatives, would be greatly appreciated, since Comodo Time Machine seems relatively new. Thanks for your help!

  • Internal Code Signing: Key Distribution, or Certificate Server?

    - by Myrddin Emrys
    I should first note that nobody in our IT group has significant familiarity with self-signed certificates. We have a moderately sprawling network (one forest, many locations), and we are now rolling out internal code signing; until now users have run untrusted code, or we even disabled(!) the warnings. Intranet applications, scripts, and sites will now be signed with self-issued certificates.

    I am aware of two obvious ways we can deploy this: distributing the certificates directly via Group Policy, or setting up a certificate server. Can someone explain the trade-offs between these two methods?

    - How many certs before the Group Policy method becomes unwieldy?
    - Are they large enough that remote users will have issues?
    - Does the Group Policy method distribute duplicates on every login?
    - Is there a better method I am not aware of?

    I can find a lot of documentation on certificates and the various ways to create them, but I have not been able to find anything that summarizes the difference between the distribution methods and what criteria make one or the other superior.

  • central apache log analysis of many hosts

    - by Jason Antman
    We have 30+ Apache httpd servers, and are looking to perform analysis on the logs, both for historical trending and near "real time" monitoring/alerting. I'm mainly interested in things like error rates (4xx/5xx), response time, and overall request rate, but it would also be very useful to pull out more compute-intensive statistics like unique client IPs and user agents per unit of time.

    I'm leaning towards building this as a centralized collector/server/storage, and am also considering the possibility of storing non-Apache logs (i.e. general syslog, firewall logs, etc.) in the same system. Obviously a large part of this will probably have to be custom (at least the connection between pieces and the parsing/analysis we do), but I haven't been able to find much information on people who have done stuff like this, at least at shops smaller than Google/Facebook/etc. who can throw their log data into a hundred-node compute cluster and run Map/Reduce on it. The main things I'm looking for are:

    - All open source
    - Some way of collecting logs from the Apache machines that isn't too resource-intensive, and transports them relatively quickly over the network
    - Some way of storing them (NoSQL? key-value store?) on the backend, for a given amount of time (and then rolling them up into historical averages)
    - In the middle of this, a way of graphing in near-real-time (probably also with some statistical analysis on it) and hopefully alerting off of those graphs

    Any suggestions/pointers/ideas, to either "products"/projects or descriptions of how other people do this, would be greatly helpful. Unfortunately, we're not exactly a new-age-y devops shop: lots of old stuff, homogeneous infrastructure, and strained boxes.
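
    On the parsing/analysis half (the custom part acknowledged above), the core computation is small; here is a sketch in Python of a per-minute 4xx/5xx error-rate roll-up over an access log in the common/combined format (the file path and log format are assumptions):

        import re
        import sys
        from collections import Counter

        # Matches the timestamp, request line and status code of a
        # common/combined format access log entry.
        LINE = re.compile(r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)" (?P<status>\d{3})')

        def error_rates(lines):
            errors, totals = Counter(), Counter()
            for line in lines:
                m = LINE.search(line)
                if not m:
                    continue
                minute = m.group("ts")[:17]   # e.g. "23/Nov/2011:18:41"
                totals[minute] += 1
                if m.group("status")[0] in "45":
                    errors[minute] += 1
            return {m: errors[m] / totals[m] for m in totals}

        if __name__ == "__main__":
            with open(sys.argv[1]) as f:
                for minute, rate in sorted(error_rates(f).items()):
                    print(minute, "%.1f%%" % (100 * rate))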

  • Why are all of my ZFS snapshot directories empty?

    - by growse
    I'm running an Oracle Solaris 11 box as a ZFS storage appliance, and I'm taking regular snapshots of the ZFS filesystems via cron. In the past, I know that if I wanted to grab a particular file from a snapshot, a read-only copy was kept in .zfs/snapshot/{name}/, and I could just navigate there and pull the file out. This is documented on Oracle's website.

    However, I went to do this the other day and noticed that the ZFS directories within the snapshot directories are all empty. zfs list -t snapshot correctly shows the list of snapshots that should be present, .zfs/snapshot correctly contains a directory for each snapshot, and each snapshot contains a directory for each ZFS filesystem. However, these directories appear to be empty.

    I just tested a restore by touching a file in a little-used share and rolling back to the latest hourly snapshot, and this appears to have worked fine, so the rollback functionality is there. Did Oracle change how snapshots are done? Or is something seriously wrong here?

  • MySQL Tables Missing/Corrupt After Recreation

    - by Synetech inc.
    Hi, yesterday I dumped my MySQL databases to an SQL file and renamed the ibdata1 file. I then recreated it, imported the SQL file, and moved the new ibdata1 file to my MySQL data directory, deleting the old one. I’ve done this before without issue; however, this time something is not right.

    When I examine the (personal, not MySQL config) databases, they are all there, but they are empty… sort of. The data directory still has the .ibd files with the correct content in them, and I can view the table list in the databases, but not the tables themselves. (I have file-per-table enabled, and am using InnoDB as the default for everything.) For example, with the urls database and its urls table, I can successfully open mysql.exe or phpMyAdmin and use urls;. I can even show tables; to see the expected table, but when I then try describe urls; or select * from urls;, it complains that the table does not exist (even though it just listed it). (MySQL Administrator lists the databases, but does not even list the tables; it indicates that the dbs are completely empty.)

    The problem now is that I have already deleted the SQL file (and cannot recover it, even after scouring my hard drive). So I am trying to figure out a way to repair these databases/tables. I can’t use the table repair function, since it complains that the table does not exist, and I can’t dump them because, again, it complains that the tables don’t exist. Like I’ve said, the data itself is still present in the .ibd files and the table names are present; I just need a way to get MySQL to recognize that the tables exist in the databases (I can find the column names of the tables in question in the ibdata1 file using a hex editor). Any idea how I can repair this type of corruption? I don’t mind rolling up my sleeves, digging in, and taking a bunch of steps to fix it. Thanks a lot.

  • Tuning Linux + HAProxy

    - by react
    I'm currently rolling out HAProxy on CentOS 6, which will send requests to some Apache httpd servers, and I'm having issues with performance. I've spent the last couple of days googling and still can't seem to get past 10k/sec connections consistently when benchmarking (sometimes I do get 30k/sec, though). I've pinned the IRQs of the TX/RX queues for both the internal and external NICs to separate CPU cores and made sure HAProxy is pinned to its own core. I've also made the following adjustments to sysctl.conf:

        # Max open file descriptors
        fs.file-max = 331287

        # TCP Tuning
        net.ipv4.tcp_tw_reuse = 1
        net.ipv4.ip_local_port_range = 1024 65023
        net.ipv4.tcp_max_syn_backlog = 10240
        net.ipv4.tcp_max_tw_buckets = 400000
        net.ipv4.tcp_max_orphans = 60000
        net.ipv4.tcp_synack_retries = 3
        net.core.somaxconn = 40000
        net.ipv4.tcp_rmem = 4096 8192 16384
        net.ipv4.tcp_wmem = 4096 8192 16384
        net.ipv4.tcp_mem = 65536 98304 131072
        net.core.netdev_max_backlog = 40000
        net.ipv4.tcp_tw_reuse = 1

    If I use ab to hit a webserver directly, I easily get 30k/s connections. If I stop the webservers and use ab to hit HAProxy, then I get 30k/s connections, but obviously that's useless. I've also disabled iptables for now, since I read that nf_conntrack can slow everything down: no change. I've also disabled the irqbalance service. The fact that I can hit each individual device at 30k/s makes me believe the tuning of the servers is OK and that it must be something in the HAProxy config. Here's the config, which I've built from reading tuning articles, etc.: http://pastebin.com/zsCyAtgU

    The server is a dual Xeon E5-2620 (6 cores each) with 32 GB of RAM, running CentOS 6.2 x64. The private and public interfaces are on separate NICs. Anyone have any ideas? Thanks.

  • Disk operations in Windows 7 are slow

    - by Skadlig
    My computer started lagging last Sunday. I tried to reboot it and it failed; booting into safe mode takes around two hours. It mainly freezes on two files: scsiport.sys and classpnp.sys. When it finally has started, all disk operations are really slow. After it has run for a while it gets faster, probably because data has been moved into RAM. It froze on another file before, one associated with Avast, but uninstalling Avast didn't really help. A critical Windows update was installed on Sunday, but rolling back the update didn't help either. I had a guess about the sound card, but disabling the sound card drivers also didn't help. I have an inkling that it might be Intel Rapid Storage Technology acting up, but it doesn't allow me to reinstall it from safe mode, and I haven't been able to log into normal mode for a while. I would appreciate suggestions on how to get into normal mode again and/or what the root cause might be.

  • Should I update the kernel on a Linux machine?

    - by Legate
    As I understand it, updating to a new kernel (with the normal linux-image... package, not by rolling my own) requires a server restart. However, one of our servers (Ubuntu 10.04) is running several extensive screen sessions. Restarting kills those, which is always a major hassle for their owners (mostly because of lost session histories). What should I do? I see several possibilities:

    - Not doing anything, that is, updating only non-kernel packages (perhaps using apt pinning?)
    - Updating the kernel, but not restarting. (Is that smart? I seem to remember there might be some problems with loading kernel modules.)
    - Updating the kernel and restarting. Is there perhaps some way to preserve the screen sessions?

    I guess it ultimately boils down to this question: how important is it to update the kernel? I posted this question here instead of askubuntu.com as I think this is not an Ubuntu-specific issue, though this server is running Ubuntu.

  • Windows 7 Taskbar Icons Randomly Disappearing

    - by Ryker
    This is a problem that I'm totally stumped on: taskbar icons that are pinned, as well as open program thumbnails, keep disappearing. This is totally random, with no set pattern. I've tried updating the drivers for the video card and rolling back to old drivers; nothing seems to work. I've also tried the usual: unchecking and rechecking "hide the taskbar", changing the Start menu, keeping commonly used programs in the Start menu, etc. This is using an add-in card, which is a necessity. Here are the specs of the machine: a new Dell OptiPlex 390, i3-2100 processor, 3 GB RAM, Windows 7 32-bit. The add-in card is an Nvidia 8400 GS. I've tried different versions of the 8400 GS, from MSI and from other manufacturers, and regardless, it keeps happening. After the icons disappear, I can log out and log back in and they return, but then disappear again. I can reinstall the drivers for the video card and that fixes it for a few days, and then they disappear again. There isn't any set pattern of days that it happens, software being used, etc. Another user in the office had the same problem with the exact same machine, but updating the video card drivers fixed the issue. Anyone?

  • CSS window height problem with dynamically loaded CSS

    - by Michael Mao
    Hi all:

    Please go here and use username "admin" and password "endlesscomic" (without the wrapping quotes) to see the web app demo. Basically what I am trying to do is incrementally integrate my work into this web app, say, nightly, for the client to check the progress. He would also like to see, at the very beginning, a mockup of the page layout. I am trying to use the 960 Grid System to achieve this.

    So far, so good, except for one issue: when mockup.css is loaded dynamically by jQuery, it "extends" the window towards the bottom, something I do not want to have. As an inexperienced web developer, I don't know which part is wrong. Below is my js:

        /* master.js */
        $(document).ready(function() {
            $('#addDebugCss').click(function() {
                alertMessage('adding debug css...');
                addCssToHead('./css/debug.css');
                $('.grid-insider').css('opacity', '0.5'); // reset mockup background transparency
            });
            $('#addMockupCss').click(function() {
                alertMessage('adding mockup css...');
                addCssToHead('./css/mockup.css');
                $('.grid-insider').css('opacity', '1'); // set semi-transparent background for mockup
            });
            $('#resetCss').click(function() {
                alertMessage('rolling back to normal');
                rollbackCss(new Array("./css/mockup.css", "./css/debug.css"));
            });
        });

        function alertMessage(msg) // TODO find a better modal prompt
        {
            alert(msg);
        }

        function addCssToHead(path_to_css)
        {
            $('<link rel="stylesheet" type="text/css" href="' + path_to_css + '" />').appendTo("head");
        }

        function rollbackCss(set)
        {
            for (var i in set) {
                $('link[href="' + set[i] + '"]').remove();
            }
        }

    Should something be added to the external mockup.css? Or something changed in my master.js? Thanks for any hints/suggestions in advance.

  • LINQ into SortedList

    - by Chris Simmons
    I'm a complete LINQ newbie, so I don't know if my LINQ is incorrect for what I need to do or if my expectations of performance are too high. I've got a SortedList of objects, keyed by int; SortedList as opposed to SortedDictionary because I'll be populating the collection with pre-sorted data. My task is to find either the exact key or, if there is no exact key, the one with the next higher value. If the search is too high for the list (e.g. highest key is 100, but search for 105), return null.

        // The structure of this class is unimportant. Just using
        // it as an illustration.
        public class CX
        {
            public int KEY;
            public DateTime DT;
        }

        static CX getItem(int i, SortedList<int, CX> list)
        {
            var items = (from kv in list
                         where kv.Key >= i
                         select kv.Key);
            if (items.Any())
            {
                return list[items.Min()];
            }
            return null;
        }

    Given a list of 50,000 records, calling getItem 500 times takes about a second and a half. Calling it 50,000 times takes over 2 minutes. This performance seems very poor. Is my LINQ bad? Am I expecting too much? Should I be rolling my own binary search function?
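
    For what it's worth, the query above scans the whole list on every call, so 50,000 calls do on the order of 50,000 × 50,000 key comparisons; the pre-sorted keys support a binary search instead. The idea, sketched in Python with the standard bisect module for brevity (in C#, the same walk works against the SortedList's indexed Keys collection):

        from bisect import bisect_left

        def get_item(i, keys, values):
            """Return the value for the smallest key >= i, or None if i
            is greater than every key. keys must be sorted ascending."""
            pos = bisect_left(keys, i)        # first index with keys[pos] >= i
            return values[pos] if pos < len(keys) else None

        keys = [10, 20, 100]
        values = ["a", "b", "c"]
        assert get_item(20, keys, values) == "b"    # exact match
        assert get_item(15, keys, values) == "b"    # next higher key
        assert get_item(105, keys, values) is None  # off the end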

  • Mercurial hg Subrepository Problem - "abort: unknown revision"

    - by Tex
    Note: I asked this yesterday over at kiln.stackexchange.com, but haven't gotten an answer, and it's holding up my work, so I figured I'd give it a shot here.

    My main Mercurial repository has a bunch of subrepositories in it. During initial setup, I made a mistake in my .hgsub; namely, I pointed two subrepositories at the same directory.

    What I should have had:

        sites/1=sites/1
        sites/2=sites/2
        sites/3=sites/3

    What I actually had:

        sites/1=sites/1
        sites/2=sites/2
        sites/2=sites/3

    A stupid copy/paste error. I committed the incorrect .hgsub, not realizing my mistake. A few revisions later, while adding some new subrepositories to .hgsub, I noticed the mistake and fixed it inside .hgsub. I committed and kept rolling along. I've committed a reasonable amount of work that I'd prefer not to redo since I 'fixed' the mistake in .hgsub.

    Now we come to the actual problem: I've made some changes inside the subrepository sites/3, and when I try to commit the main repository, I get the following error:

        abort: unknown revision 'LongGUIDLookingString'

    I found this discussion, which seems to address the same problem I'm having, but I can't quite work out how bos fixed it. What do I need to do in order to fix this?
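
    As an aside, the specific mistake above (two entries claiming the same mount point) is mechanical enough to lint for before committing; a tiny hypothetical checker, sketched in Python:

        from collections import Counter

        def check_hgsub(path=".hgsub"):
            """Flag duplicate mount points (left side) or sources (right side)."""
            pairs = []
            with open(path) as f:
                for line in f:
                    line = line.strip()
                    if not line or line.startswith("#") or "=" not in line:
                        continue
                    mount, source = (s.strip() for s in line.split("=", 1))
                    pairs.append((mount, source))
            for side, label in ((0, "mount point"), (1, "source")):
                for value, n in Counter(p[side] for p in pairs).items():
                    if n > 1:
                        print("duplicate %s: %s (%d entries)" % (label, value, n))

        check_hgsub()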

  • Prepared transactions with Postgres 8.4.3 on CentOS

    - by peter
    I have set 'max_prepared_transactions' to 20 in the local postgresql.conf, and yet the transaction fails with the following error trace (but only on Linux). Since the same code works seamlessly on Windows, I am wondering if this isn't an issue of permissions. What would be the solution? Thanks, Peter

        372300 [Atomikos:7] WARN atomikos - XA resource 'XADBMS': rollback for XID
        '3137332E3230332E3132362E3139302E746D30303030313030303037:3137332E3230332E3132362E3139302E746D31'
        raised -3: the XA resource detected an internal error
        org.postgresql.xa.PGXAException: Error rolling back prepared transaction
            at org.postgresql.xa.PGXAConnection.rollback(PGXAConnection.java:357)
            at com.atomikos.datasource.xa.XAResourceTransaction.rollback(XAResourceTransaction.java:873)
            at com.atomikos.icatch.imp.RollbackMessage.send(RollbackMessage.java:90)
            at com.atomikos.icatch.imp.PropagationMessage.submit(PropagationMessage.java:86)
            at com.atomikos.icatch.imp.Propagator$PropagatorThread.run(Propagator.java:62)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:651)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:676)
            at java.lang.Thread.run(Thread.java:595)
        Caused by: org.postgresql.util.PSQLException: ERROR: prepared transaction with identifier
        "1096044365_MTczLjIwMy4xMjYuMTkwLnRtMDAwMDEwMDAwNw==_MTczLjIwMy4xMjYuMTkwLnRtMQ==" does not exist
            at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2062)
            at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1795)
            at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:479)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:353)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:299)
            at org.postgresql.xa.PGXAConnection.rollback(PGXAConnection.java:347)
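
    One detail worth checking: max_prepared_transactions only takes effect after a full PostgreSQL restart (a reload is not enough), and on Linux the edited file must be the postgresql.conf the running cluster actually reads. A quick way to probe whether the server now accepts prepared transactions, sketched with Python's psycopg2 (the connection parameters are placeholders):

        import psycopg2

        conn = psycopg2.connect("dbname=test user=postgres host=127.0.0.1")
        xid = conn.xid(1, "probe-gtrid", "probe-bqual")
        conn.tpc_begin(xid)
        cur = conn.cursor()
        cur.execute("SELECT 1")
        conn.tpc_prepare()    # raises if max_prepared_transactions is still 0
        conn.tpc_rollback()   # clean up the prepared transaction we just made
        print("prepared transactions are enabled")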

  • Regression Testing and Deployment Strategy

    - by user279516
    I'd like some advice on a deployment strategy. If a development team creates an extensive framework, and many (20-30) applications consume it, and the business would like application updates at least every 30 days, what is the best deployment strategy?

    The reason I ask is that there seems to be a lot of waste (and risk) in an agile approach of deploying changes monthly if 90% of the applications don't change. What I mean by this is that the framework can change during the month, and so can a few applications. Because the framework changed, all applications should be regression-tested. If, say, 10 of the applications don't change at all during the year, then those 10 applications are regression-tested EVERY MONTH, even though they had no feature changes or hot fixes. They have to be tested simply because the business is rolling out updates every month.

    And consider the risk involved: if deploying a mission-critical application takes a few weeks and multiple departments to test, is it realistic to expect to constantly regression-test this application? One option is to make any framework updates backward-compatible. While this would mean that applications don't need to change their code, they would still need to be tested because the underlying framework changed. And the risk involved is great; a constantly changing framework (and deploying this framework) means the mission-critical app can never just enjoy the same code base for a long time. The applications share the same database, hence the need for the constant testing. I'm aware of TDD and automated tests, but those don't exist at the moment. Any advice?

  • VS 2008 Service Pack 1 problem

    - by Compiler
    Hi, my OS is XP with Service Pack 3 installed. I can't install VS 2008 Service Pack 1; in the log file I see that 'Visual C++ 2008 SP1 Design-Time Components for x86 - KB947888' can't be installed. The error code is 1603. The last part of the installation log is here:

        Returning IDOK. INSTALLMESSAGE_ERROR [Error 1335. The cabinet file 'patch.cab' required for this
        installation is corrupt and cannot be used. This could indicate a network error, an error reading
        from the CD-ROM, or a problem with this package.]
        [1/12/2009, 10:14:50] (IronSpigot::MsiExternalUiHandler::UiHandler) Returning IDOK.
        INSTALLMESSAGE_ACTIONSTART [Action 10:14:50: Rollback. Rolling back action:]
        [1/12/2009, 10:17:29] (IronSpigot::MspInstallerT<class ATL::CStringT<unsigned short,class
        ATL::StrTraitATL<unsigned short,class ATL::ChTraitsCRT<unsigned short ::PerformMsiOperation)
        Patch (C:\DOCUME~1\Cem\LOCALS~1\Temp\Microsoft Visual Studio 2008 SP1\VS90sp1-KB945140-X86-ENU.msp;
        C:\DOCUME~1\Cem\LOCALS~1\Temp\Microsoft Visual Studio 2008 SP1\VC90sp1-KB947888-x86-enu.msp)
        install failed on product (Microsoft Visual Studio 2008 Professional Edition - ENU). Msi Log:
        Microsoft Visual Studio 2008 SP1_20090112_100005671-Microsoft Visual Studio 2008 Professional
        Edition - ENU-MSP0.txt
        [1/12/2009, 10:17:29] (IronSpigot::MspInstallerT<class ATL::CStringT<unsigned short,class
        ATL::StrTraitATL<unsigned short,class ATL::ChTraitsCRT<unsigned short ::PerformMsiOperation)
        MsiApplyMultiplePatches returned 0x643

  • In Mercurial, what are the exact steps that Peter or I have to do so that he gets back the rolled-back version?

    - by Jian Lin
    The short question is: if I hg rollback, how does Peter get my rolled-back version if he cloned from me? What are the exact steps he or I have to do or type?

    This is related to http://stackoverflow.com/questions/3034793/in-mercurial-when-peter-hg-clone-me-and-i-commit-and-he-pull-and-update-he-g

    The details: after the following steps, Mary has 7 and Peter has 11. My repository is 7. What are the exact steps Peter or I have to do or type SO THAT PETER GETS 7 back?

        F:\>mkdir hgme
        F:\>cd hgme
        F:\hgme>hg init
        F:\hgme>echo the code is 7 > code.txt
        F:\hgme>hg add code.txt
        F:\hgme>hg commit -m "this is version 1"
        F:\hgme>cd ..
        F:\>hg clone hgme hgpeter
        updating to branch default
        1 files updated, 0 files merged, 0 files removed, 0 files unresolved
        F:\>cd hgpeter
        F:\hgpeter>type code.txt
        the code is 7
        F:\hgpeter>cd ..
        F:\>cd hgme
        F:\hgme>notepad code.txt      [now I change 7 to 11]
        F:\hgme>hg commit -m "this is version 2"
        F:\hgme>cd ..
        F:\>cd hgpeter
        F:\hgpeter>hg pull
        pulling from f:\hgme
        searching for changes
        adding changesets
        adding manifests
        adding file changes
        added 1 changesets with 1 changes to 1 files
        (run 'hg update' to get a working copy)
        F:\hgpeter>hg update
        1 files updated, 0 files merged, 0 files removed, 0 files unresolved
        F:\hgpeter>type code.txt
        the code is 11
        F:\hgpeter>cd ..
        F:\>cd hgme
        F:\hgme>hg rollback
        rolling back last transaction
        F:\hgme>cd ..
        F:\>hg clone hgme hgmary
        updating to branch default
        1 files updated, 0 files merged, 0 files removed, 0 files unresolved
        F:\>cd hgmary
        F:\hgmary>type code.txt
        the code is 7
        F:\hgmary>cd ..
        F:\>cd hgpeter
        F:\hgpeter>hg pull
        pulling from f:\hgme
        searching for changes
        no changes found
        F:\hgpeter>hg update
        0 files updated, 0 files merged, 0 files removed, 0 files unresolved
        F:\hgpeter>type code.txt
        the code is 11
        F:\hgpeter>

  • Windows system monitoring and profiling

    - by Aris
    I have several dozen 64-bit Windows 2003 servers in a high-performance environment with very bursty system utilization. I am looking for a tool (or tools) to monitor and analyze system performance (e.g. CPU utilization, bandwidth, etc.). The tool can either query the servers from a central location (SNMP) or require installation of a component on each server. It should poll on a 1-second interval. It should be able to generate pretty graphs which show trends over time. As a nice-to-have, it should be able to send out emails or IMs when certain thresholds are exceeded.

    The tools I have investigated so far, including SolarWinds and PRTG, are not designed to poll this frequently; they seem designed for a ~30-second interval. SolarWinds wouldn't go lower than 3 seconds, and PRTG chokes at 1 second. Both default to 1 minute. They also seem more focused on outage monitoring and reporting than on metric collection. Given the bursty nature of our applications, such infrequent polling would give a very inaccurate picture of performance.

    I am considering rolling my own solution using perfmon. This would be a lot of work, and seems like reinventing the wheel. Are there any tools out there that meet my needs?
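
    For the roll-your-own route, the 1-second collection loop itself is trivial; a minimal sketch in Python using the third-party psutil package (a perfmon-based collector would read the Windows PDH counters instead, and the threshold here is an arbitrary example):

        import time
        import psutil

        THRESHOLD = 90.0  # percent CPU, arbitrary example value

        while True:
            cpu = psutil.cpu_percent(interval=1)  # samples over a 1 s window
            print(time.strftime("%H:%M:%S"), "cpu=%.0f%%" % cpu)
            if cpu > THRESHOLD:
                print("ALERT: CPU above threshold")  # hook email/IM here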

  • Machine Learning Algorithm for Predicting Order of Events?

    - by user213060
    Simple machine learning question; there are probably numerous ways to solve this. There is an infinite stream of 4 possible events: 'event_1', 'event_2', 'event_3', 'event_4'. The events do not come in completely random order; we will assume that there are some complex patterns to the order that most events come in, and the rest of the events are just random. We do not know the patterns ahead of time, though. After each event is received, I want to predict what the next event will be, based on the order that events have come in in the past. The predictor will then be told what the next event actually was:

        Predictor = new_predictor()
        prev_event = False
        while True:
            event = get_event()
            if prev_event is not False:
                Predictor.last_event_was(prev_event)
            predicted_event = Predictor.predict_next_event(event)
            prev_event = event

    The question arises of how long a history the predictor should maintain, since maintaining infinite history will not be possible. I'll leave this up to you to answer; the answer can't be infinite, though, for practicality. So I believe that the predictions will have to be done with some kind of rolling history. Adding a new event and expiring an old event should therefore be rather efficient, and should not require rebuilding the entire predictor model, for example.

    Specific code, instead of research papers, would add immense value to your responses. Python or C libraries are nice, but anything will do. Thanks!

    Update: what if more than one event can happen simultaneously on each round? Does that change the solution?
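
    Since the question explicitly asks for code, here is one minimal sketch in Python of the rolling-history idea: an order-k frequency table over a bounded window, so expiring an old observation is a constant-time decrement rather than a model rebuild. (The interface is adapted slightly; prediction is made from the stored context.)

        from collections import Counter, deque

        class Predictor:
            def __init__(self, k=3, history=1000):
                self.context = deque(maxlen=k)        # the last k events seen
                self.window = deque(maxlen=history)   # rolling (context, event) pairs
                self.counts = Counter()               # transition frequencies

            def last_event_was(self, event):
                pair = (tuple(self.context), event)
                if len(self.window) == self.window.maxlen:
                    self.counts[self.window[0]] -= 1  # expire the oldest pair
                self.window.append(pair)
                self.counts[pair] += 1
                self.context.append(event)

            def predict_next_event(self):
                ctx = tuple(self.context)
                seen = {e: n for (c, e), n in self.counts.items()
                        if c == ctx and n > 0}
                return max(seen, key=seen.get) if seen else None

        p = Predictor()
        for e in ["event_1", "event_2", "event_1", "event_2", "event_1"]:
            p.last_event_was(e)
        print(p.predict_next_event())  # "event_2", continuing the alternation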

  • Looking for Recommendations on Version Control System with Intuitive (.NET) API?

    - by John L.
    I'm working on a project which generates (composite) Microsoft Word documents which are comprised of one or more child documents. There are tens of thousands of permutations of the composite documents, far too many for users to easily manage. Users will need to view/edit the child documents through the app, which hides all of the nasty implementation details.

    A requirement of the system is that the child documents must be version controlled. That is what has been tripping me up. I've been torn between using an off-the-shelf solution or rolling my own. At a minimum, the system needs to support get latest, get specific version, add new, rename, and possibly delete. I've whiteboarded it enough to realize it won't be a trivial task to create my own. As far as commercial systems go, I have VSS and TFS at my disposal. I've played with the TFS API some, but it isn't as intuitive or well documented as I had hoped. I'm not averse to an open-source solution (e.g. SVN), but I have less familiarity with them.

    Which approach or tool would you recommend? Why? Do you have any links to API documentation you would recommend?

    Environment: C#, VS2008, SQL Server 2005/2008, low volume (a few hundred operations per day)
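
    For scale, the operation list above maps almost one-to-one onto Subversion's command line, which is one way to sidestep the unintuitive-API problem entirely: shell out. A hypothetical wrapper, sketched in Python for brevity (the same subprocess approach translates directly to C#'s Process class; class and method names are made up, error handling omitted):

        import subprocess

        class DocStore:
            """Thin wrapper over an existing svn working copy at wc_path."""

            def __init__(self, wc_path):
                self.wc = wc_path

            def _svn(self, *args):
                subprocess.run(["svn"] + list(args), cwd=self.wc, check=True)

            def get_latest(self):
                self._svn("update")

            def get_version(self, doc, revision, dest):
                # Export a historical revision of one child document.
                self._svn("export", "-r", str(revision), doc, dest)

            def add(self, doc, message):
                self._svn("add", doc)
                self._svn("commit", "-m", message, doc)

            def rename(self, old, new, message):
                self._svn("move", old, new)
                self._svn("commit", "-m", message)

            def delete(self, doc, message):
                self._svn("delete", doc)
                self._svn("commit", "-m", message, doc)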

  • New Perl user: using a hash of arrays

    - by Zach H
    I'm doing a little data-mining project where a Perl script grabs info from a SQL database and parses it. The data consists of several timestamps; I want to find how many of a particular type of timestamp exist on any particular day. Unfortunately, this is my first Perl script, and the nature of Perl when it comes to hashes and arrays is confusing me quite a bit.

    Code segment:

        my %values = (); # A hash of the total values of each type of data of each day.
        # The key is the day, and each key stores an array of each of the values I need.
        my @proposal;
        # [drafted timestamp(0), submitted timestamp(1), attorney approved timestamp(2),
        #  organization approved timestamp(3), other approval timestamp(4), approved timestamp(5)]
        while (@proposal = $sqlresults->fetchrow_array()) {
            # TODO: check to make sure proposal is valid
            # Increment the number of timestamps of each type on each particular date
            my $i;
            for ($i = 0; $i <= 5; $i++)
                $values{$proposal[$i]}[$i]++;
            # Update rolling average of daily
            # TODO: To check total load, increment total load on all dates between
            # attorney approve date and accepted date
            for ($i = $proposal[1]; $i <= $proposal[2]; $i++)
                $values{$i}[6]++;
        }

    I keep getting syntax errors inside the for loops incrementing values. Also, considering that I'm using strict and warnings, will Perl auto-create arrays of the right values when I'm accessing them inside the hash, or will I get out-of-bounds errors everywhere? Thanks for any help, Zach

  • CSS Parser - Insert mtimes

    - by brad
    What command-line tool can I use to automatically insert mtimes into the URLs in my CSS files, for the purpose of breaking the cache?

        /* before */
        .example { background: url(example.jpg); }

        /* after */
        .example { background: url(example.jpg?1271298451); }

    I would also like this tool to emit the latest mtime as the CSS file's mtime. (If the CSS file is still cached, then the new URLs will not get to the client.) In searching the web, I have found very few tools that can do this. I am even considering rolling my own, but have found very little in the way of actively maintained CSS parsers. A candidate should be:

    - fast (I don't want to wait 30 seconds on deployment)
    - command-line accessible (something like "cat foo.css bar.css | cssmtime out.css")

    What I've found so far:

    - YUI Compressor - initially I thought I would extend it to do this, but found that it is implemented as a bunch of regexes, not a parser
    - CSSTidy - last release was in 2007 and development has been suspended, but it does have an option for inserting mtimes (it is also written in PHP, something I have no experience with)
    - cssutils, a Python SAC implementation - seems to be actively maintained, but also seems like overkill for my needs; written in Python, which I do have experience with
    - CSSPool, a Ruby SAC implementation - I don't know much Ruby, but would like to learn
    - other SAC implementations - there are several Java implementations and a C implementation, neither of which I know much about

    What's your experience? Have you used any of these libraries? Was the experience positive? Would you recommend I go with them for my purposes?
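
    On the rolling-your-own option: a minimal sketch of the transformation in Python (regex-based, so it shares the weakness criticized in the YUI approach above; treat it as a starting point, not a parser). Paths are resolved relative to the CSS file, and remote or missing targets are left untouched. Propagating the newest mtime back onto the output file (the second requirement) would be one extra os.utime call.

        import os
        import re
        import sys

        def stamp(css_path):
            """Append each local url() target's mtime as a query string."""
            base = os.path.dirname(os.path.abspath(css_path))
            with open(css_path) as f:
                css = f.read()

            def repl(match):
                url = match.group(1).strip("'\"")
                target = os.path.join(base, url)
                # Leave absolute/remote URLs and missing files untouched.
                if "//" in url or url.startswith("/") or not os.path.exists(target):
                    return match.group(0)
                return "url(%s?%d)" % (url, int(os.path.getmtime(target)))

            return re.sub(r"url\(([^)]+)\)", repl, css)

        if __name__ == "__main__":
            sys.stdout.write(stamp(sys.argv[1]))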
