Search Results

Search found 560 results on 23 pages for 'rolling'.

Page 17 of 23

  • central apache log analysis of many hosts

    - by Jason Antman
    We have 30+ apache httpd servers, and are looking to perform analysis on the logs both for historical trending and near "real time" monitoring/alerting. I'm mainly interested in things like error rates (4xx/5xx), response time, overall request rate, etc., but it would also be very useful to pull out more compute-intensive statistics like unique client IPs and user agents per unit of time.

    I'm leaning towards building this as a centralized collector/server/storage, and am also considering the possibility of storing non-apache logs (i.e. general syslog, firewall logs, etc.) in the same system. Obviously a large part of this will probably have to be custom (at least the connection between pieces and the parsing/analysis we do), but I haven't been able to find much information on people who have done stuff like this, at least at shops smaller than Google/Facebook/etc. who can throw their log data into a hundred-node compute cluster and run Map/Reduce on it. The main things I'm looking for are:

    - All open source
    - Some way of collecting logs from apache machines that isn't too resource-intensive, and transports them relatively quickly over the network
    - Some way of storing them (NoSQL? key-value store?) on the backend, for a given amount of time (and then rolling them up into historical averages)
    - In the middle of this, a way of graphing in near-real-time (probably also with some statistical analysis on it) and hopefully alerting off of those graphs

    Any suggestions/pointers/ideas, to either "products"/projects or descriptions of how other people do this, would be greatly helpful. Unfortunately, we're not exactly a new-age-y devops shop: lots of old stuff, homogeneous infrastructure, and strained boxes.
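
    As a rough sketch of the parsing/analysis piece: assuming the standard Apache "combined" log format and a hypothetical local access.log path, a minimal Python pass that buckets request and 4xx/5xx counts per minute could look like the following. It is a sketch of the idea, not a recommendation of any particular stack.

        import re
        from collections import Counter, defaultdict

        # Matches the standard Apache common/combined log prefix.
        LINE_RE = re.compile(
            r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)" '
            r'(?P<status>\d{3}) (?P<size>\S+)'
        )

        per_minute = defaultdict(Counter)

        with open("access.log") as fh:          # assumed path
            for line in fh:
                m = LINE_RE.match(line)
                if not m:
                    continue
                minute = m.group("ts")[:17]      # e.g. "10/Oct/2000:13:55"
                status_class = m.group("status")[0] + "xx"
                per_minute[minute][status_class] += 1
                per_minute[minute]["total"] += 1

        for minute, counts in sorted(per_minute.items()):
            errors = counts["4xx"] + counts["5xx"]
            print(minute, counts["total"], "requests,", errors, "errors")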

    Read the article

  • Why are all of my ZFS snapshot directories empty?

    - by growse
    I'm running an Oracle 11 box as a ZFS storage appliance, and I'm taking regular snapshots of the ZFS filesystems, via cron. In the past, I know that if I wanted to grab a particular file from a snapshot, a read-only copy was kept in .zfs/snapshot/{name}/ and I could just navigate there and pull the file out. This is documented on Oracle's website. However, I went to do this the other day, and noticed that the ZFS directories within the snapshot directories are all empty. zfs list -t snapshot correctly shows the list of snapshots that should be present, and .zfs/snapshots correctly contains a directory for each snapshot, and in each snapshot there is a directory present for each ZFS filesystem. However, these directories appear to be empty. I just tested a restore by touching a file in a little-used share and rolling back to the latest hourly snapshot, and this appears to have worked fine. So the rollback functionality is there. Did Oracle change how snapshots are done? Or is something seriously wrong here?

    Read the article

  • MySQL Tables Missing/Corrupt After Recreation

    - by Synetech inc.
    Hi, Yesterday I dumped my MySQL databases to an SQL file and renamed the ibdata1 file. I then recreated it and imported the SQL file and moved the new ibdata1 file to my MySQL data directory, deleting the old one. I’ve done it before without issue, however this time something is not right. When I examine the (personal, not MySQL config) databases, they are all there, but they are empty… sort of. The data directory still has the .ibd files with the correct content in them and I can view the table list in the databases, but not the tables themselves. (I have file-per-table enabled, and am using InnoDB as default for everything.) For example with the urls database and its urls table, I can successfully open mysql.exe or phpMyAdmin and use urls;. I can even show tables; to see the expected table, but then when I try to describe urls; or select * from urls;, it complains that the table does not exist (even though it just listed it). (The MySQL Administrator lists the databases, but does not even list the tables, it indicates that the dbs are completely empty.) The problem now is that I have already deleted the SQL file (and cannot recover it even after scouring my hard-drive). So I am trying to figure out a way to repair these databases/tables. I can’t use the table repair function since it complains that the table does not exist, and I can’t dump them because again, it complains that the tables don’t exist. Like I’ve said, the data itself is still present in the .ibd files and the table names are present. I just need a way to get MySQL to recognize that the tables exist in the databases (I can find the column names of the tables in question in the ibdata1 file using a hex-editor). Any idea how I can repair this type of corruption? I don’t mind rolling up my sleeves, digging in, and taking a bunch of steps to fix it. Thanks a lot.

    Read the article

  • Tuning Linux + HAProxy

    - by react
    I'm currently rolling out HAProxy on CentOS 6 which will send requests to some Apache HTTPD servers, and I'm having issues with performance. I've spent the last couple of days googling and still can't seem to get past 10k/sec connections consistently when benchmarking (sometimes I do get 30k/sec though). I've pinned the IRQs of the TX/RX queues for both the internal and external NICs to separate CPU cores and made sure HAProxy is pinned to its own core. I've also made the following adjustments to sysctl.conf:

        # Max open file descriptors
        fs.file-max = 331287
        # TCP Tuning
        net.ipv4.tcp_tw_reuse = 1
        net.ipv4.ip_local_port_range = 1024 65023
        net.ipv4.tcp_max_syn_backlog = 10240
        net.ipv4.tcp_max_tw_buckets = 400000
        net.ipv4.tcp_max_orphans = 60000
        net.ipv4.tcp_synack_retries = 3
        net.core.somaxconn = 40000
        net.ipv4.tcp_rmem = 4096 8192 16384
        net.ipv4.tcp_wmem = 4096 8192 16384
        net.ipv4.tcp_mem = 65536 98304 131072
        net.core.netdev_max_backlog = 40000
        net.ipv4.tcp_tw_reuse = 1

    If I use AB to hit a webserver directly I easily get 30k/s connections. If I stop the webservers and use AB to hit HAProxy then I get 30k/s connections, but obviously that's useless. I've also disabled iptables for now since I read that nf_conntrack can slow everything down; no change. I've also disabled the irqbalance service. The fact that I can hit each individual device with 30k/s makes me believe the tuning of the servers is OK and that it must be some HAProxy config? Here's the config which I've built from reading tuning articles, etc.: http://pastebin.com/zsCyAtgU The server is a dual Xeon CPU E5-2620 (6 cores) with 32GB of RAM, running CentOS 6.2 x64. The private and public interfaces are on separate NICs. Anyone have any ideas? Thanks.

    Read the article

  • Disk operations in windows 7 are slow

    - by Skadlig
    My computer started lagging last Sunday. I tried to reboot it and it failed. Trying to boot into failsafe mode takes around two hours; it mainly freezes on two files: scsiport.sys and classpnp.sys. When it has finally started, all disk operations are really slow. After it has run for a while it goes faster, probably because data has been moved into RAM instead. It froze on another file before, one associated with Avast, but uninstalling Avast didn't really help. A critical Windows update was installed on Sunday, but rolling back the update didn't help. I had a guess about the sound card, but disabling the sound card drivers also didn't help. I have an inkling that it might be Intel Rapid Storage Technology acting up, but it doesn't allow me to reinstall it from failsafe mode, and I haven't been able to log into normal mode for a while. I would appreciate suggestions on how to get into normal mode again and/or what the root cause might be.

    Read the article

  • Should I update the kernel on a Linux machine?

    - by Legate
    As I understand it, updating to a new kernel (with the normal linux-image... package, not by rolling my own) requires a server restart. However, one of our servers (Ubuntu 10.04) is running several extensive screen sessions. Restarting kills those, which is always a major hassle for their owners (mostly because of lost session histories). What should I do? I see several possibilities:

    - Not doing anything, that is, update only non-kernel packages (perhaps use apt-pinning?)
    - Update the kernel, but not restart. (Is that smart? I seem to remember there might be some problems with loading kernel modules.)
    - Updating the kernel and restarting. Is there perhaps some way to preserve the screen sessions?

    I guess it ultimately boils down to this question: how important is it to update the kernel? I posted this question here instead of askubuntu.com as I think this is not an Ubuntu-specific issue, though this server is running Ubuntu.

    Read the article

  • Windows 7 Taskbar Icons Randomly Disappearing

    - by Ryker
    This is a problem I'm totally stumped on. Taskbar icons that are pinned, as well as open program thumbnails, keep disappearing. This is totally random, with no set pattern. I've tried updating drivers for the video card and rolling back to old drivers; nothing seems to work. I've also tried the usual: unchecking "hide the taskbar" and rechecking it, changing the start menu, keeping commonly used programs in the start menu, etc. This is using an add-in card, which is a necessity. Here are the specs on the machine: new Dell Optiplex 390, i3 2100 proc, 3GB RAM, Windows 7 32-bit. The add-in card is an Nvidia 8400 GS model. I've tried different versions of the 8400 GS, from MSI or from other manufacturers, and regardless it keeps happening. After the icons disappear, I can log out and log back in and they return, but then disappear again. I can reinstall the drivers for the video card and that fixes it for a few days, and then they disappear again. There isn't any set pattern of days that it happens, software being used, etc. Another user in the office had the same problem with the exact same machine, but updating the video card drivers fixed the issue. Anyone?

    Read the article

  • CSS window height problem with dynamically loaded CSS

    - by Michael Mao
    Hi all: Please go here and use username "admin" and password "endlesscomic" (without the wrapper quotes) to see the web app demo. Basically what I am trying to do is to incrementally integrate my work into this web app, say, nightly, for the client to check the progress. Also, he would like to see, at the very beginning, a mockup of the page layout. I am trying to use the 960 grid system to achieve this. So far, so good, except for one issue: when the "mockup.css" is loaded dynamically by jQuery, it "extends" the window to the bottom, something I do not want to have. As an inexperienced web developer, I don't know which part is wrong. Below is my js:

        /* master.js */
        $(document).ready(function() {
            $('#addDebugCss').click(function() {
                alertMessage('adding debug css...');
                addCssToHead('./css/debug.css');
                $('.grid-insider').css('opacity','0.5'); // reset mockup background transparency
            });
            $('#addMockupCss').click(function() {
                alertMessage('adding mockup css...');
                addCssToHead('./css/mockup.css');
                $('.grid-insider').css('opacity','1'); // set semi-background transparency for mockup
            });
            $('#resetCss').click(function() {
                alertMessage('rolling back to normal');
                rollbackCss(new Array("./css/mockup.css", "./css/debug.css"));
            });
        });

        function alertMessage(msg) // TODO find a better modal prompt
        {
            alert(msg);
        }

        function addCssToHead(path_to_css)
        {
            $('<link rel="stylesheet" type="text/css" href="' + path_to_css + '" />').appendTo("head");
        }

        function rollbackCss(set)
        {
            for (var i in set) {
                $('link[href="' + set[i] + '"]').remove();
            }
        }

    Should something be added to the external mockup.css? Or should something change in my master.js? Thanks for any hints/suggestions in advance.

    Read the article

  • LINQ into SortedList

    - by Chris Simmons
    I'm a complete LINQ newbie, so I don't know if my LINQ is incorrect for what I need to do or if my expectations of performance are too high. I've got a SortedList of objects, keyed by int; SortedList as opposed to SortedDictionary because I'll be populating the collection with pre-sorted data. My task is to find either the exact key or, if there is no exact key, the one with the next higher value. If the search is too high for the list (e.g. highest key is 100, but search for 105), return null.

        // The structure of this class is unimportant. Just using
        // it as an illustration.
        public class CX
        {
            public int KEY;
            public DateTime DT;
        }

        static CX getItem(int i, SortedList<int, CX> list)
        {
            var items = (from kv in list
                         where kv.Key >= i
                         select kv.Key);
            if (items.Any())
            {
                return list[items.Min()];
            }
            return null;
        }

    Given a list of 50,000 records, calling getItem 500 times takes about a second and a half. Calling it 50,000 times takes over 2 minutes. This performance seems very poor. Is my LINQ bad? Am I expecting too much? Should I be rolling my own binary search function?
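
    On the binary-search angle: the "exact key or next higher" lookup is a textbook lower-bound search over the already-sorted keys, O(log n) per call instead of the O(n) scan the LINQ query performs on every call. A rough sketch of that idea in Python, using the standard bisect module; a C# version against SortedList.Keys would follow the same shape, so this is only an illustration of the algorithm, not the poster's code.

        import bisect

        def get_item(i, keys, values):
            """Return the value for the smallest key >= i, or None if i is
            larger than every key. keys must be sorted ascending, with
            values[j] corresponding to keys[j]."""
            pos = bisect.bisect_left(keys, i)   # first index with keys[pos] >= i
            if pos == len(keys):
                return None                     # search is beyond the highest key
            return values[pos]

        # Usage sketch:
        keys = [10, 20, 100]
        values = ["a", "b", "c"]
        print(get_item(20, keys, values))   # -> "b"  (exact match)
        print(get_item(25, keys, values))   # -> "c"  (next higher key)
        print(get_item(105, keys, values))  # -> None (search too high)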

    Read the article

  • Mercurial hg Subrepository Problem - "abort: unknown revision"

    - by Tex
    Note: I asked this yesterday over at kiln.stackexchange.com, but haven't gotten an answer, and it's holding up my work. So I figured I'd give it a shot here. My main Mercurial repository has a bunch of subrepositories in it. During initial setup, I made a mistake in my .hgsub. Namely, I pointed two subrepositories to the same directory.

    What I should have had:

        sites/1=sites/1
        sites/2=sites/2
        sites/3=sites/3

    What I actually had:

        sites/1=sites/1
        sites/2=sites/2
        sites/2=sites/3

    Stupid copy/paste error. I committed the incorrect .hgsub, not realizing my error. A few revisions later, while adding some new subrepositories to .hgsub, I noticed the mistake and fixed it inside .hgsub. I committed and kept rolling along. I've committed a reasonable amount of work that I'd prefer not to redo since I 'fixed' the mistake in .hgsub. Now we come to the actual problem: I've made some changes inside the subrepository sites/3, and when I try to commit the main repository, I get the following error:

        abort: unknown revision 'LongGUIDLookingString'

    I found this discussion, which seems to address the same problem I'm having, but I can't quite work out how bos fixed it. What do I need to do in order to fix this?

    Read the article

  • Prepared transactions with Postgres 8.4.3 on CentOS

    - by peter
    I have set 'max_prepared_transactions' to 20 in the local postgres.config, and yet the transaction fails with the following error trace (but only on Linux). Since the same code works seamlessly on Windows, I am wondering if this isn't an issue of permissions. What would be the solution? Thanks, Peter

        372300 [Atomikos:7] WARN atomikos - XA resource 'XADBMS': rollback for XID
        '3137332E3230332E3132362E3139302E746D30303030313030303037:3137332E3230332E3132362E3139302E746D31'
        raised -3: the XA resource detected an internal error
        org.postgresql.xa.PGXAException: Error rolling back prepared transaction
            at org.postgresql.xa.PGXAConnection.rollback(PGXAConnection.java:357)
            at com.atomikos.datasource.xa.XAResourceTransaction.rollback(XAResourceTransaction.java:873)
            at com.atomikos.icatch.imp.RollbackMessage.send(RollbackMessage.java:90)
            at com.atomikos.icatch.imp.PropagationMessage.submit(PropagationMessage.java:86)
            at com.atomikos.icatch.imp.Propagator$PropagatorThread.run(Propagator.java:62)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:651)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:676)
            at java.lang.Thread.run(Thread.java:595)
        Caused by: org.postgresql.util.PSQLException: ERROR: prepared transaction with identifier
        "1096044365_MTczLjIwMy4xMjYuMTkwLnRtMDAwMDEwMDAwNw==_MTczLjIwMy4xMjYuMTkwLnRtMQ==" does not exist
            at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2062)
            at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1795)
            at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:479)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:353)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:299)
            at org.postgresql.xa.PGXAConnection.rollback(PGXAConnection.java:347)

    Read the article

  • Regression Testing and Deployment Strategy

    - by user279516
    I'd like some advice on a deployment strategy. If a development team creates an extensive framework, and many (20-30) applications consume it, and the business would like application updates at least every 30 days, what is the best deployment strategy?

    The reason I ask is that there seems to be a lot of waste (and risk) in using an agile approach of deploying changes monthly if 90% of the applications don't change. What I mean by this is that the framework can change during the month, and so can a few applications. Because the framework changed, all applications should be regression-tested. If, say, 10 of the applications don't change at all during the year, then those 10 applications are regression-tested EVERY MONTH, even though they didn't have any feature changes or hot fixes. They have to be tested simply because the business is rolling out updates every month. And consider the risk involved: if a mission-critical application takes a few weeks, and multiple departments, to test, is it realistic to expect to regression-test it constantly?

    One option is to make any framework updates backward-compatible. While this would mean that applications don't need to change their code, they would still need to be tested because the underlying framework changed. And the risk involved is great; a constantly changing framework (and deploying this framework) means the mission-critical app can never just enjoy the same code base for a long time. These applications share the same database, hence the need for the constant testing. I'm aware of TDD and automated tests, but that doesn't exist at the moment. Any advice?

    Read the article

  • In Mercurial, what is the exact step that Peter or me has to do so that he gets back the rolled back

    - by Jian Lin
    The short question is: if I hg rollback, how does Peter get my rolled-back version if he cloned from me? What are the exact steps he or I have to do or type? This is related to http://stackoverflow.com/questions/3034793/in-mercurial-when-peter-hg-clone-me-and-i-commit-and-he-pull-and-update-he-g

    The details: after the following steps, Mary has 7 and Peter has 11. My repository is 7. What are the exact steps Peter or I have to do or type SO THAT PETER GETS 7 back?

        F:\>mkdir hgme
        F:\>cd hgme
        F:\hgme>hg init
        F:\hgme>echo the code is 7 > code.txt
        F:\hgme>hg add code.txt
        F:\hgme>hg commit -m "this is version 1"
        F:\hgme>cd ..
        F:\>hg clone hgme hgpeter
        updating to branch default
        1 files updated, 0 files merged, 0 files removed, 0 files unresolved
        F:\>cd hgpeter
        F:\hgpeter>type code.txt
        the code is 7
        F:\hgpeter>cd ..
        F:\>cd hgme
        F:\hgme>notepad code.txt    [now I change 7 to 11]
        F:\hgme>hg commit -m "this is version 2"
        F:\hgme>cd ..
        F:\>cd hgpeter
        F:\hgpeter>hg pull
        pulling from f:\hgme
        searching for changes
        adding changesets
        adding manifests
        adding file changes
        added 1 changesets with 1 changes to 1 files
        (run 'hg update' to get a working copy)
        F:\hgpeter>hg update
        1 files updated, 0 files merged, 0 files removed, 0 files unresolved
        F:\hgpeter>type code.txt
        the code is 11
        F:\hgpeter>cd ..
        F:\>cd hgme
        F:\hgme>hg rollback
        rolling back last transaction
        F:\hgme>cd ..
        F:\>hg clone hgme hgmary
        updating to branch default
        1 files updated, 0 files merged, 0 files removed, 0 files unresolved
        F:\>cd hgmary
        F:\hgmary>type code.txt
        the code is 7
        F:\hgmary>cd ..
        F:\>cd hgpeter
        F:\hgpeter>hg pull
        pulling from f:\hgme
        searching for changes
        no changes found
        F:\hgpeter>hg update
        0 files updated, 0 files merged, 0 files removed, 0 files unresolved
        F:\hgpeter>type code.txt
        the code is 11
        F:\hgpeter>

    Read the article

  • VS 2008 Service Pack 1 problem

    - by Compiler
    Hi, my OS is XP with Service Pack 3 installed. I can't install VS2008 Service Pack 1; in the log file I see that 'Visual C++ 2008 SP1 Design-Time Components for x86 - KB947888' can't be installed. The error code is 1603. The last part of the installation log is here:

        Returning IDOK. INSTALLMESSAGE_ERROR [Error 1335. The cabinet file 'patch.cab' required for this installation is corrupt and cannot be used. This could indicate a network error, an error reading from the CD-ROM, or a problem with this package.]
        [1/12/2009, 10:14:50] (IronSpigot::MsiExternalUiHandler::UiHandler) Returning IDOK.
        INSTALLMESSAGE_ACTIONSTART [Action 10:14:50: Rollback. Rolling back action:]
        [1/12/2009, 10:17:29] (IronSpigot::MspInstallerT<class ATL::CStringT<unsigned short,class ATL::StrTraitATL<unsigned short,class ATL::ChTraitsCRT<unsigned short ::PerformMsiOperation)
        Patch (C:\DOCUME~1\Cem\LOCALS~1\Temp\Microsoft Visual Studio 2008 SP1\VS90sp1-KB945140-X86-ENU.msp; C:\DOCUME~1\Cem\LOCALS~1\Temp\Microsoft Visual Studio 2008 SP1\VC90sp1-KB947888-x86-enu.msp)
        install failed on product (Microsoft Visual Studio 2008 Professional Edition - ENU).
        Msi Log: Microsoft Visual Studio 2008 SP1_20090112_100005671-Microsoft Visual Studio 2008 Professional Edition - ENU-MSP0.txt
        [1/12/2009, 10:17:29] (IronSpigot::MspInstallerT<class ATL::CStringT<unsigned short,class ATL::StrTraitATL<unsigned short,class ATL::ChTraitsCRT<unsigned short ::PerformMsiOperation)
        MsiApplyMultiplePatches returned 0x643

    Read the article

  • Windows system monitoring and profiling

    - by Aris
    I have several dozen 64-bit Windows 2003 servers in a high performance environment with very bursty system utilization. I am looking for a tool (or tools) to monitor and analyze system performance (eg, CPU utilization, bandwidth, etc). This tool can either query servers from a central location (SNMP) or require installation of a component on each server. It should poll on a 1-second interval. It should be able to generate pretty graphs which show trends over time. As a nice-to-have, it should be able to send out emails or IMs when certain thresholds are exceeded. The tools I have investigated so far, including Solarwinds and PRTG, are not designed to poll this frequently. They seem to be designed for a ~30-second interval. Solarwinds wouldn't go lower than 3-sec, and PRTG chokes at 1-sec. They both default to 1-min. All three tools seem more focused on outage monitoring and reporting than metric collection. Given the bursty nature of our applications, such infrequent polling would result in a very inaccurate picture of performance. I am considering rolling my own solution using perfmon. This would be a lot of work, and seems like reinventing the wheel. Are there any tools out there that meet my needs?
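
    On the roll-your-own perfmon idea: Windows ships a typeperf command-line tool that can emit a counter value every second as CSV, so a small collector could wrap it and ship samples to whatever store and graphing layer you choose. A minimal Python sketch, where the counter path and the print-to-stdout handling are placeholder assumptions, not a finished collector:

        import subprocess

        # Sample CPU utilisation once per second using the built-in typeperf tool.
        # A real collector would push each sample into a time-series store
        # instead of printing it.
        cmd = [
            "typeperf",
            r"\Processor(_Total)\% Processor Time",
            "-si", "1",          # 1-second sample interval
        ]

        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
        for line in proc.stdout:
            line = line.strip()
            if not line or line.startswith('"(PDH'):
                continue                      # skip the CSV header line
            print(line)                       # e.g. "04/01/2012 10:00:01.000","12.345678"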

    Read the article

  • Machine Learning Algorithm for Predicting Order of Events?

    - by user213060
    Simple machine learning question. Probably numerous ways to solve this: There is an infinite stream of 4 possible events: 'event_1', 'event_2', 'event_3', 'event_4'. The events do not come in completely random order. We will assume that there are some complex patterns to the order that most events come in, and the rest of the events are just random. We do not know the patterns ahead of time, though. After each event is received, I want to predict what the next event will be based on the order that events have come in in the past. The predictor will then be told what the next event actually was:

        Predictor = new_predictor()
        prev_event = False
        while True:
            event = get_event()
            if prev_event is not False:
                Predictor.last_event_was(prev_event)
            predicted_event = Predictor.predict_next_event(event)

    The question arises of how long a history the predictor should maintain, since maintaining infinite history will not be possible. I'll leave this up to you to answer. The answer can't be infinite, though, for practicality. So I believe that the predictions will have to be done with some kind of rolling history. Adding a new event and expiring an old event should therefore be rather efficient, and not require rebuilding the entire predictor model, for example. Specific code, instead of research papers, would add for me immense value to your responses. Python or C libraries are nice, but anything will do. Thanks!

    Update: And what if more than one event can happen simultaneously on each round? Does that change the solution?
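
    One concrete (and deliberately simple) way to picture the rolling-history requirement is an order-1 Markov predictor over a sliding window: count which event follows which, predict the most frequent follower, and decrement counts as old observations expire so nothing is ever rebuilt from scratch. A Python sketch along those lines, with the window size and the example stream assumed:

        from collections import Counter, defaultdict, deque

        class RollingPredictor:
            """Order-1 Markov predictor over a sliding window of observations."""

            def __init__(self, max_pairs=1000):
                self.max_pairs = max_pairs
                self.window = deque()               # (prev, next) pairs, oldest first
                self.counts = defaultdict(Counter)  # counts[prev][next] = occurrences
                self.prev = None

            def observe(self, event):
                if self.prev is not None:
                    self.counts[self.prev][event] += 1
                    self.window.append((self.prev, event))
                    if len(self.window) > self.max_pairs:      # expire the oldest pair
                        old_prev, old_next = self.window.popleft()
                        self.counts[old_prev][old_next] -= 1
                self.prev = event

            def predict_next(self):
                followers = self.counts.get(self.prev)
                if not followers:
                    return None                      # no history for this event yet
                return followers.most_common(1)[0][0]

        # Usage sketch with a made-up stream:
        stream = ['event_1', 'event_2', 'event_3', 'event_1', 'event_2', 'event_3', 'event_1']
        p = RollingPredictor(max_pairs=100)
        for e in stream:
            p.observe(e)
            print(e, '-> predicted next:', p.predict_next())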

    Read the article

  • Looking for Recommendations on Version Control System with Intuitive (.NET) API?

    - by John L.
    I'm working on a project which generates (composite) Microsoft Word documents which are composed of one or more child documents. There are tens of thousands of permutations of the composite documents, far too many for users to easily manage. Users will need to view/edit the child documents through the app, which hides all of the nasty implementation details. A requirement of the system is that the child documents must be version controlled. That is what has been tripping me up. I've been torn between using an off-the-shelf solution and rolling my own. At a minimum, the system needs to support get latest, get specific version, add new, rename and possibly delete. I've whiteboarded it enough to realize it won't be a trivial task to create my own. As far as commercial systems go, I have VSS and TFS at my disposal. I've played with the TFS API some, but it isn't as intuitive or well documented as I had hoped. I'm not averse to an open source solution (e.g. SVN), but I have less familiarity with them. Which approach or tool would you recommend? Why? Do you have any links to API documentation you would recommend? Environment: C#, VS2008, SQL Server 2005/2008, low volume (a few hundred operations per day).

    Read the article

  • New Perl user: using a hash of arrays

    - by Zach H
    I'm doing a little datamining project where a Perl script grabs info from a SQL database and parses it. The data consists of several timestamps. I want to find how many of a particular type of timestamp exist on any particular day. Unfortunately, this is my first Perl script, and the nature of Perl when it comes to hashes and arrays is confusing me quite a bit. Code segment:

        my %values = (); # A hash of the total values of each type of data of each day.
                         # The key is the day, and each key stores an array of each of the values I need.
        my @proposal;    # [drafted timestamp(0), submitted timestamp(1), attny approved timestamp(2),
                         #  Organization approved timestamp(3), Other approval timestamp(4), Approved Timestamp(5)]
        while (@proposal = $sqlresults->fetchrow_array()) {
            # TODO: check to make sure proposal is valid
            # Increment the number of timestamps of each type on each particular date
            my $i;
            for ($i = 0; $i <= 5; $i++)
                $values{$proposal[$i]}[$i]++;
            # Update rolling average of daily
            # TODO: To check total load, increment total load on all dates between attorney approve date and accepted date
            for ($i = $proposal[1]; $i <= $proposal[2]; $i++)
                $values{$i}[6]++;
        }

    I keep getting syntax errors inside the for loops incrementing values. Also, considering that I'm using strict and warnings, will Perl auto-create arrays of the right values when I'm accessing them inside the hash, or will I get out-of-bounds errors everywhere? Thanks for any help, Zach
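
    For what the intended data structure looks like, setting the Perl syntax question itself aside: a mapping from day to a fixed-length list of counters, one slot per timestamp type. A Python rendering of that shape, with the dates made up purely for illustration:

        from collections import defaultdict

        # day -> list of seven counters (one per timestamp type, plus a "load" slot).
        # Field positions follow the @proposal layout in the question.
        values = defaultdict(lambda: [0] * 7)

        proposal = ['2010-05-01', '2010-05-02', '2010-05-04',   # drafted, submitted, attny approved
                    '2010-05-05', '2010-05-05', '2010-05-06']   # org approved, other, approved

        for i in range(6):
            values[proposal[i]][i] += 1        # one more timestamp of type i on that day

        print(dict(values))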

    Read the article

  • Should the Entity Framework + self-tracking entities be saving me time?

    - by sipwiz
    I've been using the Entity Framework in combination with the self-tracking entity code generation templates for my latest Silverlight to WCF application. It's the first time I've used the Entity Framework in a real project, and my hope was that I would save myself a lot of time and effort by being able to automatically update the whole data access layer of my project when my database schema changed. Happily, I've found that to be the case: updating my database schema by adding a new table, changing column names, adding new columns, etc. can be propagated to my business object classes by using the "update from database" option on the Entity Framework model.

    Where I'm hurting is the CRUD operations within my WCF service in response to actions on my Silverlight client. I use the same self-tracking Entity Framework business objects in my Silverlight app, but I find I'm continually having to fight against problems such as foreign key associations not being handled correctly when updating an object, or the change tracker getting confused about the state of an object at the Silverlight end and the data access operation within the WCF layer throwing a wobbly. It's got to the point where I have now spent more time dealing with these quirks than I did on my previous project, where I used Linq-to-SQL as the starting point for rolling my own business objects.

    Is it just me being hopeless, or is the self-tracking entities approach something that should be avoided until it's more mature?

    Read the article

  • CSS Parser - Insert mtimes

    - by brad
    What command line tool can I use to automatically insert mtimes into URLs in my CSS files for the purposes of breaking the cache?

        /* before */
        .example { background: url(example.jpg); }

        /* after */
        .example { background: url(example.jpg?1271298451); }

    Also, I would like this tool to spit out the latest mtime as the CSS file's own mtime. (If the CSS file is still cached, then the new URLs will not get to the client.) In searching the web, I have found very few tools that can do this. I am even considering rolling my own, but have found very little in the way of CSS parsers that are actively maintained. A candidate should be:

    - fast (I don't want to wait 30 seconds on deployment)
    - command line accessible (something like "cat foo.css bar.css | cssmtime out.css")

    What I've found so far:

    - yui compressor - initially I thought I would extend the YUI Compressor to do this, but found that it is implemented as a bunch of regexes and not a parser
    - csstidy - last release was in 2007 and development has been suspended, but it does have an option for inserting mtimes (also written in PHP, something I have no experience in)
    - cssutils - Python SAC implementation - seems to be actively maintained, but also seems like overkill for my needs; written in Python, which I have experience with
    - csspool - Ruby SAC implementation - I don't know much Ruby, but would like to learn
    - other SAC implementations - there are several Java implementations, and a C implementation, neither of which I know much about

    What's your experience? Have you used any of these libraries? Was the experience positive? Would you recommend I go with them for my purposes?
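
    For the rolling-my-own route, the rewrite itself is small enough that a regex pass may be all that is needed (with the usual caveat that a regex is not a real CSS parser). A minimal Python sketch follows; the assumption that url() references are relative to the CSS file, and the cssmtime.py name, are made up for illustration:

        import os
        import re
        import sys

        URL_RE = re.compile(r'url\(([^)]+)\)')

        def stamp(css_path):
            """Rewrite url(...) references in one CSS file, appending each local
            asset's mtime as a query string; the result goes to stdout."""
            base = os.path.dirname(css_path) or '.'
            text = open(css_path).read()

            def repl(match):
                asset = match.group(1).strip('\'" ')
                full = os.path.join(base, asset)      # assumes URLs are relative to the CSS file
                if not os.path.isfile(full):
                    return match.group(0)             # leave absolute/external URLs untouched
                return 'url(%s?%d)' % (asset, int(os.path.getmtime(full)))

            sys.stdout.write(URL_RE.sub(repl, text))

        if __name__ == '__main__':
            # usage sketch: python cssmtime.py foo.css bar.css > out.css
            for path in sys.argv[1:]:
                stamp(path)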

    Read the article

  • asp.net Membership : Extending Role membership?

    - by mark smith
    Hi there, I have been taking a look at ASP.NET membership and it seems to provide everything that I need, but I need some kind of custom role functionality. Currently I can add a user to a role, great. But I also need to be able to add permissions to roles, i.e. Role: Editor; Permissions: Can View Editor Menu, Can Write to Editors Table, Can Delete Entries in Editors Table. Currently it doesn't support this. The idea behind this is to create an admin option in my program to create a role and then assign permissions to that role, to say "allow the user to view a certain part of the application", "allow the user to open a menu item". Any ideas how I would implement something like this? I presume a custom role provider, but I was wondering if some kind of framework extension existed already without rolling my own? Or does anybody know a good tutorial on how to tackle this issue? I am quite happy with what the ASP.NET SQL provider has created in terms of tables etc., but I think I need to extend this by adding another table called RolesPermissions and then, I presume :-), adding some kind of enumeration into the table for each valid permission? Thanks in advance

    Read the article

  • Strange execution times in t-sql

    - by TonyP
    Hi all, I have two stored procedures; the first one calls the second. If I execute the second one alone it takes over 5 minutes to complete, but when executed within the first one it takes a little over 1 minute. What is the reason? Here is the first one:

        ALTER procedure [dbo].[schRefreshPriceListItemGroups]
        as
        begin tran
            delete from PriceListItemGroups
            if @@error !=0 goto rolback

            Insert PriceListItemGroups(comno,t$cuno,t$cpls,t$cpgs,t$dsca,t$cpru)
            SELECT distinct c.comno, c.t$cuno, c.t$cpls, I.t$cpgs, g.t$dsca, g.t$cpru
            FROM TTCCOM010nnn C
            JOIN TTDSLS032nnn PL ON PL.comno = c.Comno and PL.t$cpls = c.t$cpls
            JOIN TTIITM001nnn I ON I.t$item = pl.t$item AND I.comno = pl.comNo
            JOIN TTCMCS024nnn G ON g.T$cprg = I.t$cpgs AND g.comno = I.Comno
            WHERE c.t$cpls !=''
            order by comno desc, t$cuno, t$cpgs
            if @@error !=0 goto rolback

            -----------------------------------------------------
            Exec scrRefreshCustomersCatalogs
            -----------------------------------------------------

        commit tran
        return

        rolback:
            Rollback tran

    And the second one:

        Alter proc scrRefreshCustomersCatalogs
        as
        declare @baanIds table(id int identity(1,1), baanId varchar(12))
        declare @baanId varchar(12), @i int, @n int

        Insert @baanIds(BaanId) select baanId from ftElBaanIds()
        SELECT @I=1, @n=max(id) from @baanIds
        select @i, @n

        Begin tran
        if @@error !=0 goto xRollBack
        WHILE @I <= @n
        Begin
            select @baanId=baanId from @baanIds where id=@i
            if @@error !=0 goto xRollBack

            Delete from customersCatalogs where comno+'-'+t$cuno=@baanId
            print Convert(varchar,@i)+' baanId='+@baanId

            Insert customersCatalogs exec customersCatalog @baanId
            if @@error !=0 goto xRollBack

            set @i=@i+1;
        end
        Commit Tran

        Update statistics customersCatalogs with fullscan
        Return

        xRollBack:
            Print '*****Rolling back*************'
            Rollback tran

    Read the article

  • Query error handling in CodeIgniter

    - by Sajith S Narayanan
    Hi all, I am trying to execute a MySQL query using the CI Active Record methods. If the query is malformed, then CI raises internal server error 500 and quits without processing the next steps. I need to roll back all the other queries processed before that error statement, and the rollback is also not happening. Can you help please? The code snippet is as below:

        function dbInsertInformationToDB($data_array)
        {
            $returnID = "";
            $uniqueDataArray = array();  // I prepare an array of values here
            $uniqueTableList = filter_unique_tables($data_array[2]);

            $this->db->trans_begin();

            // Inserting is done here. When there is a query error in $this->db->insert(),
            // it is not rolling back the previous query executed.
            foreach ($uniqueTableList as $table_name)
            {
                $uniqueDataArray = filterDataArray($data_array, $table_name, 2);
                $this->db->insert($table_name, $uniqueDataArray);
                if ($this->db->_error_message())
                {
                    $error = "I am caught!!";
                }
                $returnID = $this->db->affected_rows();
            }

            if ($this->db->trans_status() === FALSE)
            {
                $this->db->trans_rollback();
            }
            else
            {
                $this->db->trans_commit();
            }

            return "ERROR";
        }

    Read the article

  • Application log aggregation, management and notifications...

    - by Matthew Savage
    I'm wondering what everyone is using for logging, log management and log aggregation on their systems. I am working in a company which uses .NET for all its applications, and all systems are Windows based. Currently each application looks after its own logging and notifications of failures (e.g. if app A fails it will send out its own 'call for help' to an admin). While this current practice works, it's a bit hacky and hard to manage. I've been trying to find some options for making this work better and I've come up with the following:

    - log4net & Chainsaw (ah, if it works)
    - Logging via log4net or another framework into a central database & rolling our own management tool
    - Logging to the Windows event log and using MOM or System Center Operations Manager to aggregate and manage each of these servers & their apps
    - A hand-rolled solution to suck all the log files into one point and work some magic across them

    Essentially what we are after is something which can pull log entries all together and allow for some analytics to be run across them, plus use a kind of event-based system to, for example, send out a warning email when there have been 30+ warning-level logs for an application in the last x minutes. So is there anything I've missed, or something someone else can suggest?

    Read the article

  • Debug NAudio MP3 reading difference?

    - by Conrad Albrecht
    My code using NAudio to read one particular MP3 gets different results than several other commercial apps. Specifically: my NAudio-based code finds ~1.4 sec of silence at the beginning of this MP3 before "audible audio" (a drum pickup) starts, whereas other apps (Windows Media Player, RealPlayer, WavePad) show ~2.5 sec of silence before that same drum pickup. The particular MP3 is "Like A Rolling Stone" downloaded from Amazon.com. Tested several other MP3s and none show any similar difference between my code and other apps. Most MP3s don't start with such a long silence, so I suspect that's the source of the difference.

    Debugging problems:

    - I can't actually find a way to even prove that the other apps are right and NAudio/me is wrong, i.e. to compare block-by-block my code's results to a "known good reference implementation"; therefore I can't even precisely define the "error" I need to debug.
    - Since my code reads thousands of samples during those 1.4 sec with no obvious errors, I can't think how to narrow down where/when in the input stream to look for a bug.
    - The heart of the NAudio code is a P/Invoke call to acmStreamConvert(), which is a Windows "black box" call which I can't think how to error-check.

    Can anyone think of any tricks/techniques to debug this?

    Read the article
