Search Results

Search found 125 results on 5 pages for 'correlate'.

  • Search engine to correlate events [closed]

    - by Lee B
    Are there any web search tools that help with statistical research, like correlation? For instance, if I wanted to see the union of bloggers who drink (or talk about) tea/coffee with the bloggers who experience (or talk about) various diseases?

    Read the article

  • In C# is there a function that correlates sequential values on an IEnumerable

    - by Mike Q
    Hi all, I have an IEnumerable<DateTime> and a custom Interval class which just has two DateTimes inside it. I want to convert the IEnumerable<DateTime> to an IEnumerable<Interval>, where n DateTimes would enumerate to n-1 Intervals. So if I had 1st Jan, 1st Feb and 1st Mar as the DateTimes, I want two intervals out: 1st Jan/1st Feb and 1st Feb/1st Mar. Is there an existing C# LINQ function that does this? Something like the Correlate below: IEnumerable<Interval> intervals = dttms.Correlate<DateTime, Interval>((dttm1, dttm2) => new Interval(dttm1, dttm2)); If not I'll just roll my own (a sketch of a hand-rolled version follows below).

    Read the article
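
    LINQ has no built-in pairwise operator like the Correlate call above, so one option is a small extension method. A minimal sketch follows; the Pairwise name is illustrative rather than an existing API, and on .NET 4 and later dttms.Zip(dttms.Skip(1), ...) is another way to get the same pairing:

      using System;
      using System.Collections.Generic;

      public static class EnumerableExtensions
      {
          // Projects each adjacent pair of elements: n items yield n-1 results.
          public static IEnumerable<TResult> Pairwise<TSource, TResult>(
              this IEnumerable<TSource> source,
              Func<TSource, TSource, TResult> selector)
          {
              using (var e = source.GetEnumerator())
              {
                  if (!e.MoveNext()) yield break;
                  var previous = e.Current;
                  while (e.MoveNext())
                  {
                      yield return selector(previous, e.Current);
                      previous = e.Current;
                  }
              }
          }
      }

      // Usage, with Interval being the asker's own class:
      // IEnumerable<Interval> intervals = dttms.Pairwise((d1, d2) => new Interval(d1, d2));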

  • What would be the best way to correlate logs and events on several hosts?

    - by user220746
    I'm trying to build a log correlation system covering multiple hosts. SEC seems interesting but I don't know if it will cover my needs. How could I correlate system events, logs, network events, etc. on multiple hosts at the same time, in real time? Examples: if 5 failed logins happened on host A in the last minute and firewall B has denied lots of access on different ports of A, then we assume there is a potential attack in progress on A. If the Apache service on host A didn't receive any requests for the last N minutes but the Apache service on host B did, then the load balancing could be faulty. (A toy sketch of this kind of rule follows below.)

    Read the article
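
    Just to make the first example concrete, here is a toy sliding-window rule in C# (SEC itself is configured through its own rule files; every type and member name here is illustrative): alert when a host has seen at least 5 failed logins within one minute and the firewall has also reported denied access to that host inside the same window:

      using System;
      using System.Collections.Generic;
      using System.Linq;

      public enum EventKind { FailedLogin, FirewallDeny }

      public class HostEvent
      {
          public DateTime Time;
          public string Host;
          public EventKind Kind;
      }

      public class AttackRule
      {
          private static readonly TimeSpan WindowSize = TimeSpan.FromMinutes(1);
          private readonly List<HostEvent> _window = new List<HostEvent>();

          // Feed events from all hosts in arrival order; returns true when the rule fires.
          public bool Observe(HostEvent e)
          {
              _window.Add(e);
              _window.RemoveAll(old => e.Time - old.Time > WindowSize);

              int failedLogins = _window.Count(x => x.Host == e.Host && x.Kind == EventKind.FailedLogin);
              bool firewallDenied = _window.Any(x => x.Host == e.Host && x.Kind == EventKind.FirewallDeny);
              return failedLogins >= 5 && firewallDenied;
          }
      }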

  • How do I correlate build configurations in dependent vcproj files with different names?

    - by Tim
    I have a solution file that requires a third party library (open source). The containing solution uses the typical configuration names of "Debug" and "Release". The 3rd party one has debug and release configs for both DLL and static libs - their names are not "Debug" and "Release". How do I tell the solution to build the dependency first, and how do I map each of my configurations to the corresponding dependency config? i.e. MyProject:Debug should build either 3rdParty:debug_shared or 3rdParty:debug_static.

    Read the article

  • How can I correlate a wall jack to a user/machine on the domain?

    - by harryfino
    After reading Valve's new employee handbook, I was really interested in setting up a company map like they described on page 6: "The fact that everyone is always moving around within the company makes people hard to find. That’s why we have http://user — check it out. We know where you are based on where your machine is plugged in, so use this site to see a map of where everyone is right now." What I'm trying to figure out is how I can tell which machine or domain user (either will do) is connected to a particular wall jack.

    Read the article

  • How can I make Excel correlate data from two data sets into a single graph?

    - by Tom Ritter
    I have two datasets, one being sparser than the other. They look like this (x, y pairs): Data Set 1: (4, 50), (5, 55), (6, 60), (7, 70), (8, 80). Data Set 2: (4, 10), (6, 20), (8, 30). I actually have several hundred points rather than just these few. I want them in the same graph, with the X axis running 4-8, the Y axis roughly 0-100, and two lines, one for each data set. What I get instead is two lines that are not correlated at all along the X axis, with the X axis labelled from one of the two datasets, so the labels are wrong for the other. The smaller data set is plotted one point per tick on the X axis, when I need it to skip ticks and actually line up with the other data set. Not married to Excel; willing to try this in something else if it's free.

    Read the article

  • Correlate GROUP BY and LEFT JOIN on multiple criteria to show latest record?

    - by Sunbird
    In a simple stock management database, a quantity of new stock is added and shipped until the quantity reaches zero. Each stock movement is assigned a reference, and only the latest reference is used. In the example provided, the latest references are never shown: stock IDs 1 and 4 should have the references Charlie and Foxtrot respectively, but instead show Alpha and Delta. How can a GROUP BY and LEFT JOIN on multiple criteria be correlated to show the latest record? http://sqlfiddle.com/#!2/6bf37/107

      CREATE TABLE stock (
        id tinyint PRIMARY KEY,
        quantity int,
        parent_id tinyint
      );

      CREATE TABLE stock_reference (
        id tinyint PRIMARY KEY,
        stock_id tinyint,
        stock_reference_type_id tinyint,
        reference varchar(50)
      );

      CREATE TABLE stock_reference_type (
        id tinyint PRIMARY KEY,
        name varchar(50)
      );

      INSERT INTO stock VALUES
        (1, 10, 1), (2, -5, 1), (3, -5, 1),
        (4, 20, 4), (5, -10, 4), (6, -5, 4);

      INSERT INTO stock_reference VALUES
        (1, 1, 1, 'Alpha'), (2, 2, 1, 'Beta'), (3, 3, 1, 'Charlie'),
        (4, 4, 1, 'Delta'), (5, 5, 1, 'Echo'), (6, 6, 1, 'Foxtrot');

      INSERT INTO stock_reference_type VALUES (1, 'Customer Reference');

      SELECT stock.id, SUM(stock.quantity) AS quantity, customer.reference
      FROM stock
      LEFT JOIN stock_reference AS customer
        ON stock.id = customer.stock_id AND stock_reference_type_id = 1
      GROUP BY stock.parent_id

    Read the article

  • How do I get a preference to correlate to a variable?

    - by Dan T
    I have my menu button bringing up a Settings option, which brings up numerous ListPreferences such as weight and various glass sizes (it's a BAC calculator app). I'll pick one example: weight. How much you weigh affects your BAC. I have an int for Weight, set at 180. I would like someone to be able to go into the menu Settings, pick the "Weight" ListPreference, and choose between 100, 130, 150, 180, 210, 240, 270, and 300. I already have the numbers showing up (all of the arrays have been created) and I can choose one, but it doesn't do anything because it's not linked up with the int Weight variable. How do I go about linking them?

    Read the article

  • Neural network input data, Cartesian plane x/y coordinates, correlate with handwriting.

    - by Sam
    I'm very curious about making a handwriting recognition application that runs in a web browser. Users draw a letter, AJAX sends the data to the server, a neural network finds the closest match, and returns results. So if you draw an a, the first result should be an a, then o, then e, something like that. I don't know much about neural networks. What kind of data would I need to pass to the NN? Could it be an array of the x/y coordinates where the user has drawn on the pad? Or what type of data would the neural network expect, or produce the best results for handwriting? (A sketch of one common preprocessing approach follows below.)

    Read the article
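
    One common way to feed drawings to a network (a sketch, not taken from the question; the grid size and names are illustrative) is not to pass the raw x/y list directly, whose length varies per drawing, but to normalize the points into a fixed-size occupancy grid so every letter yields an input vector of the same length:

      using System;
      using System.Collections.Generic;
      using System.Linq;

      public static class StrokePreprocessor
      {
          // Maps raw (x, y) points onto a small grid; the flattened grid (0 or 1 per cell)
          // is a fixed-length vector that a simple network can take as input.
          public static double[] ToInputVector(IEnumerable<(double X, double Y)> points, int grid = 8)
          {
              var pts = points.ToList();
              double minX = pts.Min(p => p.X), minY = pts.Min(p => p.Y);
              double scale = Math.Max(pts.Max(p => p.X) - minX, pts.Max(p => p.Y) - minY);
              if (scale <= 0) scale = 1; // degenerate input such as a single dot

              var cells = new double[grid * grid];
              foreach (var p in pts)
              {
                  int cx = Math.Min(grid - 1, (int)((p.X - minX) / scale * grid));
                  int cy = Math.Min(grid - 1, (int)((p.Y - minY) / scale * grid));
                  cells[cy * grid + cx] = 1.0;
              }
              return cells;
          }
      }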

  • WebLogic Server–Use the Execution Context ID in Applications–Lessons From Hansel and Gretel

    - by james.bayer
    I learned a neat trick this week. Don’t let your breadcrumbs go to waste like Hansel and Gretel did! Keep track of the code path, logs and errors for each request as they flow through the system. Earlier this week an OTN forum post in the WLS – General category by Oracle Ace John Stegeman asked how to retrieve the Execution Context ID so that it could be used on an error page that a user could provide to a help desk, or check with application administrators, so they could look up what went wrong.

    What is the Execution Context ID (ECID)? Fusion Middleware injects an ECID as a request enters the system and it stays with the request as it flows from Oracle HTTP Server to Oracle Web Cache to multiple WebLogic Servers to the Oracle Database. It’s a way to uniquely identify a request across tiers. According to the documentation: "The value of the ECID is a unique identifier that can be used to correlate individual events as being part of the same request execution flow. For example, events that are identified as being related to a particular request typically have the same ECID value. The format of the ECID string itself is determined by an internal mechanism that is subject to change; therefore, you should not have or place any dependencies on that format."

    The novel idea John had was to extend this concept beyond the diagnostic information that is captured by Fusion Middleware. Why not also use this identifier in your own logs and errors so you can correlate even more information together? Your logging might already identify the user, so why not also identify the request so you can filter down even more. All you need to do inside of WebLogic Server to get hold of this information is invoke DiagnosticContextHelper:

      weblogic.diagnostics.context.DiagnosticContextHelper.getContextId()

    This class has other helpful methods to see other values tracked by the diagnostics framework too. This way I can see even more detail and get information across tiers.

    In performance profiling, this can be very handy for tracking down where time is being spent in code. I’ve blogged and made videos about this before. JRockit Flight Recorder can use the WLDF Diagnostic Volume in WLS 10.3.3+ to automatically capture and correlate lots of helpful information for each request, without installing any special agents and with the out-of-the-box JRockit and WLS settings! You can see here how information is displayed in JRockit Flight Recorder about a single request as it calls a Servlet, which calls an EJB, which gets a DB connection, which starts a transaction, etc. You can get timings around everything and even see the SQL that is used. http://download.oracle.com/docs/cd/E21764_01/web.1111/e13714/using_flightrecorder.htm#WLDFC480 Recent versions of the WLS console are also able to visualize this data, so it works with other JVMs besides JRockit when you turn on WLDF instrumentation.

    I wrote a little sample application that verified to myself that the ECID did actually cross JVM boundaries. I invoked a Servlet in one JVM, which acted as an EJB client to a Stateless Session Bean running in another JVM. Each call returned the same ECID. You need to turn on WLDF Instrumentation for this to work, otherwise the framework returns null. I’m glad John put me on to this API as I have some interesting ideas on how to correlate some information together.

    Read the article

  • Android EditText within a ListView

    - by metalideath
    I have created a custom ArrayAdapter to bind a custom row that contains some static text and an editable EditText. I am trying to register to be notified when the user changes the text within the EditText and, when notified, to determine which ArrayList row the modified EditText corresponds to. In the past, with other types of views such as a Spinner, I could simply put a reference to the parent view and the row number into the tag for the Spinner view. Then, when I was notified that the value changed, I read the tag to determine how to correlate it back to the master ArrayList. The problem with registering to be notified of an EditText change is that the TextWatcher callbacks do not hand you back a view, so I have no way to correlate the change back to the parent view or ArrayList row. What is the technique to use in this situation?

    Read the article

  • An XEvent a Day (22 of 31) – The Future – fn_dblog() No More? Tracking Transaction Log Activity in Denali

    - by Jonathan Kehayias
    I bet that made you look, didn’t it? Worry not, fn_dblog() still exists in SQL Server Denali, and I plan on using it to validate the information being returned by a new Event in SQL Server Denali CTP1, sqlserver.transaction_log, which brings with it the ability to correlate specific transaction log entries with the operations that actually caused them to occur. There is no greater source of information about the transaction log in SQL Server than Paul Randal’s blog category Transaction Log. ...(read more)

    Read the article

  • Are short identifiers bad?

    - by Daniel C. Sobral
    Are short identifiers bad? How does identifier length correlate with code comprehension? What other factors (besides code comprehension) might be worth considering when naming identifiers? Just to try to keep the quality of the answers up, please note that there is already some research on the subject! Edit: Curious that everyone either doesn't think length is relevant or tends to prefer longer identifiers, when both links I provided indicate that long identifiers are harmful!

    Read the article

  • What collection object is appropriate for fixed ordering of values?

    - by makerofthings7
    Scenario: I am tracking several performance counters and have a CounterDescription[] that correlates to a DataSnapshot[], where CounterDescription[n] describes the data loaded within DataSnapshot[n]. I want to expose an easy-to-use API within C# that will allow for easy and efficient expansion of the arrays. For example: CounterDescription[0] = Humidity; DataSnapshot[0] = .9; CounterDescription[1] = Temp; DataSnapshot[1] = 63; My upload object is defined as below. Note how my intent is to correlate many DataSnapshots with a datetime reference, using the offset of the data to refer to its meaning. This was determined to be the most efficient way to store the data on the back end, and has now reflected itself into the following structure:

      public class myDataObject
      {
          [DataMember]
          public SortedDictionary<DateTime, float[]> Pages { get; set; }

          /// <summary>
          /// An array that identifies what each position in the array is supposed to be
          /// </summary>
          [DataMember]
          public CounterDescription[] Counters { get; set; }
      }

    I will need to expand each of these arrays (float[] and CounterDescription[]), but whatever data already exists must stay at the same relative offset. Which .NET objects support this? I think arrays, LinkedList<T>, and List<T> are able to keep the data fixed in the right locations. What do you think? (A small sketch follows below.)

    Read the article
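
    A minimal sketch of one way to keep the descriptions and values aligned (the wrapper type and member names are illustrative, and string stands in for the asker's CounterDescription class): wrap a pair of List<T> instances and only ever append to both together, so existing offsets never move:

      using System.Collections.Generic;

      public class CounterSet
      {
          private readonly List<string> _descriptions = new List<string>(); // e.g. "Humidity", "Temp"
          private readonly List<float> _snapshot = new List<float>();       // value at the same index

          // Appends a new counter; returns the index that will refer to it from now on.
          public int Add(string description, float value)
          {
              _descriptions.Add(description);
              _snapshot.Add(value);
              return _descriptions.Count - 1;
          }

          public string DescriptionAt(int index) { return _descriptions[index]; }
          public float ValueAt(int index) { return _snapshot[index]; }
          public int Count { get { return _descriptions.Count; } }
      }

      // List<T> preserves insertion order, so index n keeps meaning the same counter
      // as long as elements are only appended, never inserted or removed in the middle.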

  • Long access times and errors in an IIS application

    - by user55862
    I am having an issue with an IIS application (details of the environment at the end of the message). The web site works great most of the time and I cannot reproduce any error in our test system. On the live system, however, with an average of 5-15 requests per second, some requests (about 0.05%) take over 300 seconds to complete. The other requests complete within 5-10 seconds. It seems like all the erroneous requests end up with a Timer_EntityBody error in the error log. I have never seen this as an end user, but I guess that they receive some kind of error message. I am trying to find out what can be causing this erroneous behaviour. Any ideas are welcome. I have read that there can be an MTU issue if ICMP is blocked in the firewall (breaking path MTU discovery). Does that sound reasonable? I have also read that updating to IIS 7 should do the trick. Does that sound reasonable? I think that the problem has another cause, but I have no idea what. I have tried running the performance monitor and monitoring for database locks and active transaction counts. In the perfmon log for the MSSQL server (another machine) I can see, for example:

      Active transactions is sometimes peaking, and sometimes for long periods
      Lock waits per second is sometimes peaking
      Transactions per second is sometimes peaking
      Page IO latch wait is sometimes peaking
      Lock wait time (ms) is sometimes peaking

    But I cannot see that any of these correlate to the errors in the IIS error log. On the IIS server machine I can also see with perfmon that some values peak a few times during a day:

      Request execution time
      Avg disk queue length

    Neither of these correlates to the errors in the IIS error log either. In the log lines below I have anonymized some parts by replacing them with HIDDEN. The following can be seen in the access log:

      2010-10-01 08:35:05 W3SVC1301873091 **HIDDEN** POST /**HIDDEN**/Modules/BalanceModule.aspx - 80 - **HIDDEN** Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.1;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) ASP.NET_SessionId=**HIDDEN** 400 0 64 0 2241 127799

    At the same time the following can be seen in the error log:

      2010-10-01 08:35:05 **HIDDEN** 1999 **HIDDEN** 80 HTTP/1.0 POST /**HIDDEN**/Modules/BalanceModule.aspx - 1301873091 Timer_EntityBody Test+Pool

    I can tell the following about the environment:

      Server: Windows Server 2003 x64 SP2 running on VMware
      HTTP Server: IIS 6.0 with ASP.NET 2.0.50727
      Antivirus: Trend Micro OfficeScan (is it a good idea to have this on a server?)

    Read the article

  • Getting SEC to only monitor latest version of a log file?

    - by user439407
    I have been tasked with running SEC to help correlate PHP logs. The basic setup is pretty straightforward; the problem I'm having is that we want to monitor a log file whose name contains the date (php-2012-10-01.log, for instance). How can I tell SEC to only monitor the latest version of the file (and, of course, switch to the newest log file every day at midnight)? I could create a symlink that points to the latest version of the file and run a cron job at midnight to update the link, but I am looking for a more elegant solution.

    Read the article

  • Correlating /var/log/* timestamps

    - by intuited
    /var/log/messages, /var/log/syslog, and some other log files use a timestamp which contains an absolute time, like Jan 13 14:13:10. /var/log/Xorg.0.log and /var/log/dmesg, as well as the output of $ dmesg, use a format that looks like [50595.991610] malkovich: malkovich malkovich malkovich malkovich I'm guessing/gathering that the numbers represent seconds and microseconds since startup. However, my attempt to correlate these two sets of timestamps (using the output from uptime) gave a discrepancy of about 5000 seconds. This is roughly the amount of time my computer was suspended for. Is there a convenient way to map the numeric timestamps used by dmesg and Xorg into absolute timestamps?

    Read the article

  • How do I Fix SQL Server error: Order by items must appear in the select list if Select distinct is specified

    - by Paula DiTallo
    There's more than one reason why you may receive this error, but the most common reason is that the column list in your ORDER BY clause doesn't match the column list in your SELECT when you happen to be using DISTINCT. This is usually easy to spot and resolve. A more obscure reason may be that you are using a function around one of the selected columns but omitting the same function around the same column name in the ORDER BY clause. Here's an example:

      select distinct upper(columnA)
      from [evaluate].[testTable]
      order by columnA asc

    This statement will cause the "Order by items must appear in the select list if SELECT DISTINCT is specified." error to appear, not because DISTINCT was used, but because the ORDER BY clause did not use the upper() function around columnA. To correct the error, do this:

      select distinct upper(columnA)
      from [evaluate].[testTable]
      order by upper(columnA) asc

    Read the article

  • Writing Large Portions Of Code Then Debugging?

    - by The Floating Brain
    Lately I have been writing a game engine, and I have been writing a lot of "foundation stuff" (standard interfaces, modules, a message system, etc.), but I have noticed a pattern: a lot of the stuff is interdependent and I cannot debug until everything is done, so I go about 3 to 5 hours at a time without debugging. I am wondering if this is an acceptable practice for this part of the project, and if not, whether anyone can give me some advice. -----Update-----: I downloaded some code metrics tools, and my program's cyclomatic complexity is 1.52, which as I understand it is good and should correlate to high cohesion. If I am wrong, please correct me.

    Read the article

  • How do you achieve a numeric versioning scheme with Git?

    - by Erlend
    My organization is considering moving from SVN to Git. One argument against moving is as follows: how do we do versioning? We have an SDK distribution based on the NetBeans Platform. As the SVN revisions are simple numbers, we can use them to extend the version numbers of our plugins and SDK builds. How do we handle this when we move to Git? Possible solutions: using the build number from Hudson (problem: you have to check Hudson to correlate that to an actual Git version); manually bumping the version for nightly and stable builds (problem: learning curve, human error). If someone else has encountered a similar problem and solved it, we'd love to hear how.

    Read the article

  • Calculate random points (pixel) within a circle (image)

    - by DMills
    I have an image that contains a circle at a specific location and of a specific diameter. What I need to do is calculate random points within the circle and then manipulate the pixels they correspond to. I have the following code already:

      private Point CalculatePoint()
      {
          var angle = _random.NextDouble() * ( Math.PI * 2 );
          var x = _originX + ( _radius * Math.Cos( angle ) );
          var y = _originY + ( _radius * Math.Sin( angle ) );
          return new Point( ( int )x, ( int )y );
      }

    That works fine for finding points on the circumference of the circle, but I need points from anywhere within the circle. If this doesn't make sense let me know and I will do my best to clarify. (A sketch of one fix follows below.)

    Read the article
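
    A sketch of one standard fix, reusing the same fields as the code above (_random, _originX, _originY, _radius): draw a random angle as before, but also draw a random radius as _radius * sqrt(u) rather than _radius * u, so the points are spread uniformly over the whole disc instead of clustering near the centre:

      private Point CalculatePointInside()
      {
          var angle = _random.NextDouble() * ( Math.PI * 2 );
          // The square root compensates for area growing with r^2,
          // which gives a uniform distribution over the disc.
          var r = _radius * Math.Sqrt( _random.NextDouble() );
          var x = _originX + ( r * Math.Cos( angle ) );
          var y = _originY + ( r * Math.Sin( angle ) );
          return new Point( ( int )x, ( int )y );
      }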

  • Package to compare LSA, TFIDF, Cosine metrics and Language Models

    - by gouwsmeister
    Hi, I'm looking for a package (any language, really) that I can use on a corpus of 50 documents to perform inter-document similarity testing with various metrics, like tf-idf, Okapi, language models, LSA, etc. As a result I want a document similarity matrix, i.e. doc1 is x% similar to doc2, etc. This is for research purposes, not for production. I specifically want the document similarity matrix as I want to correlate it with human ratings. Thank you in advance! (A small from-scratch sketch of the desired output follows below.)

    Read the article
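
    Independently of whichever package is chosen, here is a from-scratch C# sketch (naive whitespace tokenization; all names are illustrative) of what the desired output looks like: a tf-idf vector per document and a cosine-similarity matrix over them:

      using System;
      using System.Collections.Generic;
      using System.Linq;

      public static class DocSimilarity
      {
          // matrix[i, j] is the cosine similarity between documents i and j.
          public static double[,] CosineMatrix(IList<string> documents)
          {
              // Tokenize very naively; real packages use proper analyzers, stemming, stop words, etc.
              var docs = documents
                  .Select(d => d.ToLowerInvariant()
                                .Split(new[] { ' ', '\t', '\n', '.', ',', ';', ':' },
                                       StringSplitOptions.RemoveEmptyEntries))
                  .ToList();

              // Document frequency per term.
              var df = new Dictionary<string, int>();
              foreach (var doc in docs)
                  foreach (var term in doc.Distinct())
                      df[term] = df.TryGetValue(term, out var n) ? n + 1 : 1;

              // tf-idf vector per document.
              var vectors = docs.Select(doc =>
              {
                  var tf = doc.GroupBy(t => t).ToDictionary(g => g.Key, g => (double)g.Count());
                  return tf.ToDictionary(kv => kv.Key,
                                         kv => kv.Value * Math.Log((double)docs.Count / df[kv.Key]));
              }).ToList();

              var matrix = new double[docs.Count, docs.Count];
              for (int i = 0; i < docs.Count; i++)
                  for (int j = 0; j < docs.Count; j++)
                      matrix[i, j] = Cosine(vectors[i], vectors[j]);
              return matrix;
          }

          private static double Cosine(IDictionary<string, double> a, IDictionary<string, double> b)
          {
              double dot = a.Where(kv => b.ContainsKey(kv.Key)).Sum(kv => kv.Value * b[kv.Key]);
              double na = Math.Sqrt(a.Values.Sum(v => v * v));
              double nb = Math.Sqrt(b.Values.Sum(v => v * v));
              return (na == 0 || nb == 0) ? 0 : dot / (na * nb);
          }
      }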
