Search Results

Search found 109 results on 5 pages for 'temporal'.

  • Working with Temporal Data in SQL Server

    - by Dejan Sarka
    My third Pluralsight course, Working with Temporal Data in SQL Server, is published. I am really proud of the second part of the course, where I discuss optimization of temporal queries. This was a nearly impossible task for decades; the first solutions appeared only recently. I present six solutions altogether (and one more that is not a solution), four of which I invented. http://pluralsight.com/training/Courses/TableOfContents/working-with-temporal-data-sql-server

    Read the article

  • Examples of temporal database designs? [closed]

    - by miku
    I'm researching database designs for historical record keeping, because I would like to implement a prototypical web application that (excessively) tracks changes made by users and lets them undo things, see revisions, etc. The candidates I have looked at so far:

    1. Using mercurial or git as the backend (with files as records), since they already implement the kind of append-only changes I imagine. I tried git and dulwich (a Python git API) and it went OK, but I was concerned about performance.

    2. Bi-temporal database design, which lets you store a row along with the time periods during which that record was valid. This sounds more performant than operating on files on disk (as in 1.), but I have had little luck finding real-world examples (e.g. from open source projects) that use this kind of design and are approachable enough to learn from.

    3. Revisions à la MediaWiki, or an extra table for versions as in Redmine. The problem here is that a DELETE would take the whole history with it.

    4. NoSQL solutions. With a document-oriented approach, it would be simple to store the whole history of an entity within the document itself, which would reduce design and implementation time compared to an RDBMS approach. However, in this case I'm a bit concerned about ACID properties, which would be important in the application.

    I'd like to ask about experiences with real-world, pragmatic designs for temporal data.

    Read the article

  • SQL Bits X – Temporal Snapshot Fact Table Session Slide & Demos

    - by Davide Mauri
    Ten days have already passed since SQL Bits X in London. I really enjoyed it! Events like that are great not only for the content but also for meeting friends whom, due to distance, it is not possible to see every day. Friends from PASS, SQL CAT, Microsoft, MVP and so on, all in one place, drinking beers and whisky and having fun. A perfect mixture for a great learning and sharing experience! I also enjoyed delivering my session on Temporal Snapshot Fact Tables. Given that the subject is very specific, I was not expecting a lot of attendees… but I was totally wrong! It seems that the problem of handling daily snapshots of data is more common than I expected. I have already had feedback from several attendees who applied the explained technique to their existing solutions with success. This is just what a speaker at such a conference wishes to hear! :) If you want to take a look at the slides and the demos, you can find them on SkyDrive: https://skydrive.live.com/redir.aspx?cid=377ea1391487af21&resid=377EA1391487AF21!1151&parid=root The demo is available both for SQL Server 2008 and for SQL Server 2012. With this last version, you can also simplify the ETL process using the new LEAD analytic function. (This is not done in the demo; I've left this option as a little exercise for you :) )

    Read the article

  • Rules engine for spatial and temporal reasoning?

    - by John
    I have an application that receives a number of datums that characterize spatial/temporal processes. It then filters these datums and creates actions, which are sent to processes that perform those actions. Rinse and repeat. At present, I have a collection of custom filters that perform a lot of complicated spatial/temporal calculations. Many times, as I describe my system to people in my company, they ask if I'm using a rules engine. I have yet to find a rules engine that is able to reason well temporally and spatially. (Things like: When are two entities ever close? Is entity A ever in region B? If entity C is near entity D but oriented backwards relative to C, then perform action D.) I looked at Drools, Cyc, and Jess in the past (say 3-4 years ago). It's time to re-examine the state of the art. Any suggestions? Any standards that you know of that support this kind of reasoning? Any de facto standards? Any applications? Thanks!
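
    For illustration, a minimal Python sketch (all names hypothetical) of the kind of spatial/temporal predicate such a filter has to evaluate, and that a candidate rules engine would need to express natively:

        from math import hypot

        # A trajectory is a list of (t, x, y) samples, assumed aligned across entities.
        def ever_close(traj_a, traj_b, eps):
            """'Are two entities ever close?': true if their distance ever drops below eps."""
            return any(hypot(xa - xb, ya - yb) < eps
                       for (_, xa, ya), (_, xb, yb) in zip(traj_a, traj_b))

        def ever_in_region(traj, region):
            """'Is entity A ever in region B?': region is any point-in-region predicate."""
            return any(region(x, y) for (_, x, y) in traj)

        # Example region: an axis-aligned rectangle expressed as a closure.
        region_b = lambda x, y: 0 <= x <= 10 and 0 <= y <= 5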

    Read the article

  • Update Rule in Temporal difference

    - by Betamoo
    The TD(0) Q-Learning update rule:

    Q(t-1) = (1-alpha) * Q(t-1) + alpha * (Reward(t-1) + gamma * Max(Q(t)))

    Then take either the current best action (to optimize) or a random action (to explore), where Max(Q(t)) is the maximum Q obtainable in the next state. But in TD(1), I think the update rule will be:

    Q(t-2) = (1-alpha) * Q(t-2) + alpha * (Reward(t-2) + gamma * Reward(t-1) + gamma * gamma * Max(Q(t)))

    My question: the term gamma * Reward(t-1) means that I will always take my best action at t-1, which I think will prevent exploring. Can someone give me a hint? Thanks
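
    For reference, a minimal sketch (in Python, tabular case, illustrative names) of the TD(0) update quoted above, together with the epsilon-greedy choice between optimizing and exploring:

        import random

        def td0_update(Q, s_prev, a_prev, reward, s_next, alpha=0.1, gamma=0.9):
            """Q(t-1) <- (1-alpha)*Q(t-1) + alpha*(Reward(t-1) + gamma*Max(Q(t)))."""
            max_next_q = max(Q[s_next].values())  # the Max(Q(t)) term
            Q[s_prev][a_prev] = ((1 - alpha) * Q[s_prev][a_prev]
                                 + alpha * (reward + gamma * max_next_q))

        def choose_action(Q, s, epsilon=0.1):
            """Epsilon-greedy: a random action (to explore) with probability epsilon,
            otherwise the current best action (to optimize)."""
            actions = list(Q[s])
            if random.random() < epsilon:
                return random.choice(actions)
            return max(actions, key=lambda a: Q[s][a])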

    Read the article

  • Looking for temporal upsampling / motion interpolation software

    - by timday
    I'm looking for something (preferably FOSS software) which can take an animation with N images as input, and which will output an animation with M frames, where M is in the range 2N to 5N or so. I believe the general technique is called "temporal upsampling" or possibly "inbetweening" (or "'tweening" for short). Note that it does need to make some effort to do motion tracking of things in the scene ("optical flow"); just fading ("dissolve") between keyframes isn't going to cut it. Googling "temporal upsampling" turns up any number of papers on the subject, but I've yet to discover any code/software (a gstreamer plugin would be perfect) I can just use to try the technique out. Any suggestions? Additional information: "Motion interpolation" seems to be the more widely used name in the TV world for what I'm looking for. See this and the video here too. Interestingly, some cinephiles seem to hate what these technologies do to 24FPS film: e.g. a comment that it "makes Gone With The Wind look like a soap opera shot on video".

    Read the article

  • Create a model that switches between two different states using Temporal Logic?

    - by NLed
    I'm trying to design a model that can manage different requests for different water sources. Platform: Mac OS X, using the latest Python with the TuLiP module installed.

    Definitions: two water sources, w1 and w2, and three different requests, r1, r2, and r3.

    Specifications: Water 1 (w1) is preferred, but w2 will be used if w1 is unavailable. Water 2 is only used if w1 is depleted. r1 has the maximum priority. If all entities request simultaneously, r1's supply must not fall below 50%.

    The water sources are not discrete but continuous, which increases the difficulty of creating the model. I can do a crude discretization of the water levels, but I would prefer to find a model for the continuous state first. So how do I start doing that? Some of my thoughts: create a matrix W where w1, w2 ∈ W and a matrix R where r1, r2, r3 ∈ R, or leave all variables singular without putting them in a matrix. I'm not an expert in coding, which is why I need help; I'm not sure of the best way to start tackling this problem. I am only interested in the model, or a code sample of how this can be put together.

    Edit: Now imagine I do a crude discretization of the water sources so that w1 = [0..4] and w2 = [0..4], standing for 0, 25, 50, 75, and 100 percent respectively, and == below means "implies". Usage of the water sources: if w1[0] == w2[4] (meaning: if water source 1 has 0%, then use 100% of water source 2), and likewise if w1[1] == w2[3], if w1[2] == w2[2], if w1[3] == w2[1], if w1[4] == w2[0]. The requests are r1 = r2 = r3 = [0,1], where 0 means a request is OFF and 1 means it is ON. Now what model can be designed that will give each request 100% water depending on the values of w1 and w2? (The values of w1 and w2 are uncontrollable, so no specific value can be defined, but 0..4 is used for simplicity.)
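
    To make the crude discretization concrete, a small Python sketch that just enumerates the finite state space implied by the rules in the edit (a TuLiP or other model-checking formulation would be built over exactly these states; nothing here goes beyond the question's own definitions):

        from itertools import product

        LEVELS = range(5)  # 0..4 stand for 0%, 25%, 50%, 75%, 100%

        def w2_level(w1):
            """The rule 'if w1[k] then w2[4-k]': source 2 covers what source 1 lacks."""
            return 4 - w1

        # A state is (w1, w2, r1, r2, r3); requests are 0 = OFF, 1 = ON.
        states = [(w1, w2_level(w1), r1, r2, r3)
                  for w1, r1, r2, r3 in product(LEVELS, (0, 1), (0, 1), (0, 1))]

        assert len(states) == 40  # 5 water levels x 2^3 request combinations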

    Read the article

  • Are there any examples of a temporal field/object updater?

    - by Bryan Agee
    The system in question has numerous examples of temporal objects and fields: ones whose value depends on the point in time at which you ask. An example of this would be someone's rate of pay; there are different answers depending on when you ask and what the constraints might be (e.g., can there ever be more than one of a certain temporal object concurrently, etc.). Ideally, there would be an object that handles those constraints when a new state/stateful object is introduced; when a new value is set, it would prevent creating negative ranges and overlaps. Martin Fowler has written some great material on this (such as his description of Temporal Objects), but what I've found of it tends to be entirely theoretical, with no concrete implementations. PHP is the target language, but examples in any language would be most helpful.
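
    As a rough illustration of the constraint-enforcing object being asked for, here is a sketch (in Python rather than PHP, names invented) of a temporal property in the spirit of Fowler's pattern, rejecting negative ranges and overlaps:

        import datetime

        class TemporalProperty:
            """Holds (valid_from, valid_to, value) triples; each range is half-open."""

            def __init__(self):
                self.history = []  # list of (valid_from, valid_to, value)

            def set(self, valid_from, valid_to, value):
                if valid_to <= valid_from:
                    raise ValueError("negative or empty validity range")
                for f, t, _ in self.history:
                    if valid_from < t and f < valid_to:  # interval-overlap test
                        raise ValueError("overlaps an existing state")
                self.history.append((valid_from, valid_to, value))
                self.history.sort()

            def at(self, when):
                """Answer 'what was the value at this point in time?'"""
                for f, t, v in self.history:
                    if f <= when < t:
                        return v
                return None

        pay = TemporalProperty()
        pay.set(datetime.date(2009, 1, 1), datetime.date(2010, 1, 1), 52000)
        assert pay.at(datetime.date(2009, 6, 1)) == 52000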

    Read the article

  • My first SQL Saturday

    - by Paul Nielsen
    I’m leaving soon for an exciting journey with a thrilling destination: my first SQL Saturday. So I decided to do it right, and I’m taking the Amtrak Acela Express from Boston to New York. I love New York! If you’re headed to SQL Saturday #39, and you love database design, I invite you to come to my session on Temporal Database Designs, i.e. how to design a table so it can be queried as of any previous point in time. The proof-of-concept code is posted at http://temporalsql.codeplex.com/ . See you there....(read more)

    Read the article

  • Searching temporal data

    - by user321299
    I'm developing an application (in C#) where objects are active during a period of time; they have from and to properties of type DateTime. Now I want to speed up my search routine for queries like: are there other active objects in this time period / at this time? Is there any existing temporal index I can use, or can I use a QuadTree or other tree structure to search in an efficient way?
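
    For a baseline before reaching for a dedicated index: the overlap predicate itself is one line, and a specialized structure such as an interval tree only speeds up the same test. A Python sketch of the logic (the question's code is C#, but it carries over directly):

        def overlaps(a_from, a_to, b_from, b_to):
            """Two periods overlap iff each one starts before the other ends."""
            return a_from < b_to and b_from < a_to

        def active_in_period(objects, q_from, q_to):
            """Naive O(n) scan; an interval tree answers the same query in O(log n + k)."""
            return [o for o in objects if overlaps(o["from"], o["to"], q_from, q_to)]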

    Read the article

  • Tool to write linear temporal logic from UML 2.0 sequence diagram

    - by user326180
    I am working on checking the model consistency of software. To do this, I need to write linear temporal logic for UML 2.0 sequence diagrams. If anybody has a tool for this, please respond as soon as possible; I will be very obliged to you. I have found that the CHARMY tool (CHecking ARchitectural Model consistencY) has a plugin for this. Does anybody have the source code for CHARMY? It is not available on their website. Thanks in advance.

    Read the article

  • Compressing three individual jpeg pics containing temporal redundancy?

    - by michael
    I am interfacing an embedded device with a camera module that returns a single JPEG-compressed frame each time I trigger it. I would like to take three successive shots (approx. 1 frame per 1/4 second) and further compress the images into a single file. The assumption here is that there is a lot of temporal redundancy, and therefore lots of room for more compression across the three frames (compared to sending three separate JPEG images). I will be implementing the solution on an embedded device in C, without any libraries and no OS. The camera will be taking pics in an area with very little movement (no visitors or screens in the background, maybe a tree with swaying branches), so I think my assumption about redundancy is pretty solid. When the file is finally viewed on a PC/Mac, I don't mind having to write something to extract the three frames (so it can be a nonstandard kludge). So I guess the actual question is: what is the best way to compress these three images together, given that they are already in JPEG format? (It is a possibility to convert back to a raw image, but only if I have to...)
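
    One way to sanity-check the redundancy assumption on a desktop, before committing to the embedded C implementation: decode the three frames to raw buffers, keep the first as a key frame, and entropy-code the byte-wise deltas of the other two. A hedged Python sketch of just that idea (JPEG decoding and the container format are left out; frames is assumed to hold three equally sized raw buffers):

        import zlib

        def pack_three(frames):
            """frames: three raw (decoded) byte buffers of equal length.
            Stores frame 0 whole, then per-byte deltas of frames 1 and 2 against it;
            temporal redundancy makes the deltas mostly zero, which deflate exploits."""
            key = frames[0]
            deltas = [bytes((b - a) & 0xFF for a, b in zip(key, f)) for f in frames[1:]]
            return zlib.compress(key + b"".join(deltas), 9)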

    Read the article

  • Android - Temporal & Ancestral Navigation with ViewPager & Fragments

    - by rascuache
    I'm trying to work out the best way to implement 'ancestral' and 'temporal' navigation (utilising the up and back buttons) for a music player I'm working on. Currently, the user is presented with a ViewPager and can page between three main fragments (ArtistMain, AlbumMain and SongMain). Upon choosing an item inside that view, a fragment transaction occurs, and the ViewPager goes out of view, replaced by a new fragment (AlbumSub, SongSub or Player, depending on where the user came from). The user can then navigate deeper, until a song is chosen, and then they are taken to the 'player' screen. I guess the question is: how do I implement all of this conditional navigation? I'm fairly new to Android and programming in general, and I just can't seem to come up with an efficient way to achieve this. At the moment, as each fragment is brought into view, the app checks to see where the user just came from, and then determines where the user should be taken if back or home is called. This means I have booleans like "fromArtistMain" and "fromAlbumSub", and I'm checking for things like "fromSongSub && fromPlayer"... it's all turning into a bit of a mess. I've drawn a diagram (in Paint, sorry!!) to depict the navigation I'm trying to achieve. The yellow represents the 'up' button press, the red is the 'back' button press, and blue is just normal navigation. The green arrows are meant to represent the view paging. Any advice is welcome; it might take something really simple that I've just overlooked. Thanks for your time.

    Read the article

  • SQL Rally Pre-Con: Data Warehouse Modeling – Making the Right Choices

    - by Davide Mauri
    As you may have already learned from my old post, or from Adam’s or Kalen’s posts, there will be two SQL Rally events in Northern Europe. At the Stockholm SQL Rally, with my friend Thomas Kejser, I’ll be delivering a pre-con on Data Warehouse Modeling: Data warehouses play a central role in any BI solution. It's the back end upon which everything in years to come will be created. For this reason, it must be rock solid and yet flexible at the same time. To develop such a data warehouse, you must have a clear idea of its architecture, a thorough understanding of the concepts of Measures and Dimensions, and a proven engineered way to build it so that quality and stability can go hand-in-hand with cost reduction and scalability. In this workshop, Thomas Kejser and Davide Mauri will share all the information they have learned since they started working with data warehouses, giving you the guidance and tips you need to start your BI project in the best way possible: avoiding errors, making implementation effective and efficient, paving the way for a winning Agile approach, and helping you define how your team should work so that your BI solution will stand the test of time. You'll learn: data warehouse architecture and justification; Agile methodology; dimensional modeling, including Kimball vs. Inmon, SCD1/SCD2/SCD3, Junk and Degenerate Dimensions, and Huge Dimensions; best practices, naming conventions, and lessons learned; loading the data warehouse, including loading Dimensions and loading Facts (Full Load, Incremental Load, Partitioned Load); data warehouses and Big Data (Hadoop); unit testing; and tracking historical changes and managing large sizes. With all the Self-Service BI hype, the Data Warehouse is becoming more and more central every day, since if everyone is able to analyze data using self-service tools, it is better for them to rely on correct, uniform and coherent data. Already 50 people have registered for the workshop, and seats are limited, so don’t miss this unique opportunity to attend this workshop, which is really a unique combination of years and years of experience! http://www.sqlpass.org/sqlrally/2013/nordic/Agenda/PreconferenceSeminars.aspx See you there!

    Read the article

  • Mac OS date command - getting higher temporal resolution

    - by Mark
    Hey all, I am trying to use the date command in Terminal on multiple Mac OS X machines, synced via NTP, to synchronize some code in a program. Essentially I am running a program... MyProgram with arguments[date]. I can get date to give me the seconds since the Unix epoch with the %M specifier. When I try to use %N to get nanosecond resolution, date just returns N. Is there any way to get date to give me finer-than-second resolution? I wouldn't even mind passing two arguments, such as (date +%M):arg2, and then converting units in the program. Many thanks in advance! The %N specifier is listed here: http://en.wikipedia.org/wiki/Date_(Unix)
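
    Worth knowing: %N is a GNU coreutils extension, and the BSD date shipped with Mac OS X does not implement it, which is why it echoes a literal N. A common workaround is to ask another runtime for the sub-second clock, e.g. from Terminal via python -c (sketch):

        # Prints Unix time with sub-second resolution, e.g. 1273080896.123456
        import time
        print("%.6f" % time.time())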

    Read the article

  • PostgreSQL, triggers, and concurrency to enforce a temporal key

    - by Hobbes
    I want to define a trigger in PostgreSQL to check that an inserted row, on a generic table, has the property: "no other row exists with the same key in the same valid time" (the keys are sequenced keys). In fact, I have already implemented it. But since the trigger has to scan the entire table, I'm now wondering: is there a need for a table-level lock? Or is this managed somehow by PostgreSQL itself?
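
    An aside for readers on newer PostgreSQL: since 9.0 this rule can be enforced declaratively with an exclusion constraint (with built-in range types from 9.2, plus the btree_gist extension for the equality part), and the engine then handles the concurrency itself, with no table-scanning trigger or explicit lock. A sketch with hypothetical table and column names, driven from Python for consistency with the other examples on this page:

        import psycopg2  # assumes PostgreSQL 9.2+ with btree_gist available

        DDL = """
        CREATE EXTENSION IF NOT EXISTS btree_gist;
        CREATE TABLE history (
            key    integer NOT NULL,
            valid  tsrange NOT NULL,
            -- no two rows may share a key in overlapping valid time:
            EXCLUDE USING gist (key WITH =, valid WITH &&)
        );
        """

        with psycopg2.connect("dbname=test") as conn:
            conn.cursor().execute(DDL)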

    Read the article

  • Calculate year for end date: PostgreSQL

    - by Dave Jarvis
    Background

    Users can pick dates as shown in the following screen shot. Any starting month/day and ending month/day combinations are valid, such as: Mar 22 to Jun 22, or Dec 1 to Feb 28. The second combination is difficult (I call it the "tricky date scenario") because the ending month/day falls in the year after the starting month/day. That is to say, for the year 1900 (also shown selected in the screen shot above), the full dates would be: Dec 22, 1900 to Feb 28, 1901; Dec 22, 1901 to Feb 28, 1902; ... Dec 22, 2007 to Feb 28, 2008; Dec 22, 2008 to Feb 28, 2009.

    Problem

    Writing a SQL statement that selects values from a table with dates that fall between the start month/day and end month/day, regardless of how the start and end days are selected. In other words, this is a year-wrapping problem.

    Inputs

    The query receives as parameters: Year1, Year2: the full range of years, independent of month/day combination. Month1, Day1: the starting day within the year to gather data. Month2, Day2: the ending day within the year (or the next year) to gather data.

    Previous Attempt

    Consider the following MySQL code (that worked): end_year = start_year + greatest( -1 * sign( datediff( date( concat_ws('-', year, end_month, end_day ) ), date( concat_ws('-', year, start_month, start_day ) ) ) ), 0 )

    How it works, with respect to the tricky date scenario: create two dates in the current year (the first date is Dec 22, 1900 and the second date is Feb 28, 1900) and count the difference, in days, between them. If the result is negative, the year for the second date must be incremented by 1: add 1 to the current year and create a new end date, Feb 28, 1901. Then check whether the date range for the data falls between the start and calculated end date. If the result is positive, the dates were provided in chronological order and nothing special needs to be done. This worked in MySQL because the difference in dates could be positive or negative. In PostgreSQL, the equivalent functionality always returns a positive number, regardless of the dates' relative chronological order.

    Question

    How should the following (broken) code be rewritten for PostgreSQL to take into consideration the relative chronological order of the starting and ending month/day pairs (with respect to an annual temporal displacement)? SELECT m.amount FROM measurement m WHERE (extract(MONTH FROM m.taken) >= month1 AND extract(DAY FROM m.taken) >= day1) AND (extract(MONTH FROM m.taken) <= month2 AND extract(DAY FROM m.taken) <= day2) Any thoughts, comments, or questions? (The dates are pre-parsed into MM/DD format in PHP. My preference is for a pure PostgreSQL solution, but I am open to suggestions on what might make the problem simpler using PHP.)

    Versions: PostgreSQL 8.4.4 and PHP 5.2.10
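
    The heart of the fix is detecting when the (month2, day2) pair is chronologically before (month1, day1) and letting the predicate wrap across the year boundary. Tuple comparison shows the shape of it; a Python sketch (the same OR-shaped condition can be written directly in the PostgreSQL WHERE clause):

        def in_range(month, day, month1, day1, month2, day2):
            """True when (month, day) falls in the window, allowing Dec-to-Feb wraps."""
            probe, start, end = (month, day), (month1, day1), (month2, day2)
            if start <= end:                       # normal window, e.g. Mar 22 - Jun 22
                return start <= probe <= end
            return probe >= start or probe <= end  # wrapped window, e.g. Dec 1 - Feb 28

        assert in_range(1, 15, 12, 1, 2, 28)       # Jan 15 lies inside Dec 1 - Feb 28
        assert not in_range(6, 1, 12, 1, 2, 28)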

    Read the article

  • Historical / auditable database

    - by Mark
    Hi all, this question is related to the schema that can be found in one of my other questions here. Basically, in my database I store users, locations, and sensors, amongst other things. All of these things are editable in the system by users, and deletable. However, when an item is edited or deleted, I need to store the old data; I need to be able to see what the data was before the change. There are also non-editable items in the database, such as "readings"; they are really more of a log. Readings are logged against sensors, because each reading is the reading for a particular sensor. If I generate a report of readings, I need to be able to see what the attributes of a location or sensor were at the time of the reading. Basically, I should be able to reconstruct the data for any point in time.

    Now, I've done this before and got it working well by adding the following columns to each editable table: valid_from, valid_to, edited_by. If valid_to = 9999-12-31 23:59:59, then that's the current record. If valid_to equals valid_from, then the record is deleted. However, I was never happy with the triggers I needed to use to enforce foreign key consistency. I can possibly avoid triggers by using an extension to PostgreSQL that provides a column type called "period", which allows you to store a period of time between two dates and then add CHECK constraints to prevent overlapping periods. That might be an answer.

    I am wondering, though, if there is another way. I've seen people mention using special historical tables, but I don't really like the thought of maintaining 2 tables for almost every 1 table (though it still might be a possibility). Maybe I could cut down my initial implementation to not bother checking the consistency of records that aren't "current", i.e. only check constraints on records where valid_to is 9999-12-31 23:59:59. After all, the people who use historical tables do not seem to have constraint checks on those tables (for the same reason, you'd need triggers). Does anyone have any thoughts about this?

    PS: the title also mentions an auditable database. In the previous system I mentioned, there is always the edited_by field. This allows all changes to be tracked, so we can always see who changed a record. Not sure how much difference that might make. Thanks.
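
    For reference, the update discipline behind the valid_from/valid_to scheme described above is small: close the current row, then insert the new state as the current one. A Python sketch (sentinel as in the question; table and column names illustrative, psycopg2-style placeholders assumed):

        import datetime

        SENTINEL = datetime.datetime(9999, 12, 31, 23, 59, 59)  # marks the current row

        def versioned_update(cur, table, row_id, new_row, edited_by):
            now = datetime.datetime.now()
            # 1. End the validity of the current version.
            cur.execute(f"UPDATE {table} SET valid_to = %s "
                        f"WHERE id = %s AND valid_to = %s",
                        (now, row_id, SENTINEL))
            # 2. Insert the full new state as the new current version.
            row = dict(new_row, id=row_id, valid_from=now,
                       valid_to=SENTINEL, edited_by=edited_by)
            cols = ", ".join(row)
            ph = ", ".join(["%s"] * len(row))
            cur.execute(f"INSERT INTO {table} ({cols}) VALUES ({ph})",
                        list(row.values()))
            # A delete would instead set valid_to = valid_from on the current row.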

    Read the article

  • Delphi: EInvalidOp in neural network class (TD-lambda)

    - by user89818
    I have the following draft for a neural network class. This neural network should learn with TD-lambda. It is started by calling the getRating() function. But unfortunately, there is an EInvalidOp (invalid floating point operation) error after about 1000 iterations in the following lines:

        neuronsHidden[j] := neuronsHidden[j]+neuronsInput[t][i]*weightsInput[i][j]; // input -> hidden

        weightsHidden[j][k] := weightsHidden[j][k]+LEARNING_RATE_HIDDEN*tdError[k]*eligibilityTraceOutput[j][k]; // adjust hidden->output weights according to TD-lambda

    Why does this error occur? I can't find the mistake in my code :( Can you help me? Thank you very much in advance!

        unit uNeuronalesNetz;

        interface

        uses
          Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms,
          Dialogs, ExtCtrls, StdCtrls, Grids, Menus, Math;

        const
          NEURONS_INPUT = 43;   // number of neurons in the input layer
          NEURONS_HIDDEN = 60;  // number of neurons in the hidden layer
          NEURONS_OUTPUT = 1;   // number of neurons in the output layer
          NEURONS_TOTAL = NEURONS_INPUT+NEURONS_HIDDEN+NEURONS_OUTPUT; // total number of neurons in the network
          MAX_TIMESTEPS = 42;   // maximum number of timesteps possible (after 42 moves: board is full)
          LEARNING_RATE_INPUT = 0.25;  // in ideal case: decrease gradually in course of training
          LEARNING_RATE_HIDDEN = 0.15; // in ideal case: decrease gradually in course of training
          GAMMA = 0.9;
          LAMBDA = 0.7;         // decay parameter for eligibility traces

        type
          TFeatureVector = Array[1..43] of SmallInt; // definition of the array type TFeatureVector

          TArtificialNeuralNetwork = class // definition of the class TArtificialNeuralNetwork
          private
            // GENERAL SETTINGS START
            learningMode: Boolean; // does the network learn and change its weights?
            // GENERAL SETTINGS END
            // NETWORK CONFIGURATION START
            neuronsInput: Array[1..MAX_TIMESTEPS] of Array[1..NEURONS_INPUT] of Extended; // array of all input neurons (their values) for every timestep
            neuronsHidden: Array[1..NEURONS_HIDDEN] of Extended; // array of all hidden neurons (their values)
            neuronsOutput: Array[1..NEURONS_OUTPUT] of Extended; // array of output neurons (their values)
            weightsInput: Array[1..NEURONS_INPUT] of Array[1..NEURONS_HIDDEN] of Extended; // array of weights: input->hidden
            weightsHidden: Array[1..NEURONS_HIDDEN] of Array[1..NEURONS_OUTPUT] of Extended; // array of weights: hidden->output
            // NETWORK CONFIGURATION END
            // LEARNING SETTINGS START
            outputBefore: Array[1..NEURONS_OUTPUT] of Extended; // the network's output value in the last timestep (the one before)
            eligibilityTraceHidden: Array[1..NEURONS_INPUT] of Array[1..NEURONS_HIDDEN] of Array[1..NEURONS_OUTPUT] of Extended; // array of eligibility traces: hidden layer
            eligibilityTraceOutput: Array[1..NEURONS_TOTAL] of Array[1..NEURONS_TOTAL] of Extended; // array of eligibility traces: output layer
            reward: Array[1..MAX_TIMESTEPS] of Array[1..NEURONS_OUTPUT] of Extended; // the reward value for all output neurons in every timestep
            tdError: Array[1..NEURONS_OUTPUT] of Extended; // the network's error value for every single output neuron
            t: Byte; // current timestep
            cyclesTrained: Integer; // number of cycles trained so far (learning rates could be decreased accordingly)
            last50errors: Array[1..50] of Extended;
            // LEARNING SETTINGS END
          public
            constructor Create; // create the network object and do the initialization
            procedure UpdateEligibilityTraces; // update the eligibility traces for the hidden and output layer
            procedure tdLearning; // learning algorithm: adjust the network's weights
            procedure ForwardPropagation; // propagate the input values through the network to the output layer
            function getRating(state: TFeatureVector; explorative: Boolean): Extended; // get the rating for a given state (feature vector)
            function HyperbolicTangent(x: Extended): Extended; // calculate the hyperbolic tangent [-1;1]
            procedure StartNewCycle; // start a new cycle with everything set to default except for the weights
            procedure setLearningMode(activated: Boolean=TRUE); // switch the learning mode on/off
            procedure setInputs(state: TFeatureVector); // transfer the given feature vector to the input layer (set input neurons' values)
            procedure setReward(currentReward: SmallInt); // set the reward for the current timestep (with learning then or without)
            procedure nextTimeStep; // increase timestep t
            function getCyclesTrained(): Integer; // get the number of cycles trained so far
            procedure Visualize(imgHidden: Pointer); // visualize the neural network's hidden layer
          end;

        implementation

        procedure TArtificialNeuralNetwork.UpdateEligibilityTraces;
        var
          i, j, k: Integer;
        begin
          // how worthy is a weight to be adjusted?
          for j := 1 to NEURONS_HIDDEN do begin
            for k := 1 to NEURONS_OUTPUT do begin
              eligibilityTraceOutput[j][k] := LAMBDA*eligibilityTraceOutput[j][k]+(neuronsOutput[k]*(1-neuronsOutput[k]))*neuronsHidden[j];
              for i := 1 to NEURONS_INPUT do begin
                eligibilityTraceHidden[i][j][k] := LAMBDA*eligibilityTraceHidden[i][j][k]+(neuronsOutput[k]*(1-neuronsOutput[k]))*weightsHidden[j][k]*neuronsHidden[j]*(1-neuronsHidden[j])*neuronsInput[t][i];
              end;
            end;
          end;
        end;

        procedure TArtificialNeuralNetwork.setReward;
        var
          i: Integer;
        begin
          for i := 1 to NEURONS_OUTPUT do begin
            // +1 = player A wins
            //  0 = draw
            // -1 = player B wins
            reward[t][i] := currentReward;
          end;
        end;

        procedure TArtificialNeuralNetwork.tdLearning;
        var
          i, j, k: Integer;
        begin
          if learningMode then begin
            for k := 1 to NEURONS_OUTPUT do begin
              if reward[t][k] = 0 then begin
                tdError[k] := GAMMA*neuronsOutput[k]-outputBefore[k]; // network's error value when reward is 0
              end
              else begin
                tdError[k] := reward[t][k]-outputBefore[k]; // network's error value in the final state (reward received)
              end;
              for j := 1 to NEURONS_HIDDEN do begin
                weightsHidden[j][k] := weightsHidden[j][k]+LEARNING_RATE_HIDDEN*tdError[k]*eligibilityTraceOutput[j][k]; // adjust hidden->output weights according to TD-lambda
                for i := 1 to NEURONS_INPUT do begin
                  weightsInput[i][j] := weightsInput[i][j]+LEARNING_RATE_INPUT*tdError[k]*eligibilityTraceHidden[i][j][k]; // adjust input->hidden weights according to TD-lambda
                end;
              end;
            end;
          end;
        end;

        procedure TArtificialNeuralNetwork.ForwardPropagation;
        var
          i, j, k: Integer;
        begin
          for j := 1 to NEURONS_HIDDEN do begin
            neuronsHidden[j] := 0;
            for i := 1 to NEURONS_INPUT do begin
              neuronsHidden[j] := neuronsHidden[j]+neuronsInput[t][i]*weightsInput[i][j]; // input -> hidden
            end;
            neuronsHidden[j] := HyperbolicTangent(neuronsHidden[j]); // activation of hidden neuron j
          end;
          for k := 1 to NEURONS_OUTPUT do begin
            neuronsOutput[k] := 0;
            for j := 1 to NEURONS_HIDDEN do begin
              neuronsOutput[k] := neuronsOutput[k]+neuronsHidden[j]*weightsHidden[j][k]; // hidden -> output
            end;
            neuronsOutput[k] := HyperbolicTangent(neuronsOutput[k]); // activation of output neuron k
          end;
        end;

        procedure TArtificialNeuralNetwork.setLearningMode;
        begin
          learningMode := activated;
        end;

        constructor TArtificialNeuralNetwork.Create;
        var
          i, j, k: Integer;
        begin
          inherited Create;
          Randomize; // initialize random numbers generator
          learningMode := TRUE;
          cyclesTrained := -2; // only set to -2 because it will be increased twice in the beginning
          StartNewCycle;
          for j := 1 to NEURONS_HIDDEN do begin
            for k := 1 to NEURONS_OUTPUT do begin
              weightsHidden[j][k] := abs(Random-0.5); // initialize weights: 0 <= random < 0.5
            end;
            for i := 1 to NEURONS_INPUT do begin
              weightsInput[i][j] := abs(Random-0.5); // initialize weights: 0 <= random < 0.5
            end;
          end;
          for i := 1 to 50 do begin
            last50errors[i] := 0;
          end;
        end;

        procedure TArtificialNeuralNetwork.nextTimeStep;
        begin
          t := t+1;
        end;

        procedure TArtificialNeuralNetwork.StartNewCycle;
        var
          i, j, k, m: Integer;
        begin
          t := 1; // start in timestep 1
          cyclesTrained := cyclesTrained+1; // increase the number of cycles trained so far
          for j := 1 to NEURONS_HIDDEN do begin
            neuronsHidden[j] := 0;
            for k := 1 to NEURONS_OUTPUT do begin
              eligibilityTraceOutput[j][k] := 0;
              outputBefore[k] := 0;
              neuronsOutput[k] := 0;
              for m := 1 to MAX_TIMESTEPS do begin
                reward[m][k] := 0;
              end;
            end;
            for i := 1 to NEURONS_INPUT do begin
              for k := 1 to NEURONS_OUTPUT do begin
                eligibilityTraceHidden[i][j][k] := 0;
              end;
            end;
          end;
        end;

        function TArtificialNeuralNetwork.getCyclesTrained;
        begin
          result := cyclesTrained;
        end;

        procedure TArtificialNeuralNetwork.setInputs;
        var
          k: Integer;
        begin
          for k := 1 to NEURONS_INPUT do begin
            neuronsInput[t][k] := state[k];
          end;
        end;

        function TArtificialNeuralNetwork.getRating;
        begin
          setInputs(state);
          ForwardPropagation;
          result := neuronsOutput[1];
          if not explorative then begin
            tdLearning; // adjust the weights according to TD-lambda
            ForwardPropagation; // calculate the network's output again
            outputBefore[1] := neuronsOutput[1]; // set outputBefore which will then be used in the next timestep
            UpdateEligibilityTraces; // update the eligibility traces for the next timestep
            nextTimeStep; // go to the next timestep
          end;
        end;

        function TArtificialNeuralNetwork.HyperbolicTangent;
        begin
          if x > 5500 then // prevent overflow
            result := 1
          else
            result := (Exp(2*x)-1)/(Exp(2*x)+1);
        end;

        end.

    Read the article

  • Data Modeling Resources

    - by Dejan Sarka
    You can find many different data modeling resources. It is impossible to list all of them. I selected only the most valuable ones for me and, of course, the ones I contributed to.

    Books

    - Chris J. Date: An Introduction to Database Systems – IMO a “must” to understand the relational model correctly.
    - Terry Halpin, Tony Morgan: Information Modeling and Relational Databases – meet the object-role modeling leaders.
    - Chris J. Date, Nikos Lorentzos and Hugh Darwen: Time and Relational Theory, Second Edition: Temporal Databases in the Relational Model and SQL – all the theory needed to manage temporal data.
    - Louis Davidson, Jessica M. Moss: Pro SQL Server 2012 Relational Database Design and Implementation – the best SQL Server focused data modeling book I know, by two of my friends.
    - Dejan Sarka, et al.: MCITP Self-Paced Training Kit (Exam 70-441): Designing Database Solutions by Using Microsoft® SQL Server™ 2005 – SQL Server 2005 data modeling training kit. Most of the text is still valid for SQL Server 2008, 2008 R2, 2012 and 2014.
    - Itzik Ben-Gan, Lubor Kollar, Dejan Sarka, Steve Kass: Inside Microsoft SQL Server 2008 T-SQL Querying – Steve wrote a chapter with mathematical background, and I added a chapter with a theoretical introduction to the relational model.
    - Itzik Ben-Gan, Dejan Sarka, Roger Wolter, Greg Low, Ed Katibah, Isaac Kunen: Inside Microsoft SQL Server 2008 T-SQL Programming – I added three chapters with a theoretical introduction and practical solutions for user-defined data types, dynamic schema and temporal data.
    - Dejan Sarka, Matija Lah, Grega Jerkic: Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft SQL Server 2012 – my first two chapters are about data warehouse design and implementation.

    Courses

    - Data Modeling Essentials – I wrote a 3-day course for SolidQ. If you are interested in this course, which I could also deliver as a shorter seminar, you can contact your closest SolidQ subsidiary or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, who are welcome to contact me as well.
    - Logical and Physical Modeling for Analytical Applications – an online course I wrote for Pluralsight.
    - Working with Temporal Data in SQL Server – my latest Pluralsight course, where besides theory and implementation I introduce many original ways to optimize temporal queries.

    Forthcoming presentations

    - SQL Bits 12, July 17th – 19th, Telford, UK – I have a full-day pre-conference seminar, Advanced Data Modeling Topics, there.

    Read the article

  • Changes to the LINQ-to-StreamInsight Dialect

    - by Roman Schindlauer
    In previous versions of StreamInsight (1.0 through 2.0), CepStream<> represents temporal streams of many varieties:

    - Streams with ‘open’ inputs (e.g., those defined and composed over CepStream<T>.Create(string streamName))
    - Streams with ‘partially bound’ inputs (e.g., those defined and composed over CepStream<T>.Create(Type adapterFactory, …))
    - Streams with fully bound inputs (e.g., those defined and composed over To*Stream – sequences or DQC)

    The stream may be embedded (where Server.Create is used) or remote (where Server.Connect is used). When adding support for new programming primitives in StreamInsight 2.1, we faced a choice: add a fourth variety (use CepStream<> to represent streams that are bound to the new programming model constructs), or introduce a separate type that represents temporal streams in the new user model. We opted for the latter. Introducing a new type has the effect of reducing the number of (confusing) runtime failures due to inappropriate uses of CepStream<> instances in the incorrect context. The new types are:

    - IStreamable<>, which logically represents a temporal stream.
    - IQStreamable<> : IStreamable<>, which represents a queryable temporal stream. Its relationship to IStreamable<> is analogous to the relationship of IQueryable<> to IEnumerable<>. The developer can compose temporal queries over remote stream sources using this type.

    The syntax of temporal queries composed over IQStreamable<> is mostly consistent with the syntax of our existing CepStream<>-based LINQ provider. However, we have taken the opportunity to refine certain aspects of the language surface. Differences are outlined below. Because 2.1 introduces new types to represent temporal queries, the changes outlined in this post do not impact existing StreamInsight applications using the existing types!

    SelectMany

    StreamInsight does not support the SelectMany operator in its usual form (which is analogous to SQL’s “CROSS APPLY” operator):

        static IEnumerable<R> SelectMany<T, R>(this IEnumerable<T> source, Func<T, IEnumerable<R>> collectionSelector)

    It instead uses SelectMany as a convenient syntactic representation of an inner join. The parameter to the selector function is thus unavailable. Because the parameter isn’t supported, its type in StreamInsight 1.0 – 2.0 wasn’t carefully scrutinized. Unfortunately, the type chosen for the parameter is nonsensical to LINQ programmers:

        static CepStream<R> SelectMany<T, R>(this CepStream<T> source, Expression<Func<CepStream<T>, CepStream<R>>> streamSelector)

    Using Unit as the type for the parameter accurately reflects StreamInsight’s capabilities:

        static IQStreamable<R> SelectMany<T, R>(this IQStreamable<T> source, Expression<Func<Unit, IQStreamable<R>>> streamSelector)

    For queries that succeed – that is, queries that do not reference the stream selector parameter – there is no difference between the code written for the two overloads:

        from x in xs
        from y in ys
        select f(x, y)

    Top-K

    The Take operator used in StreamInsight causes confusion for LINQ programmers because it is applied to the (unbounded) stream rather than the (bounded) window, suggesting that the query as a whole will return k rows:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select x.B).Take(k)

    The use of SelectMany is also unfortunate in this context because it implies the availability of the window parameter within the remainder of the comprehension. The following compiles but fails at runtime:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select win).Take(k)

    The Take operator in 2.1 is applied to the window rather than the stream.

    Before:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select x.B).Take(k)

    After:

        from win in xs.SnapshotWindow()
        from b in
            (from x in win
             orderby x.A
             select x.B).Take(k)
        select b

    Multicast

    We are introducing an explicit multicast operator in order to preserve expression identity, which is important given the semantics of moving code to and from StreamInsight. This also better matches existing LINQ dialects, such as Reactive. This pattern enables expressing multicasting in two ways.

    Implicit:

        var ys = from x in xs
                 where x.A > 1
                 select x;
        var zs = from y1 in ys
                 from y2 in ys.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
                 select y1 + y2;

    Explicit:

        var ys = from x in xs
                 where x.A > 1
                 select x;
        var zs = ys.Multicast(ys1 =>
            from y1 in ys1
            from y2 in ys1.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
            select y1 + y2);

    Notice the product translates an expression using implicit multicast into an expression using the explicit multicast operator. The user does not see this translation.

    Default window policies

    Only default window policies are supported in the new surface. Other policies can be simulated by using AlterEventLifetime.

    Before: xs.SnapshotWindow(WindowInputPolicy.ClipToWindow, SnapshotWindowInputPolicy.Clip)
    After: xs.SnapshotWindow()

    Before: xs.TumblingWindow(TimeSpan.FromSeconds(1), HoppingWindowOutputPolicy.PointAlignToWindowEnd)
    After: xs.TumblingWindow(TimeSpan.FromSeconds(1))

    Before: xs.TumblingWindow(TimeSpan.FromSeconds(1), HoppingWindowOutputPolicy.ClipToWindowEnd)
    After: not supported

    LeftAntiJoin

    Representation of LASJ as a correlated sub-query in the LINQ surface is problematic, as the StreamInsight engine does not support correlated sub-queries (see the discussion of SelectMany). The current syntax requires the introduction of an otherwise unsupported ‘IsEmpty()’ operator. As a result, the pattern is not discoverable and implies capabilities not present in the server. The direct representation of LASJ is used instead.

    Before:

        from x in xs
        where
            (from y in ys
             where x.A > y.B
             select y).IsEmpty()
        select x

    After:

        xs.LeftAntiJoin(ys, (x, y) => x.A > y.B)

    Before:

        from x in xs
        where
            (from y in ys
             where x.A == y.B
             select y).IsEmpty()
        select x

    After:

        xs.LeftAntiJoin(ys, x => x.A, y => y.B)

    ApplyWithUnion

    The ApplyWithUnion methods have been deprecated since their signatures are redundant given the standard SelectMany overloads.

    Before:

        xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count())

    After:

        xs.GroupBy(x => x.A).SelectMany(
            gs =>
            from win in gs.SnapshotWindow()
            select win.Count())

    Before:

        xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count(), r => new { r.Key, Count = r.Payload })

    After:

        from x in xs
        group x by x.A into gs
        from win in gs.SnapshotWindow()
        select new { gs.Key, Count = win.Count() }

    Alternate UDO syntax

    The representation of UDOs in the StreamInsight LINQ dialect confuses cardinalities. Based on the semantics of user-defined operators in StreamInsight, one would expect to construct queries in the following form:

        from win in xs.SnapshotWindow()
        from y in MyUdo(win)
        select y

    Instead, the UDO proxy method is referenced within a projection, and the (many) results returned by the user code are automatically flattened into a stream:

        from win in xs.SnapshotWindow()
        select MyUdo(win)

    The “many-or-one” confusion is exemplified by the following example, which compiles but fails at runtime:

        from win in xs.SnapshotWindow()
        select MyUdo(win) + win.Count()

    The above query must fail because the UDO is in fact returning many values per window while the count aggregate is returning one.

    Original syntax:

        from win in xs.SnapshotWindow()
        select win.UdoProxy(1)

    New alternate syntax:

        from win in xs.SnapshotWindow()
        from y in win.UserDefinedOperator(() => new Udo(1))
        select y

    -or-

        from win in xs.SnapshotWindow()
        from y in win.UdoMacro(1)
        select y

    Notice that this formulation also sidesteps the dynamic type pitfalls of the existing “proxy method” approach to UDOs, in which the type of the UDO implementation (TInput, TOutput) and the type of its constructor arguments (TConfig) need to align in a precise and non-obvious way with the argument and return types for the corresponding proxy method.

    UDSO syntax

    UDSO currently leverages the DataContractSerializer to clone initial state for logical instances of the user operator. Initial state will instead be described by an expression in the new LINQ surface.

    Before: xs.Scan(new Udso())
    After: xs.Scan(() => new Udso())

    Name changes

    ShiftEventTime => AlterEventStartTime: the alter event lifetime overload taking a new start time value has been renamed.
    CountByStartTimeWindow => CountWindow

    Read the article
