Search Results

Search found 59975 results on 2399 pages for 'data comparison'.


  • Java - is this an idiom or pattern, behavior classes with no state

    - by Berlin Brown
    I am trying to incorporate more functional programming idioms into my java development. One pattern that I like the most and avoids side effects is building classes that have behavior but they don't necessarily have any state. The behavior is locked into the methods but they only act on the parameters passed in. The code below is code I am trying to avoid: public class BadObject { private Map<String, String> data = new HashMap<String, String>(); public BadObject() { data.put("data", "data"); } /** * Act on the data class. But this is bad because we can't * rely on the integrity of the object's state. */ public void execute() { data.get("data").toString(); } } The code below is nothing special but I am acting on the parameters and state is contained within that class. We still may run into issues with this class but that is an issue with the method and the state of the data, we can address issues in the routine as opposed to not trusting the entire object. Is this some form of idiom? Is this similar to any pattern that you use? public class SemiStatefulOOP { /** * Private class implies that I can access the members of the <code>Data</code> class * within the <code>SemiStatefulOOP</code> class and I can also access * the getData method from some other class. * * @see Test1 * */ class Data { protected int counter = 0; public int getData() { return counter; } public String toString() { return Integer.toString(counter); } } /** * Act on the data class. */ public void execute(final Data data) { data.counter++; } /** * Act on the data class. */ public void updateStateWithCallToService(final Data data) { data.counter++; } /** * Similar to CLOS (Common Lisp Object System) make instance. */ public Data makeInstance() { return new Data(); } } // End of Class // Issues with the code above: I wanted to declare the Data class private, but then I can't really reference it outside of the class: I can't override the SemiStateful class and access the private members. Usage: final SemiStatefulOOP someObject = new SemiStatefulOOP(); final SemiStatefulOOP.Data data = someObject.makeInstance(); someObject.execute(data); someObject.updateStateWithCallToService(data);
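
    A minimal, hypothetical sketch of the same idea (names are illustrative, not taken from the code above): the behavior lives in a class with no fields of its own, so every call acts only on the data passed in.

      // Stateless "behavior" class: no instance fields, so nothing to corrupt between calls.
      public final class CounterOperations {

          // Simple data holder, analogous to the nested Data class above.
          public static final class Counter {
              private int value;
              public int getValue() { return value; }
          }

          // Everything the method touches arrives through its parameters.
          public Counter increment(Counter counter) {
              counter.value++;
              return counter;
          }
      }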


  • Using JSON Data to Populate a Google Map with Database Objects

    - by MikeH
    I'm revising this question after reading the resources mentioned in the original answers and working through implementing it. I'm using the google maps api to integrate a map into my Rails site. I have a markets model with the following columns: ID, name, address, lat, lng. On my markets/index view, I want to populate a map with all the markets in my markets table. I'm trying to output @markets as json data, and that's where I'm running into problems. I have the basic map displaying, but right now it's just a blank map. I'm following the tutorials very closely, but I can't get the markers to generate dynamically from the json. Any help is much appreciated! Here's my setup: Markets Controller: def index @markets = Market.filter_city(params[:filter]) respond_to do |format| format.html # index.html.erb format.json { render :json => @market} format.xml { render :xml => @market } end end Markets/index view: <head> <script type="text/javascript" src="http://www.google.com/jsapi?key=GOOGLE KEY REDACTED, BUT IT'S THERE" > </script> <script type="text/javascript"> var markets = <%= @markets.to_json %>; </script> <script type="text/javascript" charset="utf-8"> google.load("maps", "2.x"); google.load("jquery", "1.3.2"); </script> </head> <body> <div id="map" style="width:400px; height:300px;"></div> </body> Public/javascripts/application.js: function initialize() { if (GBrowserIsCompatible() && typeof markets != 'undefined') { var map = new GMap2(document.getElementById("map")); map.setCenter(new GLatLng(40.7371, -73.9903), 13); map.addControl(new GLargeMapControl()); function createMarker(latlng, market) { var marker = new GMarker(latlng); var html="<strong>"+market.name+"</strong><br />"+market.address; GEvent.addListener(marker,"click", function() { map.openInfoWindowHtml(latlng, html); }); return marker; } var bounds = new GLatLngBounds; for (var i = 0; i < markets.length; i++) { var latlng=new GLatLng(markets[i].lat,markets[i].lng) bounds.extend(latlng); map.addOverlay(createMarker(latlng, markets[i])); } } } window.onload=initialize; window.onunload=GUnload;
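
    One detail worth checking (an observation, not a confirmed fix): the index action assigns @markets, but the JSON and XML responders render @market, so the rendered JSON may well be empty. A corrected responder would look roughly like this:

      respond_to do |format|
        format.html # index.html.erb
        format.json { render :json => @markets }
        format.xml  { render :xml  => @markets }
      end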


  • How to get HABTM associated data using hasOne binding

    - by Moin
    Hi, I am following the example documented at http://book.cakephp.org/view/83/hasAndBelongsToMany-HABTM and I am trying to retrieve associated data using hasOne. I created 3 tables: posts, tags and posts_tags. I wrote the following code to debug Posts. $this->Post->bindModel(array( 'hasOne' => array( 'PostsTag', 'FilterTag' => array( 'className' => 'Tag', 'foreignKey' => false, 'conditions' => array('FilterTag.id = PostsTag.tag_id') )))); $output=$this->Post->find('all', array( 'fields' => array('Post.*') )); debug($output); I was expecting output something like the one below. Array ( 0 => Array { [Post] => Array ( [id] => 1 [title] => test post 1 ) [Tag] => Array ( [0] => Array ( [id] => 1 [name] => php ) [1] => Array ( [id] => 2 [name] => javascript ) [2] => Array ( [id] => 3 [name] => xml ) ) } But my output does not have Tags at all. Here is what I got. Array ( [0] => Array ( [Post] => Array ( [id] => 1 [title] => test post1 ) ) [1] => Array ( [Post] => Array ( [id] => 2 [title] => test post2 ) ) ) How do I get the associated tags along with the post? I know I am missing something, but I am unable to figure out what. Any help would be highly appreciated.


  • Parsing Data in XML and Storing to DB in Python

    - by Rakesh
    Hi guys, I have a problem parsing an XML file and entering the data into SQLite; I need to store the characters contained in each TOKEN, like 111, AAA, BBB etc. <DOCUMENT> <PAGE width="544.252" height="634.961" number="1" id="p1"> <MEDIABOX x1="0" y1="0" x2="544.252" y2="634.961"/> <BLOCK id="p1_b1"> <TEXT width="37.7" height="74.124" id="p1_t1" x="51.1" y="20.8652"> <TOKEN sid="p1_s11" id="p1_w1" font-name="Verdanae" bold="yes" italic="no">111</TOKEN> </TEXT> </BLOCK> <BLOCK id="p1_b3"> <TEXT width="151.267" height="10.725" id="p1_t6" x="24.099" y="572.096"> <TOKEN sid="p1_s35" id="p1_w22" font-name="Verdanae" bold="yes" italic="yes">AAA</TOKEN> <TOKEN sid="p1_s36" id="p1_w23" font-name="verdanae" bold="yes" italic="no">BBB</TOKEN> <TOKEN sid="p1_s37" id="p1_w24" font-name="verdanae" bold="yes" italic="no">CCC</TOKEN> </TEXT> </BLOCK> <BLOCK id="p1_b4"> <TEXT width="82.72" height="26" id="p1_t7" x="55.426" y="138.026"> <TOKEN sid="p1_s42" id="p1_w29" font-name="verdanae" bold="yes" italic="no">DDD</TOKEN> <TOKEN sid="p1_s43" id="p1_w30" font-name="verdanae" bold="yes" italic="no">EEE</TOKEN> </TEXT> <TEXT width="101.74" height="26" id="p1_t8" x="55.406" y="162.026"> <TOKEN sid="p1_s45" id="p1_w31" font-name="verdanae" bold="yes" italic="no">FFF</TOKEN> </TEXT> <TEXT width="152.96" height="26" id="p1_t9" x="55.406" y="186.026"> <TOKEN sid="p1_s47" id="p1_w32" font-name="verdanae" bold="yes" italic="no">GGG</TOKEN> <TOKEN sid="p1_s48" id="p1_w33" font-name="verdanae" bold="yes" italic="no">HHH</TOKEN> </TEXT> </BLOCK> </PAGE> </DOCUMENT> In .NET this is done with 3 foreach loops (1. for "DOCUMENT/PAGE/BLOCK", 2. "TEXT", 3. "TOKEN") and then it is entered into the DB. I don't get how to do it in Python; I am trying it with the lxml module.
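
    A rough sketch of the equivalent nested loops in Python, using lxml and sqlite3; the database name, table name and schema are made up for illustration:

      import sqlite3
      from lxml import etree

      conn = sqlite3.connect("tokens.db")
      conn.execute("CREATE TABLE IF NOT EXISTS tokens "
                   "(page_id TEXT, block_id TEXT, text_id TEXT, value TEXT)")

      tree = etree.parse("input.xml")
      for page in tree.findall(".//PAGE"):
          for block in page.findall("BLOCK"):
              for text in block.findall("TEXT"):
                  for token in text.findall("TOKEN"):
                      conn.execute("INSERT INTO tokens VALUES (?, ?, ?, ?)",
                                   (page.get("id"), block.get("id"),
                                    text.get("id"), token.text))
      conn.commit()
      conn.close()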


  • Problem with skipping empty cells while importing data from .xlsx file in asp.net c# application

    - by Eedoh
    Hi to all. I have a problem with reading .xlsx files in asp.net mvc2.0 application, using c#. Problem occurs when reading empty cell from .xlsx file. My code simply skips this cell and reads the next one. For example, if the contents of .xlsx file are: FirstName LastName Age John 36 They will be read as: FirstName LastName Age John 36 Here's the code that does the reading. private string GetValue(Cell cell, SharedStringTablePart stringTablePart) { if (cell.ChildElements.Count == 0) return string.Empty; //get cell value string value = cell.ElementAt(0).InnerText;//CellValue.InnerText; //Look up real value from shared string table if ((cell.DataType != null) && (cell.DataType == CellValues.SharedString)) value = stringTablePart.SharedStringTable.ChildElements[Int32.Parse(value)].InnerText; return value; } private DataTable ExtractExcelSheetValuesToDataTable(string xlsxFilePath, string sheetName) { DataTable dt = new DataTable(); using (SpreadsheetDocument myWorkbook = SpreadsheetDocument.Open(xlsxFilePath, true)) { //Access the main Workbook part, which contains data WorkbookPart workbookPart = myWorkbook.WorkbookPart; WorksheetPart worksheetPart = null; if (!string.IsNullOrEmpty(sheetName)) { Sheet ss = workbookPart.Workbook.Descendants<Sheet>().Where(s => s.Name == sheetName).SingleOrDefault<Sheet>(); worksheetPart = (WorksheetPart)workbookPart.GetPartById(ss.Id); } else { worksheetPart = workbookPart.WorksheetParts.FirstOrDefault(); } SharedStringTablePart stringTablePart = workbookPart.SharedStringTablePart; if (worksheetPart != null) { Row lastRow = worksheetPart.Worksheet.Descendants<Row>().LastOrDefault(); Row firstRow = worksheetPart.Worksheet.Descendants<Row>().FirstOrDefault(); if (firstRow != null) { foreach (Cell c in firstRow.ChildElements) { string value = GetValue(c, stringTablePart); dt.Columns.Add(value); } } if (lastRow != null) { for (int i = 2; i <= lastRow.RowIndex; i++) { DataRow dr = dt.NewRow(); bool empty = true; Row row = worksheetPart.Worksheet.Descendants<Row>().Where(r => i == r.RowIndex).FirstOrDefault(); int j = 0; if (row != null) { foreach (Cell c in row.ChildElements) { //Get cell value string value = GetValue(c, stringTablePart); if (!string.IsNullOrEmpty(value) && value != "") empty = false; dr[j] = value; j++; if (j == dt.Columns.Count) break; } if (empty) break; dt.Rows.Add(dr); } } } } } return dt; }
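
    A likely explanation for this symptom (offered as a suggestion, not a verified diagnosis) is that Open XML simply omits empty cells, so looping over row.ChildElements never sees them; the cell's CellReference (e.g. "B2") has to be used to work out which column a cell really belongs to. A hedged sketch of such a helper, not part of the original code:

      // Derive a zero-based column index from a cell reference such as "B2" or "AA10".
      private static int GetColumnIndex(string cellReference)
      {
          int index = 0;
          foreach (char ch in cellReference)
          {
              if (!char.IsLetter(ch)) break;
              index = index * 26 + (char.ToUpperInvariant(ch) - 'A' + 1);
          }
          return index - 1;
      }

      // Inside the row loop, place each value by its real column instead of by j++:
      // dr[GetColumnIndex(c.CellReference.Value)] = GetValue(c, stringTablePart);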


  • Change Label Control Property Based on Data from SqlDataSource Inside a Repeater

    - by Furqan Muhammad Khan
    I am using a repeater control to populate data from SqlDataSource into my custom designed display-box. <asp:Repeater ID="Repeater1" runat="server" DataSourceID="SqlDataSource1" OnDataBinding="Repeater_ItemDataBound"> <HeaderTemplate> </HeaderTemplate> <ItemTemplate> <div class="bubble-content"> <div style="float: left;"> <h2 class="bubble-content-title"><%# Eval("CommentTitle") %></h2> </div> <div style="text-align: right;"> <asp:Label ID="lbl_category" runat="server" Text=""><%# Eval("CommentType") %> </asp:Label> </div> <div style="float: left;"> <p><%# Eval("CommentContent") %></p> </div> </div> </ItemTemplate> <FooterTemplate> </FooterTemplate> </asp:Repeater> <asp:SqlDataSource ID="mySqlDataSource" runat="server" ConnectionString="<%$ ConnectionStrings:myConnectionString %>" SelectCommand="SELECT [CommentTitle],[CommentType],[CommentContent] FROM [Comments] WHERE ([PostId] = @PostId)"> <SelectParameters> <asp:QueryStringParameter Name="PostId" QueryStringField="id" Type="String" /> </SelectParameters> </asp:SqlDataSource> Now, there can be three types of "CommentTypes" in the database. I want to change the CssClass property of "lbl_category" based on the value of [CommentType]. I tried doing this: <asp:Label ID="lbl_category" runat="server" CssClass="<%# Eval("CommentType") %>" Text=""><%# Eval("CommentType") %></asp:Label> But this gives an error: "The server control is not well formed" and haven't been able to find a way to achieve this in the code behind. Can someone please help?
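
    One commonly suggested fix (a hedged suggestion, not a verified answer): the CssClass attribute is delimited with double quotes, which collide with the quotes inside Eval, hence the "not well formed" error; wrapping the databinding expression in single quotes avoids the clash.

      <asp:Label ID="lbl_category" runat="server"
                 CssClass='<%# Eval("CommentType") %>'
                 Text='<%# Eval("CommentType") %>' />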


  • Damaged data when gzipping

    - by RadiantHeart
    This is the script I have written for gzipping content on my site, located in 'gzip.php'. The way I use it is that on pages where I want to enable gzipping I include the file at the top, and at the bottom I call the output function like this: print_gzipped_page('javascript') If the file is a CSS file I use 'css' as the $type argument, and if it's a PHP file I call the function without declaring any arguments. The script works fine in all browsers except Opera, which gives an error saying it could not decode the page due to damaged data. Can anyone tell me what I have done wrong? <?php function print_gzipped_page($type = false) { if(headers_sent()){ $encoding = false; } elseif( strpos($_SERVER['HTTP_ACCEPT_ENCODING'], 'x-gzip') !== false ){ $encoding = 'x-gzip'; } elseif( strpos($_SERVER['HTTP_ACCEPT_ENCODING'],'gzip') !== false ){ $encoding = 'gzip'; } else{ $encoding = false; } if ($type!=false) { $type_header_array = array("css" => "Content-Type: text/css", "javascript" => "Content-Type: application/x-javascript"); $type_header = $type_header_array[$type]; } $contents = ob_get_contents(); ob_end_clean(); $etag = '"' . md5($contents) . '"'; $etag_header = 'Etag: ' . $etag; header($etag_header); if ($type!=false) { header($type_header); } if (isset($_SERVER['HTTP_IF_NONE_MATCH']) and $_SERVER['HTTP_IF_NONE_MATCH']==$etag) { header("HTTP/1.1 304 Not Modified"); exit(); } if($encoding){ header('Content-Encoding: '.$encoding); print("\x1f\x8b\x08\x00\x00\x00\x00\x00"); $size = strlen($contents); $contents = gzcompress($contents, 9); $contents = substr($contents, 0, $size); } echo $contents; exit(); } ob_start(); ob_implicit_flush(0); ?> Additional info: the script works if the length of the document being compressed is only 10-15 characters.
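
    For comparison, a simpler hedged sketch that lets PHP build a complete gzip stream with gzencode(), instead of hand-writing the gzip header and truncating the output of gzcompress() (which produces zlib-format, not gzip-format, data):

      <?php
      $contents = ob_get_contents();
      ob_end_clean();

      if (strpos($_SERVER['HTTP_ACCEPT_ENCODING'], 'gzip') !== false) {
          header('Content-Encoding: gzip');
          $contents = gzencode($contents, 9); // full gzip stream, header and CRC included
      }
      echo $contents;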


  • StreamWriter appends random data

    - by void
    Hi I'm seeing odd behaviour using the StreamWriter class writing extra data to a file using this code: public void WriteToCSV(string filename) { StreamWriter streamWriter = null; try { streamWriter = new StreamWriter(filename); Log.Info("Writing CSV report header information ... "); streamWriter.WriteLine("\"{0}\",\"{1}\",\"{2}\",\"{3}\"", ((int)CSVRecordType.Header).ToString("D2", CultureInfo.CurrentCulture), m_InputFilename, m_LoadStartDate, m_LoadEndDate); int recordCount = 0; if (SummarySection) { Log.Info("Writing CSV report summary section ... "); foreach (KeyValuePair<KeyValuePair<LoadStatus, string>, CategoryResult> categoryResult in m_DataLoadResult.DataLoadResults) { streamWriter.WriteLine("\"{0}\",\"{1}\",\"{2}\",\"{3}\"", ((int)CSVRecordType.Summary).ToString("D2", CultureInfo.CurrentCulture), categoryResult.Value.StatusString, categoryResult.Value.Count.ToString(CultureInfo.CurrentCulture), categoryResult.Value.Category); recordCount++; } } Log.Info("Writing CSV report cases section ... "); foreach (KeyValuePair<KeyValuePair<LoadStatus, string>, CategoryResult> categoryResult in m_DataLoadResult.DataLoadResults) { foreach (CaseLoadResult result in categoryResult.Value.CaseLoadResults) { if ((LoadStatus.Success == result.Status && SuccessCases) || (LoadStatus.Warnings == result.Status && WarningCases) || (LoadStatus.Failure == result.Status && FailureCases) || (LoadStatus.NotProcessed == result.Status && NotProcessedCases)) { streamWriter.Write("\"{0}\",\"{1}\",\"{2}\",\"{3}\",\"{4}\"", ((int)CSVRecordType.Result).ToString("D2", CultureInfo.CurrentCulture), result.Status, result.UniqueId, result.Category, result.ClassicReference); if (RawResponse) { streamWriter.Write(",\"{0}\"", result.ResponseXml); } streamWriter.WriteLine(); recordCount++; } } } streamWriter.WriteLine("\"{0}\",\"{1}\"", ((int)CSVRecordType.Count).ToString("D2", CultureInfo.CurrentCulture), recordCount); Log.Info("CSV report written to '{0}'", fileName); } catch (IOException execption) { string errorMessage = string.Format(CultureInfo.CurrentCulture, "Unable to write XML report to '{0}'", fileName); Log.Error(errorMessage); Log.Error(exception.Message); throw new MyException(errorMessage, exception); } finally { if (null != streamWriter) { streamWriter.Close(); } } } The file produced contains a set of records on each line 0 to N, for example: [Record Zero] [Record One] ... [Record N] However the file produced either contains nulls or incomplete records from further up the file appended to the end. For example: [Record Zero] [Record One] ... [Record N] [Lots of nulls] or [Record Zero] [Record One] ... [Record N] [Half complete records] This also happens in separate pieces of code that also use the StreamWriter class. Furthermore, the files produced all have sizes that are multiples of 1024. I've been unable to reproduce this behaviour on any other machine and have tried recreating the environment. Previous versions of the application didn't exhibite this behaviour despite having the same code for the methods in question. EDIT: Added extra code.


  • How to Transfer Large File from MS Word Add-In (VBA) to Web Server?

    - by Ian Robinson
    Overview I have a Microsoft Word Add-In, written in VBA (Visual Basic for Applications), that compresses a document and all of it's related contents (embedded media) into a zip archive. After creating the zip archive it then turns the file into a byte array and posts it to an ASMX web service. This mostly works. Issues The main issue I have is transferring large files to the web site. I can successfully upload a file that is around 40MB, but not one that is 140MB (timeout/general failure). A secondary issue is that building the byte array in the VBScript Word Add-In can fail by running out of memory on the client machine if the zip archive is too large. Potential Solutions I am considering the following options and am looking for feedback on either option or any other suggestions. Option One Opening a file stream on the client (MS Word VBA) and reading one "chunk" at a time and transmitting to ASMX web service which assembles the "chunks" into a file on the server. This has the benefit of not adding any additional dependencies or components to the application, I would only be modifying existing functionality. (Fewer dependencies is better as this solution should work in a variety of server environments and be relatively easy to set up.) Question: Are there examples of doing this or any recommended techniques (either on the client in VBA or in the web service in C#/VB.NET)? Option Two I understand WCF may provide a solution to the issue of transferring large files by "chunking" or streaming data. However, I am not very familiar with WCF, and am not sure what exactly it is capable of or if I can communicate with a WCF service from VBA. This has the downside of adding another dependency (.NET 3.0). But if using WCF is definitely a better solution I may not mind taking that dependency. Questions: Does WCF reliably support large file transfers of this nature? If so, what does this involve? Any resources or examples? Are you able to call a WCF service from VBA? Any examples?
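
    For option one, a hedged server-side sketch of a "chunk append" web method; the method name, signature and upload path are illustrative, not an existing API. The VBA side would read the zip archive in fixed-size blocks and call this method once per block.

      [WebMethod]
      public void UploadChunk(string fileName, byte[] chunk)
      {
          // Append each chunk to the target file; the first call creates it.
          string path = Path.Combine(Server.MapPath("~/App_Data/uploads"),
                                     Path.GetFileName(fileName));
          using (var stream = new FileStream(path, FileMode.Append, FileAccess.Write))
          {
              stream.Write(chunk, 0, chunk.Length);
          }
      }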


  • send data from one table to another page

    - by user91599
    I have this table. When I click on a link in a table row, I want to redirect to another page and have that row's data sent to the new page. I have not found how to start and I'm really stuck; can anyone help me? Here is the table code: <table cellpadding="0" cellspacing="0" border="0" class="display" id="example"> <thead> <tr> <th>Date</th> <th>provider</th> <th>CI</th> <th>CELL</th> <th>BSC</th> <th>Commentaire</th> <th>nbr</th> <th>Type</th> <th><img src="{{ asset('image/Modify.png') }}" ALIGN="CENTER"/></th> <th><img src="{{ asset('image/Info.png') }}" ALIGN="CENTER"/></th> <th><img src="{{ asset('image/Male.png') }}" ALIGN="CENTER"/></th> <th>type_alertes</th> </tr> </thead> <tbody> <div class="textbox"> <h2> Information KPI dégradées</h2> <div class="textbox_content" id="kpi_dégrades"> {% for liste in listes %} <tr class="gradeU"> <td>{{ liste.DAT }} </td> <td>{{ liste.PROVIDER}} </td> <td>{{ liste.CI}} </td> <td>{{ liste.CELL}} </td> <td>{{ liste.BSC}}</td> <td>{{ liste.Cmts}}</td> <td >{{ liste.nbr}}</td> <td>{{ liste.TYPE}}</td> <td><a class="edit" href="">Edit</a></td> <td onclick="getInfo('{{ liste.CELL}}')">Information KPI dégradés</td> <td>{{ liste.user_name}}</td> <td>{{ liste.type_alertes}}</td> </tr> {% endfor %} </div> </div> </tbody>
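
    One common approach (sketched with a hypothetical route name) is to put the row's identifying values into the link itself, so the target page can read them back as request parameters:

      <td>
          <a class="edit" href="{{ path('alert_edit', {'cell': liste.CELL, 'ci': liste.CI}) }}">Edit</a>
      </td>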


  • Error when passing data between views

    - by Kristoffer Bilek
    I'm using a plist (array of dictionaries) to populate a tableview. Now, when a cell i pressed, I want to pass the reprecented dictionary to the detail view with a segue. It's working properly to fill the tableview, but how do I send the same value to the detail view? This is my try: #import "WinesViewController.h" #import "WineObject.h" #import "WineCell.h" #import "WinesDetailViewController.h" @interface WinesViewController () @end @implementation WinesViewController - (id)initWithStyle:(UITableViewStyle)style { self = [super initWithStyle:style]; if (self) { // Custom initialization } return self; } - (void)viewDidLoad { [super viewDidLoad]; } - (void)viewDidUnload { [super viewDidUnload]; } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return (interfaceOrientation == UIInterfaceOrientationPortrait); } #pragma mark - Table view data source - (void)viewWillAppear:(BOOL)animated { wine = [[WineObject alloc] initWithLibraryName:@"Wine"]; self.title = @"Vinene"; [self.tableView deselectRowAtIndexPath:[self.tableView indexPathForSelectedRow] animated:YES]; } - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { // Return the number of sections. return 1; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { // Return the number of rows in the section. return [wine libraryCount]; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"wineCell"; WineCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; // Configure the cell... cell.nameLabel.text = [[wine libraryItemAtIndex:indexPath.row] valueForKey:@"Name"]; cell.districtLabel.text = [[wine libraryItemAtIndex:indexPath.row] valueForKey:@"District"]; cell.countryLabel.text = [[wine libraryItemAtIndex:indexPath.row] valueForKey:@"Country"]; cell.bottleImageView.image = [UIImage imageNamed:[[wine libraryItemAtIndex:indexPath.row] valueForKey:@"Image"]]; cell.flagImageView.image = [UIImage imageNamed:[[wine libraryItemAtIndex:indexPath.row] valueForKey:@"Flag"]]; cell.fyldeImageView.image = [UIImage imageNamed:[[wine libraryItemAtIndex:indexPath.row] valueForKey:@"Fylde"]]; cell.friskhetImageView.image = [UIImage imageNamed:[[wine libraryItemAtIndex:indexPath.row] valueForKey:@"Friskhet"]]; cell.garvesyreImageView.image = [UIImage imageNamed:[[wine libraryItemAtIndex:indexPath.row] valueForKey:@"Garvesyre"]]; return cell; } #pragma mark - Table view delegate - (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender { if ([[segue identifier] isEqualToString:@"DetailSegue"]) { NSIndexPath *selectedRowIndex = [self.tableView indexPathForSelectedRow]; WinesDetailViewController *winesdetailViewController = [segue destinationViewController]; //Here comes the error line: winesdetailViewController.winedetailName = [wine objectAtIndex:indexPath.row] valueForKey:@"Name"]; } } I get one error for: winesdetailViewController.winedetailName = [wine objectAtIndex:indexPath.row] valueForKey:@"Name"]; Use of undeclaired identifier: indexPath. did you mean NSIndexPath? I think I've missed something important here..
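
    A hedged sketch of the prepareForSegue body, reusing the selectedRowIndex that is already computed and the same libraryItemAtIndex: accessor used in cellForRowAtIndexPath, rather than the undeclared indexPath (and with the missing opening bracket restored):

      - (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender {
          if ([[segue identifier] isEqualToString:@"DetailSegue"]) {
              NSIndexPath *selectedRowIndex = [self.tableView indexPathForSelectedRow];
              WinesDetailViewController *winesdetailViewController = [segue destinationViewController];
              winesdetailViewController.winedetailName =
                  [[wine libraryItemAtIndex:selectedRowIndex.row] valueForKey:@"Name"];
          }
      }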


  • WP7 databinding displays int but not string?

    - by user1794106
    Whenever I test my app.. it binds the data from Note class and displays it if it isn't a string. But for the string variables it wont bind. What am I doing wrong? Inside my main: <Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0"> <ListBox x:Name="Listbox" SelectionChanged="listbox_SelectionChanged" ItemsSource="{Binding}"> <ListBox.ItemTemplate> <DataTemplate> <Border Width="800" MinHeight="60"> <StackPanel> <TextBlock x:Name="Title" VerticalAlignment="Center" FontSize="{Binding TextSize}" Text="{Binding Name}"/> <TextBlock x:Name="Date" VerticalAlignment="Center" FontSize="{Binding TextSize}" Text="{Binding Modified}"/> </StackPanel> </Border> </DataTemplate> </ListBox.ItemTemplate> </ListBox> </Grid> </Grid> in code behind: protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e) { MessageBox.Show("enters onNav Main"); DataContext = null; DataContext = Settings.NotesList; Settings.CurrentNoteIndex = -1; Listbox.SelectedIndex = -1; if (Settings.NotesList != null) { if (Settings.NotesList.Count == 0) { Notes.Text = "No Notes"; } else { Notes.Text = ""; } } } and public static class Settings { public static ObservableCollection<Note> NotesList; static IsolatedStorageSettings settings; private static int currentNoteIndex; static Settings() { NotesList = new ObservableCollection<Note>(); settings = IsolatedStorageSettings.ApplicationSettings; MessageBox.Show("enters constructor settings"); } notes class: public class Note { public DateTimeOffset Modified { get; set; } public string Title { get; set; } public string Content { get; set; } public int TextSize { get; set; } public Note() { MessageBox.Show("enters Note Constructor"); Modified = DateTimeOffset.Now; Title = "test"; Content = "test"; TextSize = 32; } }
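
    One mismatch the XAML and the model disagree on (an observation, not a confirmed diagnosis): the template binds Text="{Binding Name}", but the Note class exposes Title, so the string binding silently fails while Modified and TextSize resolve. A hedged correction:

      <TextBlock x:Name="Title" VerticalAlignment="Center"
                 FontSize="{Binding TextSize}"
                 Text="{Binding Title}" />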


  • SQL-How to retrieve the correct data using php

    - by Programatt
    I am new to SQL so please excuse my question if it is simple. I have a database with a few tables. 1 is a users table, the others are application tables that contain the users preferences for receiving notifications about that application based on the country they are interested in. What I want to do, is retrieve the e-mail address of all users that have an interest in that country. I am struggling to think about how to do this. I currently have the following query constructed, and the code to populate the values function check($string) { if (isset($_POST[$string])) { $print = implode(', ', $_POST[$string]); //Converts an array into a single string $imanageSQLArr = Array(); if (substr_count($print,'Benelux') > 0) { $imanageSQLArr[0] = "checked"; } else { $imanageSQLArr[0] = "off"; } if (substr_count($print, 'France') > 0) { $imanageSQLArr[1] = "checked"; } else { $imanageSQLArr[1] = "off"; } if (substr_count($print, 'Germany') > 0) { $imanageSQLArr[2] = "checked"; } else { $imanageSQLArr[2] = "off"; } if (substr_count($print, 'Italy') > 0) { $imanageSQLArr[3] = "checked"; } else { $imanageSQLArr[3] = "off"; } if (substr_count($print, 'Netherlands') > 0) { $imanageSQLArr[4] = "checked"; } else { $imanageSQLArr[4] = "off"; } if (substr_count($print, 'Portugal') > 0) { $imanageSQLArr[5] = "checked"; } else { $imanageSQLArr[5] = "off"; } if (substr_count($print, 'Spain') > 0) { $imanageSQLArr[6] = "checked"; } else { $imanageSQLArr[6] = "off"; } if (substr_count($print, 'Sweden') > 0) { $imanageSQLArr[7] = "checked"; } else { $imanageSQLArr[7] = "off"; } if (substr_count($print, 'Switzerland') > 0) { $imanageSQLArr[8] = "checked"; } else { $imanageSQLArr[8] = "off"; } if (substr_count($print, 'UK') > 0) { $imanageSQLArr[9] = "checked"; } else { $imanageSQLArr[9] = "off"; } and the query $tocheck = $db->prepare( "SELECT users.email FROM users,app WHERE users.id=app.userid AND BENELUX=:BENELUX AND FRANCE=:FRANCE AND GERMANY=:GERMANY AND ITALY=:ITALY AND NETHERLANDS=:NETHERLANDS AND PORTUGAL=:PORTUGAL AND SPAIN=:SPAIN AND SWEDEN=:SWEDEN AND SWITZERLAND=:SWITZERLAND AND UK=:UK"); $tocheck->execute($country); $row = $tocheck->fetchAll(); This does retrieve data, but only people who's preferences match EXACTLY what is put (so what they haven't selected is taken into account as much as what they have). Any help would be greatly appreciated.


  • Cakephp query doesn't render correct data

    - by user2915012
    I'm totally new in cakephp and fetching problem at the time of query to render data I tried this to find out categories/warehouses table info but failed.. $cart_products = $this->Order->OrdersProduct->find('all', array( 'fields' => array('*'), 'contain' => array('Category'), 'joins' => array( array( 'table' => 'products', 'alias' => 'Product', 'type' => 'LEFT', 'conditions' => array('Product.id = OrdersProduct.product_id') ), array( 'table' => 'orders', 'alias' => 'Order', 'type' => 'LEFT', 'conditions' => array('Order.id = OrdersProduct.order_id') ) ), 'conditions' => array( 'Order.store_id' => $store_id, 'Order.order_status' => 'in_cart' ) )); I need the result something like this... Array ( [0] => Array ( [OrdersProduct] => Array ( [id] => 1 [order_id] => 1 [product_id] => 16 [qty] => 10 [created] => 2013-10-24 08:04:33 [modified] => 2013-10-24 08:04:33 ) [Product] => Array ( [id] => 16 [part] => 56-987xyz [title] => iPhone 5 battery [description] => iPhone 5c description [wholesale_price] => 4 [retail_price] => 8 [purchase_cost] => 2 [sort_order] => [Category] => array( [id] => 1, [name] => Iphone 5 ) [Warehouse] => array( [id] => 1, [name] => Warehouse1 ) [weight] => [created] => 2013-10-22 12:14:57 [modified] => 2013-10-22 12:14:57 ) ) ) How can I find this? Can anybody help me? thanks


  • How can I represent a line of music notes in a way that allows fast insertion at any index?

    - by chairbender
    For "fun", and to learn functional programming, I'm developing a program in Clojure that does algorithmic composition using ideas from this theory of music called "Westergaardian Theory". It generates lines of music (where a line is just a single staff consisting of a sequence of notes, each with pitches and durations). It basically works like this: Start with a line consisting of three notes (the specifics of how these are chosen are not important). Randomly perform one of several "operations" on this line. The operation picks randomly from all pairs of adjacent notes that meet a certain criteria (for each pair, the criteria only depends on the pair and is independent of the other notes in the line). It inserts 1 or several notes (depending on the operation) between the chosen pair. Each operation has its own unique criteria. Continue randomly performing these operations on the line until the line is the desired length. The issue I've run into is that my implementation of this is quite slow, and I suspect it could be made faster. I'm new to Clojure and functional programming in general (though I'm experienced with OO), so I'm hoping someone with more experience can point out if I'm not thinking in a functional paradigm or missing out on some FP technique. My current implementation is that each line is a vector containing maps. Each map has a :note and a :dur. :note's value is a keyword representing a musical note like :A4 or :C#3. :dur's value is a fraction, representing the duration of the note (1 is a whole note, 1/4 is a quarter note, etc...). So, for example, a line representing the C major scale starting on C3 would look like this: [ {:note :C3 :dur 1} {:note :D3 :dur 1} {:note :E3 :dur 1} {:note :F3 :dur 1} {:note :G3 :dur 1} {:note :A4 :dur 1} {:note :B4 :dur 1} ] This is a problematic representation because there's not really a quick way to insert into an arbitrary index of a vector. But insertion is the most frequently performed operation on these lines. My current terrible function for inserting notes into a line basically splits the vector using subvec at the point of insertion, uses conj to join the first part + notes + last part, then uses flatten and vec to make them all be in a one-dimensional vector. For example if I want to insert C3 and D3 into the the C major scale at index 3 (where the F3 is), it would do this (I'll use the note name in place of the :note and :dur maps): (conj [C3 D3 E3] [C3 D3] [F3 G3 A4 B4]), which creates [C3 D3 E3 [C3 D3] [F3 G3 A4 B4]] (vec (flatten previous-vector)) which gives [C3 D3 E3 C3 D3 F3 G3 A4 B4] The run time of that is O(n), AFAIK. I'm looking for a way to make this insertion faster. I've searched for information on Clojure data structures that have fast insertion but haven't found anything that would work. I found "finger trees" but they only allow fast insertion at the start or end of the list. Edit: I split this into two questions. The other part is here.


  • Data Annotations validation Built into model

    - by Josh
    I want to build an object model that automatically wires in validation when I attempt to save an object. I am using DataAnnotations for my validation, and it all works well, but I think my inheritance is whacked. I am looking here for some guidance on a better way to wire in my validation. So, to build in validation I have this interface public interface IValidatable { bool IsValid { get; } ValidationResponse ValidationResults { get; } void Validate(); } Then, I have a base class that all my objects inherit from. I did a class because I wanted to wire in the validation calls automatically. The issue is that the validation has to know the type of the class is it validating. So I use Generics like so. public class CoreObjectBase<T> : IValidatable where T : CoreObjectBase<T> { #region IValidatable Members public virtual bool IsValid { get { // First, check rules that always apply to this type var result = new Validator<T>().Validate((T)this); // return false if any violations occurred return !result.HasViolations; } } public virtual ValidationResponse ValidationResults { get { var result = new Validator<T>().Validate((T)this); return result; } } public virtual void Validate() { // First, check rules that always apply to this type var result = new Validator<T>().Validate((T)this); // throw error if any violations were detected if (result.HasViolations) throw new RulesException(result.Errors); } #endregion } So, I have this circular inheritance statement. My classes look like this then: public class MyClass : CoreObjectBase<MyClass> { } But the problem occurs when I have a more complicated model. Because I can only inherit from one class, when I have a situation where inheritance makes sense I believe the child classes won't have validation on their properties. public class Parent : CoreObjectBase<Parent> { //properties validated } public class Child : Parent { //properties not validated? } I haven't really tested the validation in these cases yet, but I am pretty sure that anything in child with a data annotation on it will not be automatically validated when I call Child.Validate(); due to the way the inheritance is configured. Is there a better way to do this?


  • json data not rendered in backbone view

    - by user2535706
    I have been trying to render the json data to the view by calling the rest api and the code is as follows: var Profile = Backbone.Model.extend({ dataType:'jsonp', defaults: { intuitId: null, email: null, type: null }, }); var ProfileList = Backbone.Collection.extend({ model: Profile, url: '/v1/entities/6414256167329108895' }); var ProfileView = Backbone.View.extend({ el: "#profiles", template: _.template($('#profileTemplate').html()), render: function() { _.each(this.model.models, function(profile) { var profileTemplate = this.template(this.model.toJSON()); $(this.el).append(tprofileTemplate); }, this); return this; } }); var profiles = new ProfileList(); var profilesView = new ProfileView({model: profiles}); profiles.fetch(); profilesView.render(); and the html file is as follows: <!DOCTYPE html> <html> <head> <title>SPA Example</title> <!-- <link rel="stylesheet" type="text/css" href="src/css/reset.css" /> <link rel="stylesheet" type="text/css" href="src/css/harmony_compiled.css" /> --> </head> <body class="harmony"> <header> <div class="title">SPA Example</div> </header> <div id="profiles"></div> <script id="profileTemplate" type="text/template"> <div class="profile"> <div class="info"> <div class="intuitId"> <%= intuitId %> </div> <div class="email"> <%= email %> </div> <div class="type"> <%= type %> </div> </div> </div> </script> </body> </html> This gives me an error and the render function isn't invoking properly and the render function is called even before the REST API returns the JSON response. Could anyone please help me to figure out where I went wrong. Any help is highly appreciated Thank you
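
    Since fetch() is asynchronous, calling render() immediately afterwards runs before the JSON has arrived. A hedged sketch that defers rendering until the data is loaded, and passes the collection as a collection rather than as a model:

      var profiles = new ProfileList();
      var profilesView = new ProfileView({ collection: profiles });

      profiles.fetch({
          success: function () {
              profilesView.render();   // the collection is populated at this point
          }
      });

    Inside the view, iterating with this.collection.each(...) would then replace the loop over this.model.models.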


  • How to delete rows based on comparison from Data Flow Task in an SSIS?

    - by vikasde
    I have a DataFlow task with two OLE DB Source objects. This is the SQL I want to achieve using SSIS: Insert into server2.db.dbo.[table2] (...) Select col1, col2, col3 ... from Server1.db.dbo.[table1] where [table1.col1] not in (Select col5 from server2.db.dbo.[table2] Where ...) I am pretty new to SSIS and not sure how to achieve this. I thought I could do this using the Data Flow task and populating the first source with the data from server1.db.dbo.table1 and the second source with server2.db.dbo.[table2] and then do the conditional check before inserting it into server2.db.dbo.[table2]. I am not sure how to do the conditional check though. Any help is appreciated.


  • Reading data file into an array C#

    - by Whitey
    I have an array like this: int[,] iMap = new int[iMapHeight, iMapWidth] { { 0, 1, 2, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 2, 2, 2, 2, 0, 0, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2}, { 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 0, 0, 1, 1, 1, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0}, { 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, { 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0}, { 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, }; And I want the same thing coming from a data file. The file should be structured like this preferably: 0000000000000000000000000 0000000000000000000000000 0000000000000000000000000 0000000000000000000000000 0000000000000000000000000 0000000000000000000000000 0000000000000000000000000 0000000000000000000000000 0000000000000000000000000 0000000000000000000000000 0000000000000000000000000 0000000000000000000000000 0000000000000000000000000 0000000000000000000000000 Which will ultimately make an array like above with all values set to 0. What would be the best way to do this? Would I read one line from the file and then split each characters separately and transfer it to the new array? Or perhaps read a line, and then read each character and put that in the array? Thanks for any help.
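
    A hedged sketch of reading such a file back into the array (assuming every line holds exactly iMapWidth digit characters and there are iMapHeight lines; the file name is illustrative):

      string[] lines = File.ReadAllLines("map.txt");
      int[,] iMap = new int[iMapHeight, iMapWidth];

      for (int row = 0; row < iMapHeight; row++)
      {
          for (int col = 0; col < iMapWidth; col++)
          {
              // Each character is a single digit; subtracting '0' yields its numeric value.
              iMap[row, col] = lines[row][col] - '0';
          }
      }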


  • recursion resulting in extra unwanted data

    - by spacerace
    I'm writing a module to handle dice rolling. Given x die of y sides, I'm trying to come up with a list of all potential roll combinations. This code assumes 3 die, each with 3 sides labeled 1, 2, and 3. (I realize I'm using "magic numbers" but this is just an attempt to simplify and get the base code working.) int[] set = { 1, 1, 1 }; list = diceroll.recurse(0,0, list, set); ... public ArrayList<Integer> recurse(int index, int i, ArrayList<Integer> list, int[] set){ if(index < 3){ // System.out.print("\n(looping on "+index+")\n"); for(int k=1;k<=3;k++){ // System.out.print("setting i"+index+" to "+k+" "); set[index] = k; dump(set); recurse(index+1, i, list, set); } } return list; } (dump() is a simple method to just display the contents of list[]. The variable i is not used at the moment.) What I'm attempting to do is increment a list[index] by one, stepping through the entire length of the list and incrementing as I go. This is my "best attempt" code. Here is the output: Bold output is what I'm looking for. I can't figure out how to get rid of the rest. (This is assuming three dice, each with 3 sides. Using recursion so I can scale it up to any x dice with y sides.) [1][1][1] [1][1][1] [1][1][1] [1][1][2] [1][1][3] [1][2][3] [1][2][1] [1][2][2] [1][2][3] [1][3][3] [1][3][1] [1][3][2] [1][3][3] [2][3][3] [2][1][3] [2][1][1] [2][1][2] [2][1][3] [2][2][3] [2][2][1] [2][2][2] [2][2][3] [2][3][3] [2][3][1] [2][3][2] [2][3][3] [3][3][3] [3][1][3] [3][1][1] [3][1][2] [3][1][3] [3][2][3] [3][2][1] [3][2][2] [3][2][3] [3][3][3] [3][3][1] [3][3][2] [3][3][3] I apologize for the formatting, best I could come up with. Any help would be greatly appreciated. (This method was actually stemmed to use the data for something quite trivial, but has turned into a personal challenge. :) edit: If there is another approach to solving this problem I'd be all ears, but I'd also like to solve my current problem and successfully use recursion for something useful.
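
    A hedged way to read the extra output: dump(set) runs inside the loop at every depth, so partially updated arrays are printed as well; recording the combination only when the recursion reaches the last index yields exactly the 27 bold rolls. A sketch with a hypothetical results list, not the original method:

      public void recurse(int index, int[] set, List<int[]> results) {
          if (index == set.length) {
              results.add(set.clone()); // one complete combination, e.g. [1][1][1] ... [3][3][3]
              return;
          }
          for (int k = 1; k <= 3; k++) {
              set[index] = k;
              recurse(index + 1, set, results);
          }
      }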


  • Registration form validation not validating

    - by jgray
    I am a noob when it comes to web development. I am trying to validate a registration form and to me it looks right but it will not validate.. This is what i have so far and i am validating through a repository or database. Any help would be greatly appreciated. thanks <?php session_start(); $title = "User Registration"; $keywords = "Name, contact, phone, e-mail, registration"; $description = "user registration becoming a member."; require "partials/_html_header.php"; //require "partials/_header.php"; require "partials/_menu.php"; require "DataRepository.php"; // if all validation passed save user $db = new DataRepository(); // form validation goes here $first_nameErr = $emailErr = $passwordErr = $passwordConfirmErr = ""; $first_name = $last_name = $email = $password = $passwordConfirm = ""; if(isset($_POST['submit'])) { $valid = TRUE; // check if all fields are valid { if ($_SERVER["REQUEST_METHOD"] == "POST") { if (empty($_POST["first_name"])) {$first_nameErr = "Name is required";} else { // $first_name = test_input($_POST["first_name"]); // check if name only contains letters and whitespace if (!preg_match("/^[a-zA-Z ]*$/",$first_name)) { $first_nameErr = "Only letters and white space allowed"; } } if (empty($_POST["email"])) {$emailErr = "Email is required";} else { // $email = test_input($_POST["email"]); // check if e-mail address syntax is valid if (!preg_match("/([\w\-]+\@[\w\-]+\.[\w\-]+)/",$email)) { $emailErr = "Invalid email format"; } } if (!preg_match("/(......)/",$password)) { $passwordErr = "Subject must contain THREE or more characters!"; } if ($_POST['password']!= $_POST['passwordConfirm']) { echo("Oops! Password did not match! Try again. "); } function test_input($data) { $data = trim($data); $data = stripslashes($data); $data = htmlspecialchars($data); return $data; } } } if(!$db->isEmailUnique($_POST['email'])) { $valid = FALSE; //display errors in the correct places } // if still valid save the user if($valid) { $new_user = array( 'first_name' => $_POST['first_name'], 'last_name' => $_POST['last_name'], 'email' => $_POST['email'], 'password' => $_POST['password'] ); $results = $db->saveUser($new_user); if($results == TRUE) { header("Location: login.php"); } else { echo "WTF!"; exit; } } } ?> <head> <style> .error {color: #FF0000;} </style> </head> <h1 class="center"> World Wide Web Creations' User Registration </h1> <p><span class="error"></span><p> <form method="POST" action="<?php echo htmlspecialchars($_SERVER["PHP_SELF"]);?>" onsubmit="return validate_form()" > First Name: <input type="text" name="first_name" id="first_name" value="<?php echo $first_name;?>" /> <span class="error"> <?php echo $first_nameErr;?></span> <br /> <br /> Last Name(Optional): <input type="text" name="last_name" id="last_name" value="<?php echo $last_name;?>" /> <br /> <br /> E-mail: <input type="email" name="email" id="email" value="<?php echo $email;?>" /> <span class="error"> <?php echo $emailErr;?></span> <br /> <br /> Password: <input type="password" name="password" id="password" value="" /> <span class="error"> <?php echo $passwordErr;?></span> <br /> <br /> Confirmation Password: <input type="password" name="passwordConfirm" id="passwordConfirm" value="" /> <span class="error"> <?php echo $passwordConfirmErr;?></span> <br /> <br /> <br /> <br /> <input type="submit" name="submit" id="submit" value="Submit Data" /> <input type="reset" name="reset" id="reset" value="Reset Form" /> </form> </body> </html> <?php require "partials/_footer.php"; require "partials/_html_footer.php"; ?> 
class DataRepository { // version number private $version = "1.0.3"; // turn on and off debugging private static $debug = FALSE; // flag to (re)initialize db on each call private static $initialize_db = FALSE; // insert test data on initialization private static $load_default_data = TRUE; const DATAFILE = "203data.txt"; private $data = NULL; private $errors = array(); private $user_fields = array( 'id' => array('required' => 0), 'created_at' => array('required' => 0), 'updated_at' => array('required' => 0), 'first_name' => array('required' => 1), 'last_name' => array('required' => 0), 'email' => array('required' => 1), 'password' => array('required' => 1), 'level' => array('required' => 0, 'default' => 2), ); private $post_fields = array( 'id' => array('required' => 0), 'created_at' => array('required' => 0), 'updated_at' => array('required' => 0), 'user_id' => array('required' => 1), 'title' => array('required' => 1), 'message' => array('required' => 1), 'private' => array('required' => 0, 'default' => 0), ); private $default_user = array( 'id' => 1, 'created_at' => '2013-01-01 00:00:00', 'updated_at' => '2013-01-01 00:00:00', 'first_name' => 'Admin Joe', 'last_name' => 'Tester', 'email' => '[email protected]', 'password' => 'a94a8fe5ccb19ba61c4c0873d391e987982fbbd3', 'level' => 1, ); private $default_post = array( 'id' => 1, 'created_at' => '2013-01-01 00:00:00', 'updated_at' => '2013-01-01 00:00:00', 'user_id' => 1, 'title' => 'My First Post', 'message' => 'This is the message of the first post.', 'private' => 0, ); // constructor will load existing data into memory // if it does not exist it will create it and initialize if desired public function __construct() { // check if need to reset if(DataRepository::$initialize_db AND file_exists(DataRepository::DATAFILE)) { unlink(DataRepository::DATAFILE); } // if file doesn't exist, create the initial datafile if(!file_exists(DataRepository::DATAFILE)) { $this->log("Data file does not exist. Attempting to create it... (".__FUNCTION__.":".__LINE__.")"); // create initial file $this->data = array( 'users' => array( ), 'posts' => array() ); // load default data if needed if(DataRepository::$load_default_data) { $this->data['users'][1] = $this->default_user; $this->data['posts'][1] = $this->default_post; } $this->writeTheData(); } // load the data into memory for use $this->loadTheData(); } private function showErrors($break = TRUE, $type = NULL) { if(count($this->errors) > 0) { echo "<div style=\"color:red;font-weight: bold;font-size: 1.3em\":<h3>$type Errors</h3><ol>"; foreach($this->errors AS $error) { echo "<li>$error</li>"; } echo "</ol></div>"; if($break) { "</br></br></br>Exiting because of errors!"; exit; } } } private function writeTheData() { $this->log("Attempting to write the datafile: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")"); file_put_contents(DataRepository::DATAFILE, json_encode($this->data)); $this->log("Datafile written: ".DataRepository::DATAFILE." (line: ".__LINE__.")"); } private function loadTheData() { $this->log("Attempting to load the datafile: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")"); $this->data = json_decode(file_get_contents(DataRepository::DATAFILE), true); $this->log("Datafile loaded: ".DataRepository::DATAFILE." 
(".__FUNCTION__.":".__LINE__.")", $this->data); } private function validateFields(&$info, $fields, $pre_errors = NULL) { // merge in any pre_errors if($pre_errors != NULL) { $this->errors = array_merge($this->errors, $pre_errors); } // check all required fields foreach($fields AS $field => $reqs) { if(isset($reqs['required']) AND $reqs['required'] == 1) { if(!isset($info[$field]) OR strlen($info[$field]) == 0) { $this->errors[] = "$field is a REQUIRED field"; } } // set any default values if not present if(isset($reqs['default']) AND (!isset($info[$field]) OR $info[$field] == "")) { $info[$field] = $reqs['default']; } } $this->showErrors(); if(count($this->errors) == 0) { return TRUE; } else { return FALSE; } } private function validateUser(&$user_info) { // check if the email is already in use $this->log("About to check pre_errors: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")", $user_info); $pre_errors = NULL; if(isset($user_info['email'])) { if(!$this->isEmailUnique($user_info['email'])) { $pre_errors = array('The email: '.$user_info['email'].' is already used in our system'); } } $this->log("After pre_error check: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")", $pre_errors); return $this->validateFields($user_info, $this->user_fields, $pre_errors); } private function validatePost(&$post_info) { // check if the user_id in the post actually exists $this->log("About to check pre_errors: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")", $post_info); $pre_errors = NULL; if(isset($post_info['user_id'])) { if(!isset($this->data['users'][$post_info['user_id']])) { $pre_errors = array('The posts must belong to a valid user. (User '.$post_info['user_id'].' does not exist in the data'); } } $this->log("After pre_error check: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")", $pre_errors); return $this->validateFields($post_info, $this->post_fields, $pre_errors); } private function log($message, $data = NULL) { $style = "background-color: #F8F8F8; border: 1px solid #DDDDDD; border-radius: 3px; font-size: 13px; line-height: 19px; overflow: auto; padding: 6px 10px;"; if(DataRepository::$debug) { if($data != NULL) { $dump = "<div style=\"$style\"><pre>".json_encode($data, JSON_PRETTY_PRINT)."</pre></div>"; } else { $dump = NULL; } echo "<code><b>Debug:</b> $message</code>$dump<br />"; } } public function saveUser($user_info) { $this->log("Entering saveUser: (".__FUNCTION__.":".__LINE__.")", $user_info); $mydata = array(); $update = FALSE; // check for existing data if(isset($user_info['id']) AND $this->data['users'][$user_info['id']]) { $mydata = $this->data['users'][$user_info['id']]; $this->log("Loaded prior user: ".print_r($mydata, TRUE)." (".__FUNCTION__.":".__LINE__.")"); } // copy over existing values $this->log("Before copying over existing values: (".__FUNCTION__.":".__LINE__.")", $mydata); foreach($user_info AS $k => $v) { $mydata[$k] = $user_info[$k]; } $this->log("After copying over existing values: (".__FUNCTION__.":".__LINE__.")", $mydata); // check required fields if($this->validateUser($mydata)) { // hash password if new if(isset($mydata['password'])) { $mydata['password'] = sha1($mydata['password']); } // if no id, add the next available one if(!isset($mydata['id']) OR (int)$mydata['id'] < 1) { $this->log("No id set: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")"); if(count($this->data['users']) == 0) { $mydata['id'] = 1; $this->log("Setting id to 1: ".DataRepository::DATAFILE." 
(".__FUNCTION__.":".__LINE__.")"); } else { $mydata['id'] = max(array_keys($this->data['users']))+1; $this->log("Found max id and added 1 [".$mydata['id']."]: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")"); } } // set created date if null if(!isset($mydata['created_at'])) { $mydata['created_at'] = date ("Y-m-d H:i:s", time()); } // update modified time $mydata['modified_at'] = date ("Y-m-d H:i:s", time()); // copy into data and save $this->log("Before data save: (".__FUNCTION__.":".__LINE__.")", $this->data); $this->data['users'][$mydata['id']] = $mydata; $this->writeTheData(); } return TRUE; } public function getUserById($id) { if(isset($this->data['users'][$id])) { return $this->data['users'][$id]; } else { return array(); } } public function isEmailUnique($email) { // find the user that has the right username/password foreach($this->data['users'] AS $k => $v) { $this->log("Checking unique email: {$v['email']} == $email (".__FUNCTION__.":".__LINE__.")", NULL); if($v['email'] == $email) { $this->log("FOUND NOT unique email: {$v['email']} == $email (".__FUNCTION__.":".__LINE__.")", NULL); return FALSE; break; } } $this->log("Email IS unique: $email (".__FUNCTION__.":".__LINE__.")", NULL); return TRUE; } public function login($username, $password) { // hash password for validation $password = sha1($password); $this->log("Attempting to login with $username / $password: (".__FUNCTION__.":".__LINE__.")", NULL); $user = NULL; // find the user that has the right username/password foreach($this->data['users'] AS $k => $v) { if($v['email'] == $username AND $v['password'] == $password) { $user = $v; break; } } $this->log("Exiting login: (".__FUNCTION__.":".__LINE__.")", $user); return $user; } public function savePost($post_info) { $this->log("Entering savePost: (".__FUNCTION__.":".__LINE__.")", $post_info); $mydata = array(); // check for existing data if(isset($post_info['id']) AND $this->data['posts'][$post_info['id']]) { $mydata = $this->data['posts'][$post_info['id']]; $this->log("Loaded prior posts: ".print_r($mydata, TRUE)." (".__FUNCTION__.":".__LINE__.")"); } $this->log("Before copying over existing values: (".__FUNCTION__.":".__LINE__.")", $mydata); foreach($post_info AS $k => $v) { $mydata[$k] = $post_info[$k]; } $this->log("After copying over existing values: (".__FUNCTION__.":".__LINE__.")", $mydata); // check required fields if($this->validatePost($mydata)) { // if no id, add the next available one if(!isset($mydata['id']) OR (int)$mydata['id'] < 1) { $this->log("No id set: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")"); if(count($this->data['posts']) == 0) { $mydata['id'] = 1; $this->log("Setting id to 1: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")"); } else { $mydata['id'] = max(array_keys($this->data['posts']))+1; $this->log("Found max id and added 1 [".$mydata['id']."]: ".DataRepository::DATAFILE." 
(".__FUNCTION__.":".__LINE__.")"); } } // set created date if null if(!isset($mydata['created_at'])) { $mydata['created_at'] = date ("Y-m-d H:i:s", time()); } // update modified time $mydata['modified_at'] = date ("Y-m-d H:i:s", time()); // copy into data and save $this->data['posts'][$mydata['id']] = $mydata; $this->log("Before data save: (".__FUNCTION__.":".__LINE__.")", $this->data); $this->writeTheData(); } return TRUE; } public function getAllPosts() { return $this->loadPostsUsers($this->data['posts']); } public function loadPostsUsers($posts) { foreach($posts AS $id => $post) { $posts[$id]['user'] = $this->getUserById($post['user_id']); } return $posts; } public function dump($line_number, $temp = 'NO') { // if(DataRepository::$debug) { if($temp == 'NO') { $temp = $this->data; } echo "<pre>Dumping from line: $line_number\n"; echo json_encode($temp, JSON_PRETTY_PRINT); echo "</pre>"; } } } /* * Change Log * * 1.0.0 * - first version * 1.0.1 * - Added isEmailUnique() function for form validation and precheck on user save * 1.0.2 * - Fixed getAllPosts() to include the post's user info * - Added loadPostsUsers() to load one or more posts with their user info * 1.0.3 * - Added autoload to always add admin Joe. */


  • Large Data Table with first column fixed

    - by bhavya_w
    I have the structure shown in the fiddle http://jsfiddle.net/5LN7U/. <section class="container"> <section class="field"> <ul> <li> Question 1 </li> <li> question 2 </li> <li> question 3 </li> <li> question 4 </li> <li> question 5 </li> <li> question 6 </li> <li> question 7 </li> </ul> </section> <section class="datawrap"> <section class="datawrapinner"> <ul> <li><b>Answer 1 :</b></li> <li><b>Answer 2 :</b></li> <li><b>Answer 3 :</b></li> <li><b>Answer 4 :</b></li> <li><b>Answer 5 :</b></li> <li><b>Answer 6 :</b></li> <li><b>Answer 7 :</b></li> </ul> </section> </section> </section> Basically it's a table structure made using divs. The first column contains a long list of questions and the second column contains answers/multiple answers which can be quite big (there has to be horizontal scrolling in the second column). The problem I am facing is that when I scroll downwards, the second column, which has the horizontal scroll bar, also scrolls downward. I want the horizontal scrollbar to stay fixed there no matter how much I scroll vertically. Much like Google Spreadsheets, where the first column stays fixed and there's horizontal scrolling on the rest of the columns with overall vertical scrolling for the whole of the data. I cannot use position: fixed on the second column. P.S.: please no lectures on using divs for making a table structure; I have my own reasons, and it's kind of urgent. Thanks in advance.

    Read the article

  • Getting specific data from database

    - by ifsession
    I have a table called Categorie with a few columns, and I'm trying to get only a few of them out of my database. So I've tried this: $sql = 'SELECT uppercat AS id, COUNT(uppercat) AS uppercat FROM categorie GROUP BY uppercat;'; $d = Yii::app()->db->createCommand($sql)->query(); But I find the output strange. I was trying to do an array_shift, but I get an error that this isn't an array. When I do a var_dump on $d: object(CDbDataReader)[38] private '_statement' => object(PDOStatement)[37] public 'queryString' => string 'SELECT uppercat AS id, COUNT(uppercat) AS uppercat FROM categorie GROUP BY uppercat;' (length=100) private '_closed' => boolean false private '_row' => null private '_index' => int -1 private '_e' (CComponent) => null private '_m' (CComponent) => null OK, then I did a foreach on $d: array 'id' => string '0' (length=1) 'uppercat' => string '6' (length=1) array 'id' => string '3' (length=1) 'uppercat' => string '2' (length=1) array 'id' => string '6' (length=1) 'uppercat' => string '1' (length=1) array 'id' => string '7' (length=1) 'uppercat' => string '2' (length=1) array 'id' => string '9' (length=1) 'uppercat' => string '2' (length=1) So why do I get the message that $d isn't an array when iterating it yields arrays? Is there any other way to get some specific data out of my database so that I can then do an array_shift on it? I've also tried doing this with findAllBySql, but then I can't reach the attribute for COUNT(uppercat), which is not in my model. I guess I'd have to add it to my model, but I'd rather not because I only need it once.
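    A minimal sketch of the usual fix in Yii 1.x (query and column names taken from the question): query() returns a CDbDataReader, a forward-only cursor, which is why var_dump shows an object and array_shift refuses it, whereas queryAll() materializes the rows into a plain array that the array functions can work on.
    $sql = 'SELECT uppercat AS id, COUNT(uppercat) AS uppercat FROM categorie GROUP BY uppercat';
    $rows = Yii::app()->db->createCommand($sql)->queryAll();  // array of associative arrays
    $first = array_shift($rows);                              // e.g. array('id' => '0', 'uppercat' => '6')
    echo $first['uppercat'];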

    Read the article

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami A typical data warehouse contains Star and/or Snowflake schema, made up of Dimensions and Facts. The facts store various numerical information including amounts. Example; Order Amount, Invoice Amount etc. With the true global nature of business now-a-days, the end-users want to view the reports in their own currency or in global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts in converted rates either by pre-storing or by doing on-the-fly conversions while displaying the reports to the users. Source Systems OBIA caters to various source systems like EBS, PSFT, Sebl, JDE, Fusion etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting to the OLTP users. For example; EBS stores conversion rates between currencies which can be classified by conversion rates, like Corporate rate, Spot rate, Period rate etc. Siebel stores exchange rates by conversion rates like Daily. EBS/Fusion stores the conversion rates for each day, where as PSFT/Siebel store for a range of days. PSFT has Rate Multiplication Factor and Rate Division Factor and we need to calculate the Rate based on them, where as other Source systems store the Currency Exchange Rate directly. OBIA Design The data consolidation from various disparate source systems, poses the challenge to conform various currencies, rate types, exchange rates etc., and designing the best way to present the amounts to the users without affecting the performance. When consolidating the data for reporting in OBIA, we have designed the mechanisms in the Common Dimension, to allow users to report based on their required currencies. OBIA Facts store amounts in various currencies: Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies. Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company. Global Currencies: OBIA provides five Global Currencies. Three are used across all modules. The last two are for CRM only. A Global currency is very useful when creating reports where the data is viewed enterprise-wide. Example; a US based multinational would want to see the reports in USD. The company will choose USD as one of the global currencies. OBIA allows users to define up-to five global currencies during the initial implementation. The term Currency Preference is used to designate the set of values: Document Currency, Local Currency, Global Currency 1, Global Currency 2, Global Currency 3; which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5 which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics. When choosing Local Currency for Currency preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt. So it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Toggling Currency Preferences in the Dashboard. Design Logic When extracting the fact data, the OOTB mappings extract and load the document amount, and the local amount in target tables. It also loads the exchange rates required to convert the document amount into the corresponding global amounts. 
If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the Local currency code, and the Local exchange rate. The Load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates. The lookup of exchange rates is done via the Exchange Rate Dimension provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are used to provide the lookups and conversions between currencies. The data is loaded from the source system’s Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. On the other hand, W_GLOBAL_EXCH_RATE_G stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide and use one or both tables to do the conversion. Currency design in OBIA also taps into the MLS and Domain architecture, thus allowing the users to map the currencies to a universal Domain during the implementation time. This is especially important for companies deploying and using OBIA with multiple source adapters. Some Gotchas to Look for It is necessary to think through the currencies during the initial implementation. 1) Identify various types of currencies that are used by your business. Understand what will be your Local (or Base) and Documentation currency. Identify various global currencies that your users will want to look at the reports. This will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, but may cause Full data loads and hence lost time. 2) If the user has a multi source system make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager do have the corresponding source specific counterparts. In other words, make sure for every DW specific value chosen for Currency Code or Rate Type, there is a source Domain mapping already done. Technical Section This section will briefly mention the technical scenarios employed in the OBIA adaptors to extract data from each source system. In OBIA, we have two main tables which store the Currency Rate information as explained in previous sections. W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are the two tables. W_EXCH_RATE_G stores all the Currency Conversions present in the source system. It captures data for a Date Range. W_GLOBAL_EXCH_RATE_G has Global Currency Conversions stored at a Daily level. However the challenge here is to store all the 5 Global Currency Exchange Rates in a single record for each From Currency. Let’s voyage further into the Source System Extraction logic for each of these tables and understand the flow briefly. EBS: In EBS, we have Currency Data stored in GL_DAILY_RATES table. As the name indicates GL_DAILY_RATES EBS table has data at a daily level. However in our warehouse we store the data with a Date Range and insert a new range record only when the Exchange Rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps that we employ in this process. (Incremental Flow only) – Cleanup the data in W_EXCH_RATE_G. Delete the records which have Start Date > minimum conversion date Update the End Date of the existing records. 
Compress the daily data from GL_DAILY_RATES table into Range Records. Incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter. Generate Previous Rate, Previous Date and Next Date for each of the Daily record from the OLTP. Filter out the records which have Conversion Rate same as Previous Rates or if the Conversion Date lies within a single day range. Mark the records as ‘Keep’ and ‘Filter’ and also get the final End Date for the single Range record (Unique Combination of From Date, To Date, Rate and Conversion Date). Filter the records marked as ‘Filter’ in the INFA map. The above steps will load W_EXCH_RATE_GS. Step 0 updates/deletes W_EXCH_RATE_G directly. SIL map will then insert/update the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES to Range records in W_EXCH_RATE_G. We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G. This is a table where we store data at a Daily Granular Level. However we need to pivot the data because the data present in multiple rows in source tables needs to be stored in different columns of the same row in DW. We use GROUP BY and CASE logic to achieve this. Fusion: Fusion has extraction logic very similar to EBS. The only difference is that the Cleanup logic that was mentioned in step 0 above does not use $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring all the Exchange Rates in Incremental as well and do the cleanup. The SIL then takes care of Insert/Updates accordingly. PeopleSoft:PeopleSoft does not have From Date and To Date explicitly in the Source tables. Let’s look at an example. Please note that this is achieved from PS1 onwards only. 1 Jan 2010 – USD to INR – 45 31 Jan 2010 – USD to INR – 46 PSFT stores records in above fashion. This means that Exchange Rate of 45 for USD to INR is applicable for 1 Jan 2010 to 30 Jan 2010. We need to store data in this fashion in DW. Also PSFT has Exchange Rate stored as RATE_MULT and RATE_DIV. We need to do a RATE_MULT/RATE_DIV to get the correct Exchange Rate. We generate From Date and To Date while extracting data from source and this has certain assumptions: If a record gets updated/inserted in the source, it will be extracted in incremental. Also if this updated/inserted record is between other dates, then we also extract the preceding and succeeding records (based on dates) of this record. This is required because we need to generate a range record and we have 3 records whose ranges have changed. Taking the same example as above, if there is a new record which gets inserted on 15 Jan 2010; the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan and 31 Jan to Next available date. Even though 1 Jan record and 31 Jan have not changed, we will still extract them because the range is affected. Similar logic is used for Global Exchange Rate Extraction. We create the Range records and get it into a Temporary table. Then we join to Day Dimension, create individual records and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date and Rate Type. Siebel: Siebel Facts are dependent on Global Exchange Rates heavily and almost none of them really use individual Exchange Rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from PS1 release onwards. As of January 2002, the Euro Triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges. 
However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Due to this, there are multiple extraction flows in SEBL ie. EUR to EMU, EUR to NonEMU, EUR to DMC and so on. We load W_EXCH_RATE_G through multiple flows with these data. This has been kept same as previous versions of OBIA. W_GLOBAL_EXCH_RATE_G being a new table does not have such needs. However SEBL does not have From Date and To Date columns in the Source tables similar to PSFT. We use similar extraction logic as explained in PSFT section for SEBL as well. What if all 5 Global Currencies configured are same? As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves Pivoting data from multiple rows into a single row with 5 Global Exchange Rates in 5 columns. As mentioned in previous sections, we use CASE and GROUP BY functions to achieve this. This approach poses a unique problem when all the 5 Global Currencies Chosen are same. For example – If the user configures all 5 Global Currencies as ‘USD’ then the extract logic will not be able to generate a record for From Currency=USD. This is because, not all Source Systems will have a USD->USD conversion record. We have _Generated mappings to take care of this case. We generate a record with Conversion Rate=1 for such cases. Reusable Lookups Before PS1, we had a Mapplet for Currency Conversions. In PS1, we only have reusable Lookups- LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in various Fact Mappings. Any user who would want to do a LKP on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G should and must use these Lookups. A direct join or Lookup on the tables might lead to wrong data being returned. Changing Currency preferences in the Dashboard: In the 796x series, all amount metrics in OBIA were showing the Global1 amount. The customer needed to change the metric definitions to show them in another Currency preference. Project Analytics started supporting currency preferences since 7.9.6 release though, and it published a Tech note for other module customers to add toggling between currency preferences to the solution. List of Currency Preferences Starting from 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the xml file: userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder, under :< home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml This file contains the list of currency preferences, like“Local Currency”, “Global Currency 1”,…which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic. 
In Static mode, all users will see the full list as in the user preference currencies file. In the Dynamic mode, the list shown in the currency prompt drop down is a result of a dynamic query specified in the same file. Customers can build some security into the rpd, so the list of currency preferences will be based on the user roles…BI Applications built a subject area: “Dynamic Currency Preference” to run this query, and give every user only the list of currency preferences required by his application roles. Adding Currency to an Amount Field When the user selects one of the items from the currency prompt, all the amounts in that page will show in the Currency corresponding to that preference. For example, if the user selects “Global Currency1” from the prompt, all data will be showing in Global Currency 1 as specified in the Configuration Manager. If the user select “Local Currency”, all amount fields will show in the Currency of the Business Unit selected in the BU filter of the same page. If there is no particular Business Unit selected in that filter, and the data selected by the query contains amounts in more than one currency (for example one BU has USD as a functional currency, the other has EUR as functional currency), then subtotals will not be available (cannot add USD and EUR amounts in one field), and depending on the set up (see next paragraph), the user may receive an error. There are two ways to add the Currency field to an amount metric: In the form of currency code, like USD, EUR…For this the user needs to add the field “Apps Common Currency Code” to the report. This field is in every subject area, usually under the table “Currency Tag” or “Currency Code”… In the form of currency symbol ($ for USD, € for EUR,…) For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option in Column Actions drop down list. Typically this column should be the “BI Common Currency Code” available in every subject area. Select Column Properties option in the Edit list of a metric. In the Data Format tab, select Custom as Treat Number As. Enter the following syntax under Custom Number Format: [$:currencyTagColumn=Subjectarea.table.column] Where Column is the “BI Common Currency Code” defined to take the currency code value based on the currency preference chosen by the user in the Currency preference prompt.
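    To make the GROUP BY and CASE pivot mentioned in the Technical Section above more concrete, here is a schematic sketch only, with simplified, assumed table and column names (this is not the shipped OBIA mapping): the daily rates arrive as one row per From/To currency pair and are folded into a single row per From currency, conversion date and rate type, with one column per configured global currency. The five target currency codes below are stand-ins for whatever was chosen in Configuration Manager.
    SELECT from_currency,
           conversion_date,
           rate_type,
           MAX(CASE WHEN to_currency = 'USD' THEN conversion_rate END) AS global1_exchange_rate,
           MAX(CASE WHEN to_currency = 'EUR' THEN conversion_rate END) AS global2_exchange_rate,
           MAX(CASE WHEN to_currency = 'GBP' THEN conversion_rate END) AS global3_exchange_rate,
           MAX(CASE WHEN to_currency = 'JPY' THEN conversion_rate END) AS global4_exchange_rate,
           MAX(CASE WHEN to_currency = 'AUD' THEN conversion_rate END) AS global5_exchange_rate
    FROM   stg_daily_rates            -- assumed staging of the source system's daily rates
    GROUP BY from_currency, conversion_date, rate_type;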

    Read the article

  • Introduction to Extended Events

    - by extended_events
    For those fighting with all the Extended Events terminology, let's step back and have a small overall introduction to Extended Events. This post will give you a simplified end-to-end view of some of the elements in Extended Events. Before we start, let’s review the first Extended Events objects that we are going to use: - Events: The SQL Server code is populated with event calls that, by default, are disabled. Adding events to a session enables those event calls. Once enabled, they will execute the set of functionality defined by the session. - Target: This is an Extended Events object that can be used to log event information. Also, it is important to understand the following Extended Events concept: - Session: A server object created by the user that defines functionality to be executed every time a set of events happens.
    It’s time to write a small “Hello World” using Extended Events. This will help understand the above terms. We will use: - Event sqlserver.error_reported: This event gets fired every time an error happens in the server. - Target package0.asynchronous_file_target: This target stores the event data on disk. - Session: We will create a session that sends all the error_reported events to the file target. Before we get started, a quick note: don’t run this script in a production environment. Even though we are just going to raise very low severity user errors, we don't want to introduce noise in our servers.
    -- TRIES TO ELIMINATE PREVIOUS SESSIONS BEGIN TRY DROP EVENT SESSION test_session ON SERVER END TRY BEGIN CATCH END CATCH GO -- CREATES THE SESSION CREATE EVENT SESSION test_session ON SERVER ADD EVENT sqlserver.error_reported ADD TARGET package0.asynchronous_file_target -- CONFIGURES THE FILE TARGET (set filename = 'c:\temp\data1.xel' , metadatafile = 'c:\temp\data1.xem') GO -- STARTS THE SESSION ALTER EVENT SESSION test_session ON SERVER STATE = START GO -- GENERATES AN ERROR RAISERROR (N'HELLO WORLD', -- Message text. 1, -- Severity, 1, 7, 3, N'abcde'); -- Other parameters GO -- STOPS LISTENING FOR THE EVENT ALTER EVENT SESSION test_session ON SERVER STATE = STOP GO -- REMOVES THE EVENT SESSION FROM THE SERVER DROP EVENT SESSION test_session ON SERVER GO -- READS THE EVENT DATA FROM THE FILE TARGET select CAST(event_data as XML) as event_data from sys.fn_xe_file_target_read_file ('c:\temp\data1*.xel','c:\temp\data1*.xem', null, null)
    This query will output the event data with our first hello world in the Extended Events format: <event name="error_reported" package="sqlserver" id="100" version="1" timestamp="2010-02-27T03:08:04.210Z"><data name="error"><value>50000</value><text /></data><data name="severity"><value>1</value><text /></data><data name="state"><value>1</value><text /></data><data name="user_defined"><value>true</value><text /></data><data name="message"><value>HELLO WORLD</value><text /></data></event> More on parsing event data in this post: Reading event data 101
    Now let's move on to the other three Extended Events objects: - Actions: These Extended Events objects get executed before events are published (stored in buffers to be transferred to the targets). Currently they are used to capture additional data (like the TSQL statement related to an event, the session, the user) or to generate a mini dump. - Predicates: Predicates are logical expressions that specify whether an event fires (e.g. only listen to errors with a severity greater than 16). These are composed of two Extended Events objects: o Predicate comparators: Define an operator for a pair of values. Examples: § Severity > 16 § error_message = ‘Hello World!!’ o Predicate sources: These are values that can also be used by the predicates. They are generic data that isn’t usually provided in the event (similar to the actions). § Sqlserver.username = ‘Tintin’ As logical expressions they can be combined using logical operators (and, or, not). Note: this pair always has to be first an event field or predicate source and then a value.
    Let’s do another small example. We will trigger several errors, but the session will only keep the ones that have severity = 2 and a message different from ‘filter’. To verify this we will use the action sql_text, which attaches the SQL statement to the event data: -- TRIES TO ELIMINATE PREVIOUS SESSIONS BEGIN TRY DROP EVENT SESSION test_session ON SERVER END TRY BEGIN CATCH END CATCH GO -- CREATES THE SESSION CREATE EVENT SESSION test_session ON SERVER ADD EVENT sqlserver.error_reported (ACTION (sqlserver.sql_text) WHERE severity = 2 and (not (message = 'filter'))) ADD TARGET package0.asynchronous_file_target -- CONFIGURES THE FILE TARGET (set filename = 'c:\temp\data2.xel' , metadatafile = 'c:\temp\data2.xem') GO -- STARTS THE SESSION ALTER EVENT SESSION test_session ON SERVER STATE = START GO -- THIS EVENT WILL BE FILTERED BECAUSE SEVERITY != 2 RAISERROR (N'PUBLISH', 1, 1, 7, 3, N'abcde'); GO -- THIS EVENT WILL BE FILTERED BECAUSE MESSAGE = 'FILTER' RAISERROR (N'FILTER', 2, 1, 7, 3, N'abcde'); GO -- THIS ERROR WILL BE PUBLISHED RAISERROR (N'PUBLISH', 2, 1, 7, 3, N'abcde'); GO -- STOPS LISTENING FOR THE EVENT ALTER EVENT SESSION test_session ON SERVER STATE = STOP GO -- REMOVES THE EVENT SESSION FROM THE SERVER DROP EVENT SESSION test_session ON SERVER GO -- READS THE EVENT DATA FROM THE FILE TARGET select CAST(event_data as XML) as event_data from sys.fn_xe_file_target_read_file ('c:\temp\data2*.xel','c:\temp\data2*.xem', null, null)
    This last statement will output one event with the following data: <event name="error_reported" package="sqlserver" id="100" version="1" timestamp="2010-03-05T23:15:05.481Z"> <data name="error"> <value>50000</value> <text /> </data> <data name="severity"> <value>2</value> <text /> </data> <data name="state"> <value>1</value> <text /> </data> <data name="user_defined"> <value>true</value> <text /> </data> <data name="message"> <value>PUBLISH</value> <text /> </data> <action name="sql_text" package="sqlserver"> <value>-- THIS ERROR WILL BE PUBLISHED RAISERROR (N'PUBLISH', 2, 1, 7, 3, N'abcde'); </value> <text /> </action> </event> If you see more events, check whether you have deleted previous event files. If so, please run -- Deletes previous event files EXEC SP_CONFIGURE GO EXEC SP_CONFIGURE 'xp_cmdshell', 1 GO RECONFIGURE GO XP_CMDSHELL 'del c:\temp\data*.xe*' GO or delete them manually. More Info on Events: Extended Event Events More Info on Targets: Extended Event Targets More Info on Sessions: Extended Event Sessions More Info on Actions: Extended Event Actions More Info on Predicates: Extended Event Predicates
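    Since both examples end with the raw XML, here is a small follow-on sketch (an addition for illustration, not part of the original post) showing one way to shred that event_data with XQuery instead of reading it by eye; the output column names are assumptions, and the paths simply mirror the event shown above.
    -- Sketch: shred the file-target XML into columns
    SELECT xed.event_data.value('(event/@timestamp)[1]',                     'varchar(30)')   AS event_time,
           xed.event_data.value('(event/data[@name="severity"]/value)[1]',   'int')           AS severity,
           xed.event_data.value('(event/data[@name="message"]/value)[1]',    'nvarchar(max)') AS message,
           xed.event_data.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text
    FROM   sys.fn_xe_file_target_read_file('c:\temp\data2*.xel', 'c:\temp\data2*.xem', null, null) AS t
    CROSS APPLY (SELECT CAST(t.event_data AS XML) AS event_data) AS xed;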

    Read the article

< Previous Page | 358 359 360 361 362 363 364 365 366 367 368 369  | Next Page >