Search Results

Search found 12077 results on 484 pages for 'node js'.


  • Oracle Database Appliance: Oracle Database in a 1Box appliance (Japanese-language post)

    - by Yusuke.Yamamoto
    The original Japanese text of this post did not survive character encoding; only the following points are recoverable. Oracle Database Appliance packages Oracle Database 11gR2 in a single 1Box (4U) appliance, with deployment options that include RAC One Node and Oracle Real Application Clusters (RAC). Oracle Appliance Manager installs the full stack (Oracle Database, Oracle Grid Infrastructure, Oracle Enterprise Manager) in roughly seven steps. Licensing follows a Pay-As-You-Grow model: Oracle Database Enterprise Edition can be enabled on anywhere from 2 to 24 cores. The post links to further Oracle Database Appliance resources, including specifications, a 3D view, and Oracle Direct.

    Read the article

  • Drupal: Exposing a module's data to Views2 using its API

    - by Sepehr Lajevardi
    I'm forking the filefield_stats module to provide it with the ability of exposing data into the Views module via the API. The filefield_stats module db table schema is as follow: <?php function filefield_stats_schema() { $schema['filefield_stats'] = array( 'fields' => array( 'fid' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE, 'description' => 'Primary Key: the {files}.fid'), 'vid' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE, 'description' => 'Primary Key: the {node}.vid'), 'uid' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE, 'description' => 'The {users}.uid of the downloader'), 'timestamp' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE, 'description' => 'The timestamp of the download'), 'hostname' => array('type' => 'varchar', 'length' => 128, 'not null' => TRUE, 'default' => '', 'description' => 'The hostname downloading the file (usually IP)'), 'referer' => array('type' => 'text', 'not null' => FALSE, 'description' => 'Referer for the download'), ), 'indexes' => array('fid_vid' => array('fid', 'vid')), ); return $schema; } ?> Well, so I implemented the hook_views_api() in filefield_stats.module & added a filefield_stats.views.inc file in the module's root directory, here it is: <?php // $Id$ /** * @file * Provide the ability of exposing data to Views2, for filefield_stats module. */ function filefield_stats_views_data() { $data = array(); $data['filefield_stats']['table']['group'] = t('FilefieldStats'); // Referencing the {node_revisions} table. $data['filefield_stats']['table']['join'] = array( 'node_revisions' => array( 'left_field' => 'vid', 'field' => 'vid', ), /*'files' => array( 'left_field' => 'fid', 'field' => 'fid', ), 'users' => array( 'left_field' => 'uid', 'field' => 'uid', ),*/ ); // Introducing filefield_stats table fields to Views2. // vid: The node's revision ID which wrapped the downloaded file $data['filefield_stats']['vid'] = array( 'title' => t('Node revision ID'), 'help' => t('The node\'s revision ID which wrapped the downloaded file'), 'relationship' => array( 'base' => 'node_revisions', 'field' => 'vid', 'handler' => 'views_handler_relationship', 'label' => t('Node Revision Reference.'), ), ); // uid: The ID of the user who downloaded the file. $data['filefield_stats']['uid'] = array( 'title' => t('User ID'), 'help' => t('The ID of the user who downloaded the file.'), 'relationship' => array( 'base' => 'users', 'field' => 'uid', 'handler' => 'views_handler_relationship', 'label' => t('User Reference.'), ), ); // fid: The ID of the downloaded file. $data['filefield_stats']['fid'] = array( 'title' => t('File ID'), 'help' => t('The ID of the downloaded file.'), 'relationship' => array( 'base' => 'files', 'field' => 'fid', 'handler' => 'views_handler_relationship', 'label' => t('File Reference.'), ), ); // hostname: The hostname which the file has been downloaded from. $data['filefield_stats']['hostname'] = array( 'title' => t('The Hostname'), 'help' => t('The hostname which the file has been downloaded from.'), 'field' => array( 'handler' => 'views_handler_field', 'click sortable' => TRUE, ), 'sort' => array( 'handler' => 'views_handler_sort', ), 'filter' => array( 'handler' => 'views_handler_filter_string', ), 'argument' => array( 'handler' => 'views_handler_argument_string', ), ); // referer: The referer address which the file download link has been triggered from. 
$data['filefield_stats']['referer'] = array( 'title' => t('The Referer'), 'help' => t('The referer which the file download link has been triggered from.'), 'field' => array( 'handler' => 'views_handler_field', 'click sortable' => TRUE, ), 'sort' => array( 'handler' => 'views_handler_sort', ), 'filter' => array( 'handler' => 'views_handler_filter_string', ), 'argument' => array( 'handler' => 'views_handler_argument_string', ), ); // timestamp: The time of the download. $data['filefield_stats']['timestamp'] = array( 'title' => t('Download Time'), 'help' => t('The time of the download.'), 'field' => array( 'handler' => 'views_handler_field_date', 'click sortable' => TRUE, ), 'sort' => array( 'handler' => 'views_handler_sort_date', ), 'filter' => array( 'handler' => 'views_handler_filter_date', ), ); return $data; } // filefield_stats_views_data() ?> According to the Views2 documentations this should work as a minimum, I think. But it doesn't! Also there is no error of any kind, when I come through the views UI, there's nothing about filefield_stats data. Any idea?
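A frequent cause of exactly this symptom (no errors, yet the table never appears in the Views UI) is the hook_views_api() implementation rather than hook_views_data(): Views 2 only loads filefield_stats.views.inc if that hook reports API version 2 (and, when the file is not in the module root, its path), and it caches the result. The question does not show the hook_views_api() body, so the following is only a minimal sketch under the assumption of Drupal 6 with Views 2:

<?php
/**
 * Implementation of hook_views_api().
 *
 * Views 2 will only scan filefield_stats.views.inc when this hook
 * reports API version 2. The 'path' key is optional when the .views.inc
 * file sits in the module's root directory, as described above.
 */
function filefield_stats_views_api() {
  return array(
    'api' => 2,
    'path' => drupal_get_path('module', 'filefield_stats'),
  );
}
?>

After adjusting the hook, clear the Views cache (Administer » Site building » Views » Tools, or a full cache clear) so the new table definition is picked up.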

    Read the article

  • CCNet 1.6 Conditional Plugin Help Needed!

    - by Mike M
    Hi all, I cannot get the conditional plugin to work that has been added to CCNet as of version 1.6 - clicky. I am running the latest version of CCNet (1.6.7258.1) and have the following code in my ccnet.config: <project name="9iCompile"> <sourcecontrol type="svn"> <trunkUrl>http://bis-build:81/svn/Oracle/oas_forms/COPEN</trunkUrl> <workingDirectory>C:\OAS\COPEN</workingDirectory> <username>*</username> <password>*</password> <executable>C:\Program Files\VisualSVN\bin\svn.exe</executable> </sourcecontrol> <conditional> <conditions> <compareCondition> <value1>$[ProjectType]</value1> <value2>copen</value2> <evaluation>equal</evaluation> <ignoreCase>true</ignoreCase> </compareCondition> </conditions> <tasks> <nant> <executable>C:\Program Files\nant-0.85\bin\nant.exe</executable> <baseDirectory>C:\OAS</baseDirectory> <buildFile>Oracle9i_Automation_v2.build</buildFile> <targetList> <target>build</target> </targetList> </nant> </tasks> </conditional> <!-- more conditional statements would be here for different project types if I can get it to work --> <parameters> <selectParameter name="ProjectType"> <description>The type of project to operate on.</description> <allowedValues> <value name="COPEN">copen</value> <value name="BCS">bcs</value> <value name="FCDD">fcdd</value> </allowedValues> </parameters> <security type="defaultProjectSecurity" defaultRight="Deny"> <permissions> <rolePermission name="Developers" ref="Developers"/> <rolePermission name="Accepters" ref="Accepters"/> <rolePermission name="Releasers" ref="Releasers"/> <rolePermission name="Administrators" ref="Administrators"/> </permissions> </security> </project> The CCNet server crashes whenever I try to run this config though with the following output: [14:ERROR] Exception: Unused node detected: <conditional> <conditions> <compareCondition> <value1>$[ProjectType]</value1> <value2>copen</value2> <evaluation>equal</evaluation> <ignoreCase>true</ignoreCase> </compareCondition> </conditions> <tasks> <nant> <executable>C:\Program Files\nant-0.85\bin\nant.exe</executable> <baseDirectory>C:\OAS</baseDirectory> <buildFile>Oracle9i_Automation_v2.build</buildFile> <targetList> <target>build</target> </targetList> </nant> </tasks> </conditional> ---------- ThoughtWorks.CruiseControl.Core.Config.ConfigurationException: Unused node detected: <conditional> <conditions> <compareCondition> <value1>$[ProjectType]</value1> <value2>copen</value2> <evaluation>equal</evaluation> <ignoreCase>true</ignoreCase> </compareCondition> </conditions> <tasks> <nant> <executable>C:\Program Files\nant-0.85\bin\nant.exe</executable> <baseDirectory>C:\OAS</baseDirectory> <buildFile>Oracle9i_Automation_v2.build</buildFile> <targetList> <target>build</target> </targetList> </nant> </tasks> </conditional> at ThoughtWorks.CruiseControl.Core.Config.NetReflectorConfigurationReader.Defa­ultErrorProcesser.ProcessError(String message) at ThoughtWorks.CruiseControl.Core.Config.NetReflectorConfigurationReader.<>c_­_DisplayClass1.<Read>b__0(InvalidNodeEventArgs args) at Exortech.NetReflector.InvalidNodeEventHandler.Invoke(InvalidNodeEventArgsar­gs) at Exortech.NetReflector.NetReflectorTypeTable.OnInvalidNode(InvalidNodeEventA­rgs args) at Exortech.NetReflector.XmlTypeSerialiser.HandleUnusedNode(NetReflectorTypeTa­ble table, XmlNode orphan) at Exortech.NetReflector.XmlTypeSerialiser.ReadMembers(XmlNode node, Object instance, NetReflectorTypeTable table) at Exortech.NetReflector.XmlTypeSerialiser.Read(XmlNode node, NetReflectorTypeTable table) at 
Exortech.NetReflector.NetReflectorReader.Read(XmlNode node) at ThoughtWorks.CruiseControl.Core.Config.NetReflectorConfigurationReader.Read(XmlDocument document, IConfigurationErrorProcesser errorProcesser) at ThoughtWorks.CruiseControl.Core.Config.DefaultConfigurationFileLoader.Load(FileInfo configFile) at ThoughtWorks.CruiseControl.Core.Config.FileConfigurationService.Load() at ThoughtWorks.CruiseControl.Core.Config.FileWatcherConfigurationService.Load() at ThoughtWorks.CruiseControl.Core.Config.CachingConfigurationService.Load() at ThoughtWorks.CruiseControl.Core.CruiseServer.Restart() at ThoughtWorks.CruiseControl.Core.Config.ConfigurationUpdateHandler.Invoke() at ThoughtWorks.CruiseControl.Core.Config.FileWatcherConfigurationService.HandleConfigurationFileChanged(Object source, FileSystemEventArgs args) ---------- Can someone please help? I have no idea what I'm doing wrong here or whether this is a bug. I also posted to the ccnet-user group several days ago but have not received any response.
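One thing stands out (not verified against your exact setup, so treat the sketch below as a suggestion): in CCNet 1.6 the <conditional> element is a task, and NetReflector reports "Unused node detected" when it meets an element at a level where it is not expected. In the posted config <conditional> sits directly under <project> instead of inside the project's <tasks> block. Note also that the <selectParameter> element is never closed before </parameters>. Rearranged along those lines, with the source control, parameters and security blocks kept as in the question, the project would look roughly like this:

<project name="9iCompile">
  <sourcecontrol type="svn"> ... </sourcecontrol>
  <tasks>
    <!-- conditional is a task, so it must live inside <tasks> -->
    <conditional>
      <conditions>
        <compareCondition>
          <value1>$[ProjectType]</value1>
          <value2>copen</value2>
          <evaluation>equal</evaluation>
          <ignoreCase>true</ignoreCase>
        </compareCondition>
      </conditions>
      <tasks>
        <nant>
          <executable>C:\Program Files\nant-0.85\bin\nant.exe</executable>
          <baseDirectory>C:\OAS</baseDirectory>
          <buildFile>Oracle9i_Automation_v2.build</buildFile>
          <targetList>
            <target>build</target>
          </targetList>
        </nant>
      </tasks>
    </conditional>
  </tasks>
  <parameters>
    <selectParameter name="ProjectType"> ... </selectParameter>
  </parameters>
  <!-- security block unchanged -->
</project>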

    Read the article

  • C++ function not found during compilation

    - by forthewinwin
    For a homework assignment: I'm supposed to create randomized alphabetial keys, print them to a file, and then hash each of them into a hash table using the function "goodHash", found in my below code. When I try to run the below code, it says my "goodHash" "identifier isn't found". What's wrong with my code? #include <iostream> #include <vector> #include <cstdlib> #include "math.h" #include <fstream> #include <time.h> using namespace std; // "makeKey" function to create an alphabetical key // based on 8 randomized numbers 0 - 25. string makeKey() { int k; string key = ""; for (k = 0; k < 8; k++) { int keyNumber = (rand() % 25); if (keyNumber == 0) key.append("A"); if (keyNumber == 1) key.append("B"); if (keyNumber == 2) key.append("C"); if (keyNumber == 3) key.append("D"); if (keyNumber == 4) key.append("E"); if (keyNumber == 5) key.append("F"); if (keyNumber == 6) key.append("G"); if (keyNumber == 7) key.append("H"); if (keyNumber == 8) key.append("I"); if (keyNumber == 9) key.append("J"); if (keyNumber == 10) key.append("K"); if (keyNumber == 11) key.append("L"); if (keyNumber == 12) key.append("M"); if (keyNumber == 13) key.append("N"); if (keyNumber == 14) key.append("O"); if (keyNumber == 15) key.append("P"); if (keyNumber == 16) key.append("Q"); if (keyNumber == 17) key.append("R"); if (keyNumber == 18) key.append("S"); if (keyNumber == 19) key.append("T"); if (keyNumber == 20) key.append("U"); if (keyNumber == 21) key.append("V"); if (keyNumber == 22) key.append("W"); if (keyNumber == 23) key.append("X"); if (keyNumber == 24) key.append("Y"); if (keyNumber == 25) key.append("Z"); } return key; } // "makeFile" function to produce the desired text file. // Note this only works as intended if you include the ".txt" extension, // and that a file of the same name doesn't already exist. void makeFile(string fileName, int n) { ofstream ourFile; ourFile.open(fileName); int k; // For use in below loop to compare with n. int l; // For use in the loop inside the below loop. string keyToPassTogoodHash = ""; for (k = 1; k <= n; k++) { for (l = 0; l < 8; l++) { // For-loop to write to the file ONE key ourFile << makeKey()[l]; keyToPassTogoodHash += (makeKey()[l]); } ourFile << " " << k << "\n";// Writes two spaces and the data value goodHash(keyToPassTogoodHash); // I think this has to do with the problem makeKey(); // Call again to make a new key. } } // Primary function to create our desired file! void mainFunction(string fileName, int n) { makeKey(); makeFile(fileName, n); } // Hash Table for Part 2 struct Node { int key; string value; Node* next; }; const int hashTableSize = 10; Node* hashTable[hashTableSize]; // "goodHash" function for Part 2 void goodHash(string key) { int x = 0; int y; int keyConvertedToNumber = 0; // For-loop to produce a numeric value based on the alphabetic key, // which is then hashed into hashTable using the hash function // declared below the loop (hashFunction). 
for (y = 0; y < 8; y++) { if (key[y] == 'A' || 'B' || 'C') x = 0; if (key[y] == 'D' || 'E' || 'F') x = 1; if (key[y] == 'G' || 'H' || 'I') x = 2; if (key[y] == 'J' || 'K' || 'L') x = 3; if (key[y] == 'M' || 'N' || 'O') x = 4; if (key[y] == 'P' || 'Q' || 'R') x = 5; if (key[y] == 'S' || 'T') x = 6; if (key[y] == 'U' || 'V') x = 7; if (key[y] == 'W' || 'X') x = 8; if (key[y] == 'Y' || 'Z') x = 9; keyConvertedToNumber = x + keyConvertedToNumber; } int hashFunction = keyConvertedToNumber % hashTableSize; Node *temp; temp = new Node; temp->value = key; temp->next = hashTable[hashFunction]; hashTable[hashFunction] = temp; } // First two lines are for Part 1, to call the functions key to Part 1. int main() { srand ( time(NULL) ); // To make sure our randomization works. mainFunction("sandwich.txt", 5); // To test program cin.get(); return 0; } I realize my code is cumbersome in some sections, but I'm a noob at C++ and don't know much to do it better. I'm guessing another way I could do it is to AFTER writing the alphabetical keys to the file, read them from the file and hash each key as I do that, but I wouldn't know how to go about coding that.
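The compiler error itself comes from declaration order rather than from the hashing logic: makeFile() calls goodHash(), but goodHash() is not declared until further down the file, so at that point the name is unknown to the compiler. A minimal sketch of the fix is a forward declaration (moving the whole goodHash() definition above makeFile() works equally well), together with a note on a separate latent bug in the comparisons:

#include <string>
using std::string;

// Forward declaration: makeFile() may now call goodHash() even though the
// definition appears later in the file.
void goodHash(string key);

// Separate latent bug (it will not stop compilation): a test such as
//   if (key[y] == 'A' || 'B' || 'C') x = 0;
// is parsed as (key[y] == 'A') || 'B' || 'C', which is always true because
// 'B' and 'C' are non-zero, so every branch fires and x always ends up 9.
// Each letter needs its own comparison, for example:
//   if (key[y] == 'A' || key[y] == 'B' || key[y] == 'C') x = 0;

Note also that ofstream::open(std::string) only exists from C++11 onward; with an older compiler the call would need to be ourFile.open(fileName.c_str()).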

    Read the article

  • Depth-first search of a graph using linked lists

    - by programmerwannabe
    im using mac book and i cannot read the text file using this code. moreover, can you guys please add function(graph is connected?, and is this graph tree?) inputA.txt consist 1 2 1 6 1 5 2 3 2 6 3 4 3 6 4 5 4 6 5 6 #include <stdio.h> #include <memory.h> #include <stdlib.h> #define MAX 10 #define TRUE 1 #define FALSE 0 typedef struct Graph{ int vertex; struct Graph* link; } g_node; typedef struct graphType{ int x; int visited[MAX]; g_node* adjList_H[MAX]; } graphType; typedef struct stack{ int data; struct stack* link; } s_node; s_node* top; void push(int item){ s_node* n=(s_node*)malloc(sizeof(s_node)); n->data = item; n->link = top; top = n; } int pop(){ int item; s_node* n=top; if(top == NULL){ puts("\nstack is empty!\n"); return 0; } else { item = n-> data; top = n->link; free(n); return item; } } void createGraph(graphType* g){ int v; g->x = 1; for(v=1 ; v < MAX ; v++){ g -> visited[v] = FALSE; g -> adjList_H[v] = NULL; } } void insertVertex(graphType* g, int v){ if(((g->x)) > MAX){ puts("\n it has been overed the number of vertex\n"); return ; } g -> x++; } void insertEdge(graphType* g, int u, int v){ g_node* node; if(u >= g -> x || v >= g -> x){ puts("\n no vertex in the graph\n"); return ; } node = (g_node*)malloc(sizeof(g_node)); node -> vertex = v; node -> link = g -> adjList_H[u]; g-> adjList_H[u] = node; } void print_adjList(graphType* g){ int i; g_node *p; for(i=1 ; i<g -> x ; i++){ printf("\n\t\t vertex %d adjacency list ", i); p = g -> adjList_H[i]; while(p){ printf("-> %d", p-> vertex); p = p-> link; } } } void DFS_adjList(graphType* g, int v) { g_node* w; top = NULL; push(v); g->visited[v] = TRUE; printf(" %d", v); while(top != NULL){ w=g->adjList_H[v]; while(w){ if (!g->visited[w->vertex]){ push(w->vertex); g->visited[w->vertex] = TRUE; printf(" %d", w->vertex); v = w->vertex; w=g->adjList_H[v]; } else w= w->link; } v = pop(); } } int main (int argc, const char * argv[]) { FILE *fp; char mychar; char arr[][2]={0, }; int j, k; int i; graphType *G9; G9 = (graphType*)malloc(sizeof(graphType)); createGraph(G9); for(i=1; i<7 ; i++) insertVertex(G9, i); fp = fopen("inputD.txt", "r"); for(j = 0 ; j< 10 ; j++){ for(k = 0 ; k < 2 ; k++){ mychar = fgetc(fp); if(mychar = EOF){ j=10; break; } else if(mychar == ' ') continue; else if(mychar <= '9' || mychar >= '1'){ arr[j][k] = mychar; printf("%d%d", arr[i][k]); } } } insertEdge(G9, 1, 2); insertEdge(G9, 1, 6); insertEdge(G9, 1, 5); insertEdge(G9, 2, 3); insertEdge(G9, 2, 6); insertEdge(G9, 3, 4); insertEdge(G9, 3, 6); insertEdge(G9, 4, 5); insertEdge(G9, 4, 6); insertEdge(G9, 5, 6); insertEdge(G9, 6, 5); insertEdge(G9, 6, 4); insertEdge(G9, 5, 4); insertEdge(G9, 6, 3); insertEdge(G9, 4, 3); insertEdge(G9, 6, 2); insertEdge(G9, 3, 2); insertEdge(G9, 5, 1); insertEdge(G9, 6, 1); insertEdge(G9, 2, 1); printf("\n graph adjacency list "); print_adjList(G9); printf("\n \n//////////////////////////////////////////////\n\n depth fist search >> "); DFS_adjList(G9, 1); return 0; }
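Two things stand out in the file-reading loop (the snippet below is only a sketch that plugs into the question's graphType and insertEdge(), not a drop-in rewrite). First, if(mychar = EOF) assigns EOF to mychar instead of comparing with ==, so the condition is always true and the loop bails out before reading anything. Second, reading digits one character at a time makes it awkward to rebuild the vertex pairs; since the input is just whitespace-separated integer pairs (1 2, 1 6, and so on), fscanf can read them directly. Note also that the code opens "inputD.txt" while the question mentions inputA.txt, and that a relative filename is resolved against the current working directory, which is often not the directory you expect when running from an IDE on a Mac.

#include <stdio.h>

/* Sketch: read whitespace-separated vertex pairs (e.g. "1 2") and add the
 * corresponding edges. Assumes the graphType and insertEdge() from the
 * question; returns the number of pairs read, or -1 if the file cannot
 * be opened. */
static int load_edges(const char *path, graphType *g)
{
    FILE *fp = fopen(path, "r");
    int u, v, count = 0;

    if (fp == NULL) {
        perror(path);            /* says why the open failed */
        return -1;
    }
    while (fscanf(fp, "%d %d", &u, &v) == 2) {
        insertEdge(g, u, v);     /* edge u -> v */
        insertEdge(g, v, u);     /* and v -> u for an undirected graph */
        count++;
    }
    fclose(fp);
    return count;
}

With the edges loaded this way, the long list of hard-coded insertEdge() calls in main() becomes unnecessary.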

    Read the article

  • Developing web apps using ASP.NET MVC 3, Razor and EF Code First - Part 2

    - by shiju
    In my previous post Developing web apps using ASP.NET MVC 3, Razor and EF Code First - Part 1, we have discussed on how to work with ASP.NET MVC 3 and EF Code First for developing web apps. We have created generic repository and unit of work with EF Code First for our ASP.NET MVC 3 application and did basic CRUD operations against a simple domain entity. In this post, I will demonstrate on working with domain entity with deep object graph, Service Layer and View Models and will also complete the rest of the demo application. In the previous post, we have done CRUD operations against Category entity and this post will be focus on Expense entity those have an association with Category entity. You can download the source code from http://efmvc.codeplex.com . The following frameworks will be used for this step by step tutorial.    1. ASP.NET MVC 3 RTM    2. EF Code First CTP 5    3. Unity 2.0 Domain Model Category Entity public class Category   {       public int CategoryId { get; set; }       [Required(ErrorMessage = "Name Required")]       [StringLength(25, ErrorMessage = "Must be less than 25 characters")]       public string Name { get; set;}       public string Description { get; set; }       public virtual ICollection<Expense> Expenses { get; set; }   } Expense Entity public class Expense     {                public int ExpenseId { get; set; }                public string  Transaction { get; set; }         public DateTime Date { get; set; }         public double Amount { get; set; }         public int CategoryId { get; set; }         public virtual Category Category { get; set; }     } We have two domain entities - Category and Expense. A single category contains a list of expense transactions and every expense transaction should have a Category. Repository class for Expense Transaction Let’s create repository class for handling CRUD operations for Expense entity public class ExpenseRepository : RepositoryBase<Expense>, IExpenseRepository     {     public ExpenseRepository(IDatabaseFactory databaseFactory)         : base(databaseFactory)         {         }                } public interface IExpenseRepository : IRepository<Expense> { } Service Layer If you are new to Service Layer, checkout Martin Fowler's article Service Layer . According to Martin Fowler, Service Layer defines an application's boundary and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic, controlling transactions and coordinating responses in the implementation of its operations. Controller classes should be lightweight and do not put much of business logic onto it. We can use the service layer as the business logic layer and can encapsulate the rules of the application. 
Let’s create a Service class for coordinates the transaction for Expense public interface IExpenseService {     IEnumerable<Expense> GetExpenses(DateTime startDate, DateTime ednDate);     Expense GetExpense(int id);             void CreateExpense(Expense expense);     void DeleteExpense(int id);     void SaveExpense(); } public class ExpenseService : IExpenseService {     private readonly IExpenseRepository expenseRepository;            private readonly IUnitOfWork unitOfWork;     public ExpenseService(IExpenseRepository expenseRepository, IUnitOfWork unitOfWork)     {                  this.expenseRepository = expenseRepository;         this.unitOfWork = unitOfWork;     }     public IEnumerable<Expense> GetExpenses(DateTime startDate, DateTime endDate)     {         var expenses = expenseRepository.GetMany(exp => exp.Date >= startDate && exp.Date <= endDate);         return expenses;     }     public void CreateExpense(Expense expense)     {         expenseRepository.Add(expense);         unitOfWork.Commit();     }     public Expense GetExpense(int id)     {         var expense = expenseRepository.GetById(id);         return expense;     }     public void DeleteExpense(int id)     {         var expense = expenseRepository.GetById(id);         expenseRepository.Delete(expense);         unitOfWork.Commit();     }     public void SaveExpense()     {         unitOfWork.Commit();     } }   View Model for Expense Transactions In real world ASP.NET MVC applications, we need to design model objects especially for our views. Our domain objects are mainly designed for the needs for domain model and it is representing the domain of our applications. On the other hand, View Model objects are designed for our needs for views. We have an Expense domain entity that has an association with Category. While we are creating a new Expense, we have to specify that in which Category belongs with the new Expense transaction. The user interface for Expense transaction will have form fields for representing the Expense entity and a CategoryId for representing the Category. So let's create view model for representing the need for Expense transactions. public class ExpenseViewModel {     public int ExpenseId { get; set; }       [Required(ErrorMessage = "Category Required")]     public int CategoryId { get; set; }       [Required(ErrorMessage = "Transaction Required")]     public string Transaction { get; set; }       [Required(ErrorMessage = "Date Required")]     public DateTime Date { get; set; }       [Required(ErrorMessage = "Amount Required")]     public double Amount { get; set; }       public IEnumerable<SelectListItem> Category { get; set; } } The ExpenseViewModel is designed for the purpose of View template and contains the all validation rules. It has properties for mapping values to Expense entity and a property Category for binding values to a drop-down for list values of Category. 
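The action methods in the next section use expenseService and categoryService fields without showing where they come from; with Unity 2.0 in the stack they would normally arrive through constructor injection. The post does not show the controller's constructor, so the following is only a sketch of what it might look like (ICategoryService is assumed here by analogy with IExpenseService, since the Create action calls categoryService.GetCategories()):

using System.Web.Mvc;

public class ExpenseController : Controller
{
    private readonly IExpenseService expenseService;
    private readonly ICategoryService categoryService;

    // Unity supplies both services when it builds the controller, so the
    // action methods stay free of construction and wiring logic.
    public ExpenseController(IExpenseService expenseService, ICategoryService categoryService)
    {
        this.expenseService = expenseService;
        this.categoryService = categoryService;
    }

    // Create and Index action methods follow (shown below in the post).
}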
Create Expense transaction Let’s create action methods in the ExpenseController for creating expense transactions public ActionResult Create() {     var expenseModel = new ExpenseViewModel();     var categories = categoryService.GetCategories();     expenseModel.Category = categories.ToSelectListItems(-1);     expenseModel.Date = DateTime.Today;     return View(expenseModel); } [HttpPost] public ActionResult Create(ExpenseViewModel expenseViewModel) {                      if (!ModelState.IsValid)         {             var categories = categoryService.GetCategories();             expenseViewModel.Category = categories.ToSelectListItems(expenseViewModel.CategoryId);             return View("Save", expenseViewModel);         }         Expense expense=new Expense();         ModelCopier.CopyModel(expenseViewModel,expense);         expenseService.CreateExpense(expense);         return RedirectToAction("Index");              } In the Create action method for HttpGet request, we have created an instance of our View Model ExpenseViewModel with Category information for the drop-down list and passing the Model object to View template. The extension method ToSelectListItems is shown below   public static IEnumerable<SelectListItem> ToSelectListItems(         this IEnumerable<Category> categories, int  selectedId) {     return           categories.OrderBy(category => category.Name)                 .Select(category =>                     new SelectListItem                     {                         Selected = (category.CategoryId == selectedId),                         Text = category.Name,                         Value = category.CategoryId.ToString()                     }); } In the Create action method for HttpPost, our view model object ExpenseViewModel will map with posted form input values. We need to create an instance of Expense for the persistence purpose. So we need to copy values from ExpenseViewModel object to Expense object. ASP.NET MVC futures assembly provides a static class ModelCopier that can use for copying values between Model objects. ModelCopier class has two static methods - CopyCollection and CopyModel.CopyCollection method will copy values between two collection objects and CopyModel will copy values between two model objects. We have used CopyModel method of ModelCopier class for copying values from expenseViewModel object to expense object. Finally we did a call to CreateExpense method of ExpenseService class for persisting new expense transaction. List Expense Transactions We want to list expense transactions based on a date range. So let’s create action method for filtering expense transactions with a specified date range. public ActionResult Index(DateTime? startDate, DateTime? 
endDate) {     //If date is not passed, take current month's first and last dte     DateTime dtNow;     dtNow = DateTime.Today;     if (!startDate.HasValue)     {         startDate = new DateTime(dtNow.Year, dtNow.Month, 1);         endDate = startDate.Value.AddMonths(1).AddDays(-1);     }     //take last date of start date's month, if end date is not passed     if (startDate.HasValue && !endDate.HasValue)     {         endDate = (new DateTime(startDate.Value.Year, startDate.Value.Month, 1)).AddMonths(1).AddDays(-1);     }     var expenses = expenseService.GetExpenses(startDate.Value ,endDate.Value);     //if request is Ajax will return partial view     if (Request.IsAjaxRequest())     {         return PartialView("ExpenseList", expenses);     }     //set start date and end date to ViewBag dictionary     ViewBag.StartDate = startDate.Value.ToShortDateString();     ViewBag.EndDate = endDate.Value.ToShortDateString();     //if request is not ajax     return View(expenses); } We are using the above Index Action method for both Ajax requests and normal requests. If there is a request for Ajax, we will call the PartialView ExpenseList. Razor Views for listing Expense information Let’s create view templates in Razor for showing list of Expense information ExpenseList.cshtml @model IEnumerable<MyFinance.Domain.Expense>   <table>         <tr>             <th>Actions</th>             <th>Category</th>             <th>                 Transaction             </th>             <th>                 Date             </th>             <th>                 Amount             </th>         </tr>       @foreach (var item in Model) {              <tr>             <td>                 @Html.ActionLink("Edit", "Edit",new { id = item.ExpenseId })                 @Ajax.ActionLink("Delete", "Delete", new { id = item.ExpenseId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divExpenseList" })             </td>              <td>                 @item.Category.Name             </td>             <td>                 @item.Transaction             </td>             <td>                 @String.Format("{0:d}", item.Date)             </td>             <td>                 @String.Format("{0:F}", item.Amount)             </td>         </tr>          }       </table>     <p>         @Html.ActionLink("Create New Expense", "Create") |         @Html.ActionLink("Create New Category", "Create","Category")     </p> Index.cshtml @using MyFinance.Helpers; @model IEnumerable<MyFinance.Domain.Expense> @{     ViewBag.Title = "Index"; }    <h2>Expense List</h2>    <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery-ui.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.ui.datepicker.js")" type="text/javascript"></script> <link href="@Url.Content("~/Content/jquery-ui-1.8.6.custom.css")" rel="stylesheet" type="text/css" />      @using (Ajax.BeginForm(new AjaxOptions{ UpdateTargetId="divExpenseList", HttpMethod="Get"})) {     <table>         <tr>         <td>         <div>           Start Date: @Html.TextBox("StartDate", Html.Encode(String.Format("{0:mm/dd/yyyy}", ViewData["StartDate"].ToString())), new { @class = "ui-datepicker" })         </div>         </td>         <td><div>            End Date: @Html.TextBox("EndDate", Html.Encode(String.Format("{0:mm/dd/yyyy}", ViewData["EndDate"].ToString())), new { @class = "ui-datepicker" })          </div></td>          <td> 
<input type="submit" value="Search By TransactionDate" /></td>         </tr>     </table>         }   <div id="divExpenseList">             @Html.Partial("ExpenseList", Model)     </div> <script type="text/javascript">     $().ready(function () {         $('.ui-datepicker').datepicker({             dateFormat: 'mm/dd/yy',             buttonImage: '@Url.Content("~/Content/calendar.gif")',             buttonImageOnly: true,             showOn: "button"         });     }); </script> Ajax search functionality using Ajax.BeginForm The search functionality of Index view is providing Ajax functionality using Ajax.BeginForm. The Ajax.BeginForm() method writes an opening <form> tag to the response. You can use this method in a using block. In that case, the method renders the closing </form> tag at the end of the using block and the form is submitted asynchronously by using JavaScript. The search functionality will call the Index Action method and this will return partial view ExpenseList for updating the search result. We want to update the response UI for the Ajax request onto divExpenseList element. So we have specified the UpdateTargetId as "divExpenseList" in the Ajax.BeginForm method. Add jQuery DatePicker Our search functionality is using a date range so we are providing two date pickers using jQuery datepicker. You need to add reference to the following JavaScript files to working with jQuery datepicker. jquery-ui.js jquery.ui.datepicker.js For theme support for datepicker, we can use a customized CSS class. In our example we have used a CSS file “jquery-ui-1.8.6.custom.css”. For more details about the datepicker component, visit jquery UI website at http://jqueryui.com/demos/datepicker . In the jQuery ready event, we have used following JavaScript function to initialize the UI element to show date picker. <script type="text/javascript">     $().ready(function () {         $('.ui-datepicker').datepicker({             dateFormat: 'mm/dd/yy',             buttonImage: '@Url.Content("~/Content/calendar.gif")',             buttonImageOnly: true,             showOn: "button"         });     }); </script>   Source Code You can download the source code from http://efmvc.codeplex.com/ . Summary In this two-part series, we have created a simple web application using ASP.NET MVC 3 RTM, Razor and EF Code First CTP 5. I have demonstrated patterns and practices  such as Dependency Injection, Repository pattern, Unit of Work, ViewModel and Service Layer. My primary objective was to demonstrate different practices and options for developing web apps using ASP.NET MVC 3 and EF Code First. You can implement these approaches in your own way for building web apps using ASP.NET MVC 3. I will refactor this demo app on later time.

    Read the article

  • Diving into OpenStack Network Architecture - Part 2 - Basic Use Cases

    - by Ronen Kofman
    In the previous post we reviewed several network components including Open vSwitch, Network Namespaces, Linux Bridges and veth pairs. In this post we will take three simple use cases and see how those basic components come together to create a complete SDN solution in OpenStack. With those three use cases we will review almost the entire network setup and see how all the pieces work together. The use cases we will use are: 1. Create network – what happens when we create a network, and how we can create multiple isolated networks. 2. Launch a VM – once we have networks we can launch VMs and connect them to networks. 3. DHCP request from a VM – OpenStack can automatically assign IP addresses to VMs. This is done through a local DHCP service controlled by OpenStack Neutron. We will see how this service runs and what a DHCP request and response look like. In this post we will show connectivity: we will see how packets get from point A to point B. We first focus on what a configured deployment looks like and only later discuss how and when the configuration is created. Personally I found it very valuable to see the actual interfaces and how they connect to each other through examples and hands-on experiments. After the end game is clear and we know how the connectivity works, in a later post, we will take a step back and explain how Neutron configures the components to provide such connectivity. We are going to get pretty technical shortly and I recommend trying these examples on your own deployment or using the Oracle OpenStack Tech Preview. Understanding these three use cases thoroughly, and how to look at them, will be very helpful when trying to debug a deployment in case something does not work. Use case #1: Create Network Creating a network is a simple operation; it can be performed from the GUI or the command line. When we create a network in OpenStack, the network is only available to the tenant who created it, or it can be defined as “shared” and then used by all tenants. A network can have multiple subnets, but for demonstration purposes and for simplicity we will assume that each network has exactly one subnet. 
Creating a network from the command line will look like this: # neutron net-create net1 Created a new network: +---------------------------+--------------------------------------+ | Field                     | Value                                | +---------------------------+--------------------------------------+ | admin_state_up            | True                                 | | id                        | 5f833617-6179-4797-b7c0-7d420d84040c | | name                      | net1                                 | | provider:network_type     | vlan                                 | | provider:physical_network | default                              | | provider:segmentation_id  | 1000                                 | | shared                    | False                                | | status                    | ACTIVE                               | | subnets                   |                                      | | tenant_id                 | 9796e5145ee546508939cd49ad59d51f     | +---------------------------+--------------------------------------+ Creating a subnet for this network will look like this: # neutron subnet-create net1 10.10.10.0/24 Created a new subnet: +------------------+------------------------------------------------+ | Field            | Value                                          | +------------------+------------------------------------------------+ | allocation_pools | {"start": "10.10.10.2", "end": "10.10.10.254"} | | cidr             | 10.10.10.0/24                                  | | dns_nameservers  |                                                | | enable_dhcp      | True                                           | | gateway_ip       | 10.10.10.1                                     | | host_routes      |                                                | | id               | 2d7a0a58-0674-439a-ad23-d6471aaae9bc           | | ip_version       | 4                                              | | name             |                                                | | network_id       | 5f833617-6179-4797-b7c0-7d420d84040c           | | tenant_id        | 9796e5145ee546508939cd49ad59d51f               | +------------------+------------------------------------------------+ We now have a network and a subnet, on the network topology view this looks like this: Now let’s dive in and see what happened under the hood. Looking at the control node we will discover that a new namespace was created: # ip netns list qdhcp-5f833617-6179-4797-b7c0-7d420d84040c   The name of the namespace is qdhcp-<network id> (see above), let’s look into the namespace and see what’s in it: # ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00     inet 127.0.0.1/8 scope host lo     inet6 ::1/128 scope host        valid_lft forever preferred_lft forever 12: tap26c9b807-7c: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN     link/ether fa:16:3e:1d:5c:81 brd ff:ff:ff:ff:ff:ff     inet 10.10.10.3/24 brd 10.10.10.255 scope global tap26c9b807-7c     inet6 fe80::f816:3eff:fe1d:5c81/64 scope link        valid_lft forever preferred_lft forever   We see two interfaces in the namespace, one is the loopback and the other one is an interface called “tap26c9b807-7c”. This interface has the IP address of 10.10.10.3 and it will also serve dhcp requests in a way we will see later. 
Let’s trace the connectivity of the “tap26c9b807-7c” interface from the namespace.  First stop is OVS, we see that the interface connects to bridge  “br-int” on OVS: # ovs-vsctl show 8a069c7c-ea05-4375-93e2-b9fc9e4b3ca1     Bridge "br-eth2"         Port "br-eth2"             Interface "br-eth2"                 type: internal         Port "eth2"             Interface "eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2"     Bridge br-ex         Port br-ex             Interface br-ex                 type: internal     Bridge br-int         Port "int-br-eth2"             Interface "int-br-eth2"         Port "tap26c9b807-7c"             tag: 1             Interface "tap26c9b807-7c"                 type: internal         Port br-int             Interface br-int                 type: internal     ovs_version: "1.11.0"   In the picture above we have a veth pair which has two ends called “int-br-eth2” and "phy-br-eth2", this veth pair is used to connect two bridge in OVS "br-eth2" and "br-int". In the previous post we explained how to check the veth connectivity using the ethtool command. It shows that the two are indeed a pair: # ethtool -S int-br-eth2 NIC statistics:      peer_ifindex: 10 . .   #ip link . . 10: phy-br-eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 . . Note that “phy-br-eth2” is connected to a bridge called "br-eth2" and one of this bridge's interfaces is the physical link eth2. This means that the network which we have just created has created a namespace which is connected to the physical interface eth2. eth2 is the “VM network” the physical interface where all the virtual machines connect to where all the VMs are connected. About network isolation: OpenStack supports creation of multiple isolated networks and can use several mechanisms to isolate the networks from one another. The isolation mechanism can be VLANs, VxLANs or GRE tunnels, this is configured as part of the initial setup in our deployment we use VLANs. When using VLAN tagging as an isolation mechanism a VLAN tag is allocated by Neutron from a pre-defined VLAN tags pool and assigned to the newly created network. By provisioning VLAN tags to the networks Neutron allows creation of multiple isolated networks on the same physical link.  The big difference between this and other platforms is that the user does not have to deal with allocating and managing VLANs to networks. The VLAN allocation and provisioning is handled by Neutron which keeps track of the VLAN tags, and responsible for allocating and reclaiming VLAN tags. In the example above net1 has the VLAN tag 1000, this means that whenever a VM is created and connected to this network the packets from that VM will have to be tagged with VLAN tag 1000 to go on this particular network. This is true for namespace as well, if we would like to connect a namespace to a particular network we have to make sure that the packets to and from the namespace are correctly tagged when they reach the VM network. In the example above we see that the namespace interface “tap26c9b807-7c” has vlan tag 1 assigned to it, if we examine OVS we see that it has flows which modify VLAN tag 1 to VLAN tag 1000 when a packet goes to the VM network on eth2 and vice versa. 
We can see this using the dump-flows command on OVS for packets going to the VM network we see the modification done on br-eth2: #  ovs-ofctl dump-flows br-eth2 NXST_FLOW reply (xid=0x4):  cookie=0x0, duration=18669.401s, table=0, n_packets=857, n_bytes=163350, idle_age=25, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:1000,NORMAL  cookie=0x0, duration=165108.226s, table=0, n_packets=14, n_bytes=1000, idle_age=5343, hard_age=65534, priority=2,in_port=2 actions=drop  cookie=0x0, duration=165109.813s, table=0, n_packets=1671, n_bytes=213304, idle_age=25, hard_age=65534, priority=1 actions=NORMAL   For packets coming from the interface to the namespace we see the following modification: #  ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4):  cookie=0x0, duration=18690.876s, table=0, n_packets=1610, n_bytes=210752, idle_age=1, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL  cookie=0x0, duration=165130.01s, table=0, n_packets=75, n_bytes=3686, idle_age=4212, hard_age=65534, priority=2,in_port=1 actions=drop  cookie=0x0, duration=165131.96s, table=0, n_packets=863, n_bytes=160727, idle_age=1, hard_age=65534, priority=1 actions=NORMAL   To summarize we can see that when a user creates a network Neutron creates a namespace and this namespace is connected through OVS to the “VM network”. OVS also takes care of tagging the packets from the namespace to the VM network with the correct VLAN tag and knows to modify the VLAN for packets coming from VM network to the namespace. Now let’s see what happens when a VM is launched and how it is connected to the “VM network”. Use case #2: Launch a VM Launching a VM can be done from Horizon or from the command line this is how we do it from Horizon: Attach the network: And Launch Once the virtual machine is up and running we can see the associated IP using the nova list command : # nova list +--------------------------------------+--------------+--------+------------+-------------+-----------------+ | ID                                   | Name         | Status | Task State | Power State | Networks        | +--------------------------------------+--------------+--------+------------+-------------+-----------------+ | 3707ac87-4f5d-4349-b7ed-3a673f55e5e1 | Oracle Linux | ACTIVE | None       | Running     | net1=10.10.10.2 | +--------------------------------------+--------------+--------+------------+-------------+-----------------+ The nova list command shows us that the VM is running and that the IP 10.10.10.2 is assigned to this VM. Let’s trace the connectivity from the VM to VM network on eth2 starting with the VM definition file. The configuration files of the VM including the virtual disk(s), in case of ephemeral storage, are stored on the compute node at/var/lib/nova/instances/<instance-id>/. 
Looking into the VM definition file, libvirt.xml, we see that the VM is connected to an interface called “tap53903a95-82” which is connected to a Linux bridge called “qbr53903a95-82”: <interface type="bridge">       <mac address="fa:16:3e:fe:c7:87"/>       <source bridge="qbr53903a95-82"/>       <target dev="tap53903a95-82"/>     </interface>   Looking at the bridge using the brctl show command we see this: # brctl show bridge name     bridge id               STP enabled     interfaces qbr53903a95-82          8000.7e7f3282b836       no              qvb53903a95-82                                                         tap53903a95-82    The bridge has two interfaces, one connected to the VM (“tap53903a95-82”) and another one (“qvb53903a95-82”) connected to the “br-int” bridge on OVS: # ovs-vsctl show 83c42f80-77e9-46c8-8560-7697d76de51c     Bridge "br-eth2"         Port "br-eth2"             Interface "br-eth2"                 type: internal         Port "eth2"             Interface "eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2"     Bridge br-int         Port br-int             Interface br-int                 type: internal         Port "int-br-eth2"             Interface "int-br-eth2"         Port "qvo53903a95-82"             tag: 3             Interface "qvo53903a95-82"     ovs_version: "1.11.0"   As we showed earlier, “br-int” is connected to “br-eth2” on OVS using the veth pair int-br-eth2/phy-br-eth2, and br-eth2 is connected to the physical interface eth2. The whole flow end to end looks like this: VM → tap53903a95-82 (virtual interface) → qbr53903a95-82 (Linux bridge) → qvb53903a95-82 (interface connecting the Linux bridge to the OVS bridge br-int) → int-br-eth2 (one end of the veth pair) → phy-br-eth2 (the other end of the veth pair) → eth2 (physical interface). The purpose of the Linux bridge connecting to the VM is to allow security group enforcement with iptables. Security groups are enforced at the edge point, which is the interface of the VM; since iptables rules cannot be applied to OVS bridges, we use a Linux bridge to apply them. In the future we hope to see this Linux bridge go away once security group rules can be applied directly on OVS. VLAN tags: As we discussed in the first use case, net1 is using VLAN tag 1000; looking at OVS above we see that qvo53903a95-82 is tagged with VLAN tag 3. The modification from VLAN tag 3 to 1000 on the way to the physical network is done by OVS as part of the packet flow on br-eth2, in the same way we showed before. To summarize, when a VM is launched it is connected to the VM network through the chain of elements described here, and as packets travel from the VM to the network and back, the VLAN tag is modified along the way. Use case #3: Serving a DHCP request coming from the virtual machine In the previous use cases we have shown that both the namespace called qdhcp-<some id> and the VM end up connecting to the physical interface eth2 on their respective nodes, and both tag their packets with VLAN tag 1000. We saw that the namespace has an interface with the IP 10.10.10.3. Since the VM and the namespace are connected to each other and have interfaces on the same subnet, they can ping each other; for example, a ping from the VM (which was assigned 10.10.10.2) to the namespace succeeds. The fact that they are connected and can ping each other becomes very handy when something doesn’t work right and we need to isolate the problem. In such a case, knowing that we should be able to ping from the VM to the namespace and back can be used to trace the disconnect using tcpdump or other monitoring tools. 
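As a quick, hands-on way to use that fact, the same check can be run from the control node side; a short sketch using the IDs from this walkthrough (substitute your own namespace name, interface and addresses):

# Ping the VM (10.10.10.2 here) from inside the DHCP namespace of net1.
ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c ping -c 3 10.10.10.2

# If the ping fails, watch the namespace's tap interface while retrying,
# to see whether any traffic arrives at all and narrow down the broken hop.
ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c tcpdump -n -i tap26c9b807-7c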
To serve DHCP requests coming from VMs on the network Neutron uses a Linux tool called “dnsmasq”,this is a lightweight DNS and DHCP service you can read more about it here. If we look at the dnsmasq on the control node with the ps command we see this: dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap26c9b807-7c --except-interface=lo --pid-file=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host --dhcp-optsfile=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/opts --leasefile-ro --dhcp-range=tag0,10.10.10.0,static,120s --dhcp-lease-max=256 --conf-file= --domain=openstacklocal The service connects to the tap interface in the namespace (“--interface=tap26c9b807-7c”), If we look at the hosts file we see this: # cat  /var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host fa:16:3e:fe:c7:87,host-10-10-10-2.openstacklocal,10.10.10.2   If you look at the console output above you can see the MAC address fa:16:3e:fe:c7:87 which is the VM MAC. This MAC address is mapped to IP 10.10.10.2 and so when a DHCP request comes with this MAC dnsmasq will return the 10.10.10.2.If we look into the namespace at the time we initiate a DHCP request from the VM (this can be done by simply restarting the network service in the VM) we see the following: # ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c tcpdump -n 19:27:12.191280 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:fe:c7:87, length 310 19:27:12.191666 IP 10.10.10.3.bootps > 10.10.10.2.bootpc: BOOTP/DHCP, Reply, length 325   To summarize, the DHCP service is handled by dnsmasq which is configured by Neutron to listen to the interface in the DHCP namespace. Neutron also configures dnsmasq with the combination of MAC and IP so when a DHCP request comes along it will receive the assigned IP. Summary In this post we relied on the components described in the previous post and saw how network connectivity is achieved using three simple use cases. These use cases gave a good view of the entire network stack and helped understand how an end to end connection is being made between a VM on a compute node and the DHCP namespace on the control node. One conclusion we can draw from what we saw here is that if we launch a VM and it is able to perform a DHCP request and receive a correct IP then there is reason to believe that the network is working as expected. We saw that a packet has to travel through a long list of components before reaching its destination and if it has done so successfully this means that many components are functioning properly. In the next post we will look at some more sophisticated services Neutron supports and see how they work. We will see that while there are some more components involved for the most part the concepts are the same. @RonenKofman
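A small practical note on the DHCP use case above: instead of restarting the whole network service in the VM, the lease can be released and renewed explicitly while tcpdump runs in the namespace. This is a sketch assuming a Linux guest whose eth0 interface is managed by dhclient (adjust the tool and interface name for your distribution):

# Inside the VM: drop the current lease, then request a new one on eth0.
sudo dhclient -r eth0     # release
sudo dhclient eth0        # renew; triggers the BOOTP/DHCP request and reply
                          # seen in the namespace tcpdump output above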

    Read the article

  • ASP.NET MVC 3 Hosting :: How to Deploy Web Apps Using ASP.NET MVC 3, Razor and EF Code First - Part II

    - by mbridge
    In previous post, I have discussed on how to work with ASP.NET MVC 3 and EF Code First for developing web apps. In this post, I will demonstrate on working with domain entity with deep object graph, Service Layer and View Models and will also complete the rest of the demo application. In the previous post, we have done CRUD operations against Category entity and this post will be focus on Expense entity those have an association with Category entity. Domain Model Category Entity public class Category   {       public int CategoryId { get; set; }       [Required(ErrorMessage = "Name Required")]       [StringLength(25, ErrorMessage = "Must be less than 25 characters")]       public string Name { get; set;}       public string Description { get; set; }       public virtual ICollection<Expense> Expenses { get; set; }   } Expense Entity public class Expense     {                public int ExpenseId { get; set; }                public string  Transaction { get; set; }         public DateTime Date { get; set; }         public double Amount { get; set; }         public int CategoryId { get; set; }         public virtual Category Category { get; set; }     } We have two domain entities - Category and Expense. A single category contains a list of expense transactions and every expense transaction should have a Category. Repository class for Expense Transaction Let’s create repository class for handling CRUD operations for Expense entity public class ExpenseRepository : RepositoryBase<Expense>, IExpenseRepository     {     public ExpenseRepository(IDatabaseFactory databaseFactory)         : base(databaseFactory)         {         }                } public interface IExpenseRepository : IRepository<Expense> { } Service Layer If you are new to Service Layer, checkout Martin Fowler's article Service Layer . According to Martin Fowler, Service Layer defines an application's boundary and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic, controlling transactions and coordinating responses in the implementation of its operations. Controller classes should be lightweight and do not put much of business logic onto it. We can use the service layer as the business logic layer and can encapsulate the rules of the application. 
Let’s create a Service class for coordinates the transaction for Expense public interface IExpenseService {     IEnumerable<Expense> GetExpenses(DateTime startDate, DateTime ednDate);     Expense GetExpense(int id);             void CreateExpense(Expense expense);     void DeleteExpense(int id);     void SaveExpense(); } public class ExpenseService : IExpenseService {     private readonly IExpenseRepository expenseRepository;            private readonly IUnitOfWork unitOfWork;     public ExpenseService(IExpenseRepository expenseRepository, IUnitOfWork unitOfWork)     {                  this.expenseRepository = expenseRepository;         this.unitOfWork = unitOfWork;     }     public IEnumerable<Expense> GetExpenses(DateTime startDate, DateTime endDate)     {         var expenses = expenseRepository.GetMany(exp => exp.Date >= startDate && exp.Date <= endDate);         return expenses;     }     public void CreateExpense(Expense expense)     {         expenseRepository.Add(expense);         unitOfWork.Commit();     }     public Expense GetExpense(int id)     {         var expense = expenseRepository.GetById(id);         return expense;     }     public void DeleteExpense(int id)     {         var expense = expenseRepository.GetById(id);         expenseRepository.Delete(expense);         unitOfWork.Commit();     }     public void SaveExpense()     {         unitOfWork.Commit();     } } View Model for Expense Transactions In real world ASP.NET MVC applications, we need to design model objects especially for our views. Our domain objects are mainly designed for the needs for domain model and it is representing the domain of our applications. On the other hand, View Model objects are designed for our needs for views. We have an Expense domain entity that has an association with Category. While we are creating a new Expense, we have to specify that in which Category belongs with the new Expense transaction. The user interface for Expense transaction will have form fields for representing the Expense entity and a CategoryId for representing the Category. So let's create view model for representing the need for Expense transactions. public class ExpenseViewModel {     public int ExpenseId { get; set; }       [Required(ErrorMessage = "Category Required")]     public int CategoryId { get; set; }       [Required(ErrorMessage = "Transaction Required")]     public string Transaction { get; set; }       [Required(ErrorMessage = "Date Required")]     public DateTime Date { get; set; }       [Required(ErrorMessage = "Amount Required")]     public double Amount { get; set; }       public IEnumerable<SelectListItem> Category { get; set; } } The ExpenseViewModel is designed for the purpose of View template and contains the all validation rules. It has properties for mapping values to Expense entity and a property Category for binding values to a drop-down for list values of Category. 
Create Expense transaction Let’s create action methods in the ExpenseController for creating expense transactions public ActionResult Create() {     var expenseModel = new ExpenseViewModel();     var categories = categoryService.GetCategories();     expenseModel.Category = categories.ToSelectListItems(-1);     expenseModel.Date = DateTime.Today;     return View(expenseModel); } [HttpPost] public ActionResult Create(ExpenseViewModel expenseViewModel) {                      if (!ModelState.IsValid)         {             var categories = categoryService.GetCategories();             expenseViewModel.Category = categories.ToSelectListItems(expenseViewModel.CategoryId);             return View("Save", expenseViewModel);         }         Expense expense=new Expense();         ModelCopier.CopyModel(expenseViewModel,expense);         expenseService.CreateExpense(expense);         return RedirectToAction("Index");              } In the Create action method for HttpGet request, we have created an instance of our View Model ExpenseViewModel with Category information for the drop-down list and passing the Model object to View template. The extension method ToSelectListItems is shown below public static IEnumerable<SelectListItem> ToSelectListItems(         this IEnumerable<Category> categories, int  selectedId) {     return           categories.OrderBy(category => category.Name)                 .Select(category =>                     new SelectListItem                     {                         Selected = (category.CategoryId == selectedId),                         Text = category.Name,                         Value = category.CategoryId.ToString()                     }); } In the Create action method for HttpPost, our view model object ExpenseViewModel will map with posted form input values. We need to create an instance of Expense for the persistence purpose. So we need to copy values from ExpenseViewModel object to Expense object. ASP.NET MVC futures assembly provides a static class ModelCopier that can use for copying values between Model objects. ModelCopier class has two static methods - CopyCollection and CopyModel.CopyCollection method will copy values between two collection objects and CopyModel will copy values between two model objects. We have used CopyModel method of ModelCopier class for copying values from expenseViewModel object to expense object. Finally we did a call to CreateExpense method of ExpenseService class for persisting new expense transaction. List Expense Transactions We want to list expense transactions based on a date range. So let’s create action method for filtering expense transactions with a specified date range. public ActionResult Index(DateTime? startDate, DateTime? 
endDate) {     //If date is not passed, take current month's first and last dte     DateTime dtNow;     dtNow = DateTime.Today;     if (!startDate.HasValue)     {         startDate = new DateTime(dtNow.Year, dtNow.Month, 1);         endDate = startDate.Value.AddMonths(1).AddDays(-1);     }     //take last date of start date's month, if end date is not passed     if (startDate.HasValue && !endDate.HasValue)     {         endDate = (new DateTime(startDate.Value.Year, startDate.Value.Month, 1)).AddMonths(1).AddDays(-1);     }     var expenses = expenseService.GetExpenses(startDate.Value ,endDate.Value);     //if request is Ajax will return partial view     if (Request.IsAjaxRequest())     {         return PartialView("ExpenseList", expenses);     }     //set start date and end date to ViewBag dictionary     ViewBag.StartDate = startDate.Value.ToShortDateString();     ViewBag.EndDate = endDate.Value.ToShortDateString();     //if request is not ajax     return View(expenses); } We are using the above Index Action method for both Ajax requests and normal requests. If there is a request for Ajax, we will call the PartialView ExpenseList. Razor Views for listing Expense information Let’s create view templates in Razor for showing list of Expense information ExpenseList.cshtml @model IEnumerable<MyFinance.Domain.Expense>   <table>         <tr>             <th>Actions</th>             <th>Category</th>             <th>                 Transaction             </th>             <th>                 Date             </th>             <th>                 Amount             </th>         </tr>       @foreach (var item in Model) {              <tr>             <td>                 @Html.ActionLink("Edit", "Edit",new { id = item.ExpenseId })                 @Ajax.ActionLink("Delete", "Delete", new { id = item.ExpenseId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divExpenseList" })             </td>              <td>                 @item.Category.Name             </td>             <td>                 @item.Transaction             </td>             <td>                 @String.Format("{0:d}", item.Date)             </td>             <td>                 @String.Format("{0:F}", item.Amount)             </td>         </tr>          }       </table>     <p>         @Html.ActionLink("Create New Expense", "Create") |         @Html.ActionLink("Create New Category", "Create","Category")     </p> Index.cshtml @using MyFinance.Helpers; @model IEnumerable<MyFinance.Domain.Expense> @{     ViewBag.Title = "Index"; }    <h2>Expense List</h2>    <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery-ui.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.ui.datepicker.js")" type="text/javascript"></script> <link href="@Url.Content("~/Content/jquery-ui-1.8.6.custom.css")" rel="stylesheet" type="text/css" />      @using (Ajax.BeginForm(new AjaxOptions{ UpdateTargetId="divExpenseList", HttpMethod="Get"})) {     <table>         <tr>         <td>         <div>           Start Date: @Html.TextBox("StartDate", Html.Encode(String.Format("{0:mm/dd/yyyy}", ViewData["StartDate"].ToString())), new { @class = "ui-datepicker" })         </div>         </td>         <td><div>            End Date: @Html.TextBox("EndDate", Html.Encode(String.Format("{0:mm/dd/yyyy}", ViewData["EndDate"].ToString())), new { @class = "ui-datepicker" })          </div></td>          <td> 
    <input type="submit" value="Search By TransactionDate" /></td> </tr> </table> } <div id="divExpenseList"> @Html.Partial("ExpenseList", Model) </div> <script type="text/javascript"> $().ready(function () { $('.ui-datepicker').datepicker({ dateFormat: 'mm/dd/yy', buttonImage: '@Url.Content("~/Content/calendar.gif")', buttonImageOnly: true, showOn: "button" }); }); </script> Ajax search functionality using Ajax.BeginForm The search functionality of the Index view provides Ajax behaviour using Ajax.BeginForm. The Ajax.BeginForm() method writes an opening <form> tag to the response. You can use this method in a using block; in that case, the method renders the closing </form> tag at the end of the using block and the form is submitted asynchronously using JavaScript. The search will call the Index action method, which returns the partial view ExpenseList to update the search result. We want to update the response UI for the Ajax request in the divExpenseList element, so we have specified the UpdateTargetId as "divExpenseList" in the Ajax.BeginForm method. Add jQuery DatePicker Our search functionality uses a date range, so we provide two date pickers using the jQuery datepicker. You need to add references to the following JavaScript files to work with the jQuery datepicker: - jquery-ui.js - jquery.ui.datepicker.js For theme support for the datepicker, we can use a customized CSS class; in our example we have used the CSS file “jquery-ui-1.8.6.custom.css”. For more details about the datepicker component, visit the jQuery UI website at http://jqueryui.com/demos/datepicker. In the jQuery ready event, we use the following JavaScript function to initialize the UI element that shows the date picker. <script type="text/javascript"> $().ready(function () { $('.ui-datepicker').datepicker({ dateFormat: 'mm/dd/yy', buttonImage: '@Url.Content("~/Content/calendar.gif")', buttonImageOnly: true, showOn: "button" }); }); </script> Summary In this two-part series, we have created a simple web application using ASP.NET MVC 3 RTM, Razor and EF Code First CTP 5. I have demonstrated patterns and practices such as Dependency Injection, the Repository pattern, Unit of Work, ViewModel and Service Layer. My primary objective was to demonstrate different practices and options for developing web apps with ASP.NET MVC 3; you can implement these approaches in your own way when building your apps. I will refactor this demo app at a later time.

    Read the article

  • Branding Support for TopComponents

    - by Geertjan
    In yesterday's blog entry, you saw how a menu item can be created, in this case with the label "Brand", especially for Java classes that extend TopComponent: And, as you can see here, it's not about the name of the class, i.e., not because the class above is named "BlaTopComponent" because below the "Brand" men item is also available for the class named "Bla": Both the files BlaTopComponent.java and Bla.java have the "Brand" menu item available, because both extend the "org.openide.windows.TopComponent"  class, as shown yesterday. Now we continue by creating a new JPanel, with checkboxes for each part of a TopComponent that we consider to be brandable. In my case, this is the end result, at deployment, when the Brand menu item is clicked for the Bla class: When the user (who, in this case, is a developer) clicks OK, a constructor is created and the related client properties are added, depending on which of the checkboxes are clicked: public Bla() {     putClientProperty(TopComponent.PROP_SLIDING_DISABLED, false);     putClientProperty(TopComponent.PROP_UNDOCKING_DISABLED, true);     putClientProperty(TopComponent.PROP_MAXIMIZATION_DISABLED, false);     putClientProperty(TopComponent.PROP_CLOSING_DISABLED, true);     putClientProperty(TopComponent.PROP_DRAGGING_DISABLED, false); } At this point, no check is done to see whether a constructor already exists, nor whether the client properties are already available. That's for an upcoming blog entry! Right now, the constructor is always created, regardless of whether it already exists, and the client properties are always added. The key to all this is the 'actionPeformed' of the TopComponent, which was left empty yesterday. We start by creating a JDialog from the JPanel and we retrieve the selected state of the checkboxes defined in the JPanel: @Override public void actionPerformed(ActionEvent ev) {     String msg = dobj.getName() + " Branding";     final BrandTopComponentPanel brandTopComponentPanel = new BrandTopComponentPanel();     dd = new DialogDescriptor(brandTopComponentPanel, msg, true, new ActionListener() {         @Override         public void actionPerformed(ActionEvent e) {             Object result = dd.getValue();             if (DialogDescriptor.OK_OPTION == result) {                 isClosing = brandTopComponentPanel.getClosingCheckBox().isSelected();                 isDragging = brandTopComponentPanel.getDraggingCheckBox().isSelected();                 isMaximization = brandTopComponentPanel.getMaximizationCheckBox().isSelected();                 isSliding = brandTopComponentPanel.getSlidingCheckBox().isSelected();                 isUndocking = brandTopComponentPanel.getUndockingCheckBox().isSelected();                 JavaSource javaSource = JavaSource.forFileObject(dobj.getPrimaryFile());                 try {                     javaSource.runUserActionTask(new ScanTask(javaSource), true);                 } catch (IOException ex) {                     Exceptions.printStackTrace(ex);                 }             }         }     });     DialogDisplayer.getDefault().createDialog(dd).setVisible(true); } Then we start a scan process, which introduces the branding. We're already doing a scan process for identifying whether a class is a TopComponent. 
So, let's combine those two scans, branching out based on which one we're doing: private class ScanTask implements Task<CompilationController> {     private BrandTopComponentAction action = null;     private JavaSource js = null;     private ScanTask(JavaSource js) {         this.js = js;     }     private ScanTask(BrandTopComponentAction action) {         this.action = action;     }     @Override     public void run(final CompilationController info) throws Exception {         info.toPhase(Phase.ELEMENTS_RESOLVED);         if (action != null) {             new EnableIfTopComponentScanner(info, action).scan(                     info.getCompilationUnit(), null);         } else {             introduceBranding();         }     }     private void introduceBranding() throws IOException {         CancellableTask task = new CancellableTask<WorkingCopy>() {             @Override             public void run(WorkingCopy workingCopy) throws IOException {                 workingCopy.toPhase(Phase.RESOLVED);                 CompilationUnitTree cut = workingCopy.getCompilationUnit();                 TreeMaker treeMaker = workingCopy.getTreeMaker();                 for (Tree typeDecl : cut.getTypeDecls()) {                     if (Tree.Kind.CLASS == typeDecl.getKind()) {                         ClassTree clazz = (ClassTree) typeDecl;                         ModifiersTree methodModifiers = treeMaker.Modifiers(Collections.<Modifier>singleton(Modifier.PUBLIC));                         MethodTree newMethod =                                 treeMaker.Method(methodModifiers,                                 "<init>",                                 treeMaker.PrimitiveType(TypeKind.VOID),                                 Collections.<TypeParameterTree>emptyList(),                                 Collections.EMPTY_LIST,                                 Collections.<ExpressionTree>emptyList(),                                 "{ putClientProperty(TopComponent.PROP_SLIDING_DISABLED, " + isSliding + ");\n"+                                 "  putClientProperty(TopComponent.PROP_UNDOCKING_DISABLED, " + isUndocking + ");\n"+                                 "  putClientProperty(TopComponent.PROP_MAXIMIZATION_DISABLED, " + isMaximization + ");\n"+                                 "  putClientProperty(TopComponent.PROP_CLOSING_DISABLED, " + isClosing + ");\n"+                                 "  putClientProperty(TopComponent.PROP_DRAGGING_DISABLED, " + isDragging + "); }\n",                                 null);                         ClassTree modifiedClazz = treeMaker.addClassMember(clazz, newMethod);                         workingCopy.rewrite(clazz, modifiedClazz);                     }                 }             }             @Override             public void cancel() {             }         };         ModificationResult result = js.runModificationTask(task);         result.commit();     } } private static class EnableIfTopComponentScanner extends TreePathScanner<Void, Void> {     private CompilationInfo info;     private final AbstractAction action;     public EnableIfTopComponentScanner(CompilationInfo info, AbstractAction action) {         this.info = info;         this.action = action;     }     @Override     public Void visitClass(ClassTree t, Void v) {         Element el = info.getTrees().getElement(getCurrentPath());         if (el != null) {             TypeElement te = (TypeElement) el;             if (te.getSuperclass().toString().equals("org.openide.windows.TopComponent")) {                 
action.setEnabled(true);             } else {                 action.setEnabled(false);             }         }         return null;     } }

    Read the article

  • Problems extracting information from RSS feed description field

    - by Graeme
    Hi, I've built an iPhone application using the parsing code from the TopSongs sample iPhone application. I've hit a problem though - the feed I'm trying to parse data from doesn't have a separate field for every piece of information (i.e. if it was for a feed about dogs, all the information such as dog type, dog age and dog price is contained in the feed. However, the TopSongs app relies on information having its own tags, so instead of using it uses and . So my question is this. How do I extract this information from the description field so that it can be parsed using the TopSongs parser? Can you somehow extract the dog age, price and type information using Yahoo Pipes and use that RSS feed for the feed? Or is there code that I can add to do it in application? Update: To view the code of my application parser (based on the TopSongs Core Data Apple provided application, see below. Here's a sample of one item from the the actual RSS feed I'm using (the description is longer, and has status,size, and a couple of other fields, but they're all formatted the same.: <item> <title>MOE, MARGRET STREET</title> <description> <b>District/Region:</b>&nbsp;REGION 09</br><b>Location:</b>&nbsp;MOE</br><b>Name:</b>&nbsp;MARGRET STREET</br></description> <pubDate>Thu,11 Mar 2010 05:43:03 GMT</pubDate> <guid>1266148</guid> </item> /* File: iTunesRSSImporter.m Abstract: Downloads, parses, and imports the iTunes top songs RSS feed into Core Data. Version: 1.1 Disclaimer: IMPORTANT: This Apple software is supplied to you by Apple Inc. ("Apple") in consideration of your agreement to the following terms, and your use, installation, modification or redistribution of this Apple software constitutes acceptance of these terms. If you do not agree with these terms, please do not use, install, modify or redistribute this Apple software. In consideration of your agreement to abide by the following terms, and subject to these terms, Apple grants you a personal, non-exclusive license, under Apple's copyrights in this original Apple software (the "Apple Software"), to use, reproduce, modify and redistribute the Apple Software, with or without modifications, in source and/or binary forms; provided that if you redistribute the Apple Software in its entirety and without modifications, you must retain this notice and the following text and disclaimers in all such redistributions of the Apple Software. Neither the name, trademarks, service marks or logos of Apple Inc. may be used to endorse or promote products derived from the Apple Software without specific prior written permission from Apple. Except as expressly stated in this notice, no other rights or licenses, express or implied, are granted by Apple herein, including but not limited to any patent rights that may be infringed by your derivative works or by other works in which the Apple Software may be incorporated. The Apple Software is provided by Apple on an "AS IS" basis. APPLE MAKES NO WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, REGARDING THE APPLE SOFTWARE OR ITS USE AND OPERATION ALONE OR IN COMBINATION WITH YOUR PRODUCTS. 
IN NO EVENT SHALL APPLE BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) ARISING IN ANY WAY OUT OF THE USE, REPRODUCTION, MODIFICATION AND/OR DISTRIBUTION OF THE APPLE SOFTWARE, HOWEVER CAUSED AND WHETHER UNDER THEORY OF CONTRACT, TORT (INCLUDING NEGLIGENCE), STRICT LIABILITY OR OTHERWISE, EVEN IF APPLE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Copyright (C) 2009 Apple Inc. All Rights Reserved. */ #import "iTunesRSSImporter.h" #import "Song.h" #import "Category.h" #import "CategoryCache.h" #import <libxml/tree.h> // Function prototypes for SAX callbacks. This sample implements a minimal subset of SAX callbacks. // Depending on your application's needs, you might want to implement more callbacks. static void startElementSAX(void *context, const xmlChar *localname, const xmlChar *prefix, const xmlChar *URI, int nb_namespaces, const xmlChar **namespaces, int nb_attributes, int nb_defaulted, const xmlChar **attributes); static void endElementSAX(void *context, const xmlChar *localname, const xmlChar *prefix, const xmlChar *URI); static void charactersFoundSAX(void *context, const xmlChar *characters, int length); static void errorEncounteredSAX(void *context, const char *errorMessage, ...); // Forward reference. The structure is defined in full at the end of the file. static xmlSAXHandler simpleSAXHandlerStruct; // Class extension for private properties and methods. @interface iTunesRSSImporter () @property BOOL storingCharacters; @property (nonatomic, retain) NSMutableData *characterBuffer; @property BOOL done; @property BOOL parsingASong; @property NSUInteger countForCurrentBatch; @property (nonatomic, retain) Song *currentSong; @property (nonatomic, retain) NSURLConnection *rssConnection; @property (nonatomic, retain) NSDateFormatter *dateFormatter; // The autorelease pool property is assign because autorelease pools cannot be retained. 
@property (nonatomic, assign) NSAutoreleasePool *importPool; @end static double lookuptime = 0; @implementation iTunesRSSImporter @synthesize iTunesURL, delegate, persistentStoreCoordinator; @synthesize rssConnection, done, parsingASong, storingCharacters, currentSong, countForCurrentBatch, characterBuffer, dateFormatter, importPool; - (void)dealloc { [iTunesURL release]; [characterBuffer release]; [currentSong release]; [rssConnection release]; [dateFormatter release]; [persistentStoreCoordinator release]; [insertionContext release]; [songEntityDescription release]; [theCache release]; [super dealloc]; } - (void)main { self.importPool = [[NSAutoreleasePool alloc] init]; if (delegate && [delegate respondsToSelector:@selector(importerDidSave:)]) { [[NSNotificationCenter defaultCenter] addObserver:delegate selector:@selector(importerDidSave:) name:NSManagedObjectContextDidSaveNotification object:self.insertionContext]; } done = NO; self.dateFormatter = [[[NSDateFormatter alloc] init] autorelease]; [dateFormatter setDateStyle:NSDateFormatterLongStyle]; [dateFormatter setTimeStyle:NSDateFormatterNoStyle]; // necessary because iTunes RSS feed is not localized, so if the device region has been set to other than US // the date formatter must be set to US locale in order to parse the dates [dateFormatter setLocale:[[[NSLocale alloc] initWithLocaleIdentifier:@"US"] autorelease]]; self.characterBuffer = [NSMutableData data]; NSURLRequest *theRequest = [NSURLRequest requestWithURL:iTunesURL]; // create the connection with the request and start loading the data rssConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self]; // This creates a context for "push" parsing in which chunks of data that are not "well balanced" can be passed // to the context for streaming parsing. The handler structure defined above will be used for all the parsing. // The second argument, self, will be passed as user data to each of the SAX handlers. The last three arguments // are left blank to avoid creating a tree in memory. context = xmlCreatePushParserCtxt(&simpleSAXHandlerStruct, self, NULL, 0, NULL); if (rssConnection != nil) { do { [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:[NSDate distantFuture]]; } while (!done); } // Display the total time spent finding a specific object for a relationship NSLog(@"lookup time %f", lookuptime); // Release resources used only in this thread. 
xmlFreeParserCtxt(context); self.characterBuffer = nil; self.dateFormatter = nil; self.rssConnection = nil; self.currentSong = nil; [theCache release]; theCache = nil; NSError *saveError = nil; NSAssert1([insertionContext save:&saveError], @"Unhandled error saving managed object context in import thread: %@", [saveError localizedDescription]); if (delegate && [delegate respondsToSelector:@selector(importerDidSave:)]) { [[NSNotificationCenter defaultCenter] removeObserver:delegate name:NSManagedObjectContextDidSaveNotification object:self.insertionContext]; } if (self.delegate != nil && [self.delegate respondsToSelector:@selector(importerDidFinishParsingData:)]) { [self.delegate importerDidFinishParsingData:self]; } [importPool release]; self.importPool = nil; } - (NSManagedObjectContext *)insertionContext { if (insertionContext == nil) { insertionContext = [[NSManagedObjectContext alloc] init]; [insertionContext setPersistentStoreCoordinator:self.persistentStoreCoordinator]; } return insertionContext; } - (void)forwardError:(NSError *)error { if (self.delegate != nil && [self.delegate respondsToSelector:@selector(importer:didFailWithError:)]) { [self.delegate importer:self didFailWithError:error]; } } - (NSEntityDescription *)songEntityDescription { if (songEntityDescription == nil) { songEntityDescription = [[NSEntityDescription entityForName:@"Song" inManagedObjectContext:self.insertionContext] retain]; } return songEntityDescription; } - (CategoryCache *)theCache { if (theCache == nil) { theCache = [[CategoryCache alloc] init]; theCache.managedObjectContext = self.insertionContext; } return theCache; } - (Song *)currentSong { if (currentSong == nil) { currentSong = [[Song alloc] initWithEntity:self.songEntityDescription insertIntoManagedObjectContext:self.insertionContext]; } return currentSong; } #pragma mark NSURLConnection Delegate methods // Forward errors to the delegate. - (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error { [self performSelectorOnMainThread:@selector(forwardError:) withObject:error waitUntilDone:NO]; // Set the condition which ends the run loop. done = YES; } // Called when a chunk of data has been downloaded. - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data { // Process the downloaded chunk of data. xmlParseChunk(context, (const char *)[data bytes], [data length], 0); } - (void)connectionDidFinishLoading:(NSURLConnection *)connection { // Signal the context that parsing is complete by passing "1" as the last parameter. xmlParseChunk(context, NULL, 0, 1); context = NULL; // Set the condition which ends the run loop. done = YES; } #pragma mark Parsing support methods static const NSUInteger kImportBatchSize = 20; - (void)finishedCurrentSong { parsingASong = NO; self.currentSong = nil; countForCurrentBatch++; // Periodically purge the autorelease pool and save the context. The frequency of this action may need to be tuned according to the // size of the objects being parsed. The goal is to keep the autorelease pool from growing too large, but // taking this action too frequently would be wasteful and reduce performance. 
if (countForCurrentBatch == kImportBatchSize) { [importPool release]; self.importPool = [[NSAutoreleasePool alloc] init]; NSError *saveError = nil; NSAssert1([insertionContext save:&saveError], @"Unhandled error saving managed object context in import thread: %@", [saveError localizedDescription]); countForCurrentBatch = 0; } } /* Character data is appended to a buffer until the current element ends. */ - (void)appendCharacters:(const char *)charactersFound length:(NSInteger)length { [characterBuffer appendBytes:charactersFound length:length]; } - (NSString *)currentString { // Create a string with the character data using UTF-8 encoding. UTF-8 is the default XML data encoding. NSString *currentString = [[[NSString alloc] initWithData:characterBuffer encoding:NSUTF8StringEncoding] autorelease]; [characterBuffer setLength:0]; return currentString; } @end #pragma mark SAX Parsing Callbacks // The following constants are the XML element names and their string lengths for parsing comparison. // The lengths include the null terminator, to ensure exact matches. static const char *kName_Item = "item"; static const NSUInteger kLength_Item = 5; static const char *kName_Title = "title"; static const NSUInteger kLength_Title = 6; static const char *kName_Category = "category"; static const NSUInteger kLength_Category = 9; static const char *kName_Itms = "itms"; static const NSUInteger kLength_Itms = 5; static const char *kName_Artist = "description"; static const NSUInteger kLength_Artist = 7; static const char *kName_Album = "description"; static const NSUInteger kLength_Album = 6; static const char *kName_ReleaseDate = "releasedate"; static const NSUInteger kLength_ReleaseDate = 12; /* This callback is invoked when the importer finds the beginning of a node in the XML. For this application, out parsing needs are relatively modest - we need only match the node name. An "item" node is a record of data about a song. In that case we create a new Song object. The other nodes of interest are several of the child nodes of the Song currently being parsed. For those nodes we want to accumulate the character data in a buffer. Some of the child nodes use a namespace prefix. */ static void startElementSAX(void *parsingContext, const xmlChar *localname, const xmlChar *prefix, const xmlChar *URI, int nb_namespaces, const xmlChar **namespaces, int nb_attributes, int nb_defaulted, const xmlChar **attributes) { iTunesRSSImporter *importer = (iTunesRSSImporter *)parsingContext; // The second parameter to strncmp is the name of the element, which we known from the XML schema of the feed. // The third parameter to strncmp is the number of characters in the element name, plus 1 for the null terminator. if (prefix == NULL && !strncmp((const char *)localname, kName_Item, kLength_Item)) { importer.parsingASong = YES; } else if (importer.parsingASong && ( (prefix == NULL && (!strncmp((const char *)localname, kName_Title, kLength_Title) || !strncmp((const char *)localname, kName_Category, kLength_Category))) || ((prefix != NULL && !strncmp((const char *)prefix, kName_Itms, kLength_Itms)) && (!strncmp((const char *)localname, kName_Artist, kLength_Artist) || !strncmp((const char *)localname, kName_Album, kLength_Album) || !strncmp((const char *)localname, kName_ReleaseDate, kLength_ReleaseDate))) )) { importer.storingCharacters = YES; } } /* This callback is invoked when the parse reaches the end of a node. At that point we finish processing that node, if it is of interest to us. 
For "item" nodes, that means we have completed parsing a Song object. We pass the song to a method in the superclass which will eventually deliver it to the delegate. For the other nodes we care about, this means we have all the character data. The next step is to create an NSString using the buffer contents and store that with the current Song object. */ static void endElementSAX(void *parsingContext, const xmlChar *localname, const xmlChar *prefix, const xmlChar *URI) { iTunesRSSImporter *importer = (iTunesRSSImporter *)parsingContext; if (importer.parsingASong == NO) return; if (prefix == NULL) { if (!strncmp((const char *)localname, kName_Item, kLength_Item)) { [importer finishedCurrentSong]; } else if (!strncmp((const char *)localname, kName_Title, kLength_Title)) { importer.currentSong.title = importer.currentString; } else if (!strncmp((const char *)localname, kName_Category, kLength_Category)) { double before = [NSDate timeIntervalSinceReferenceDate]; Category *category = [importer.theCache categoryWithName:importer.currentString]; double delta = [NSDate timeIntervalSinceReferenceDate] - before; lookuptime += delta; importer.currentSong.category = category; } } else if (!strncmp((const char *)prefix, kName_Itms, kLength_Itms)) { if (!strncmp((const char *)localname, kName_Artist, kLength_Artist)) { NSString *string = importer.currentSong.artist; NSArray *strings = [string componentsSeparatedByString: @", "]; //importer.currentSong.artist = importer.currentString; } else if (!strncmp((const char *)localname, kName_Album, kLength_Album)) { importer.currentSong.album = importer.currentString; } else if (!strncmp((const char *)localname, kName_ReleaseDate, kLength_ReleaseDate)) { NSString *dateString = importer.currentString; importer.currentSong.releaseDate = [importer.dateFormatter dateFromString:dateString]; } } importer.storingCharacters = NO; } /* This callback is invoked when the parser encounters character data inside a node. The importer class determines how to use the character data. */ static void charactersFoundSAX(void *parsingContext, const xmlChar *characterArray, int numberOfCharacters) { iTunesRSSImporter *importer = (iTunesRSSImporter *)parsingContext; // A state variable, "storingCharacters", is set when nodes of interest begin and end. // This determines whether character data is handled or ignored. if (importer.storingCharacters == NO) return; [importer appendCharacters:(const char *)characterArray length:numberOfCharacters]; } /* A production application should include robust error handling as part of its parsing implementation. The specifics of how errors are handled depends on the application. */ static void errorEncounteredSAX(void *parsingContext, const char *errorMessage, ...) { // Handle errors as appropriate for your application. NSCAssert(NO, @"Unhandled error encountered during SAX parse."); } // The handler struct has positions for a large number of callback functions. If NULL is supplied at a given position, // that callback functionality won't be used. Refer to libxml documentation at http://www.xmlsoft.org for more information // about the SAX callbacks. 
static xmlSAXHandler simpleSAXHandlerStruct = { NULL, /* internalSubset */ NULL, /* isStandalone */ NULL, /* hasInternalSubset */ NULL, /* hasExternalSubset */ NULL, /* resolveEntity */ NULL, /* getEntity */ NULL, /* entityDecl */ NULL, /* notationDecl */ NULL, /* attributeDecl */ NULL, /* elementDecl */ NULL, /* unparsedEntityDecl */ NULL, /* setDocumentLocator */ NULL, /* startDocument */ NULL, /* endDocument */ NULL, /* startElement*/ NULL, /* endElement */ NULL, /* reference */ charactersFoundSAX, /* characters */ NULL, /* ignorableWhitespace */ NULL, /* processingInstruction */ NULL, /* comment */ NULL, /* warning */ errorEncounteredSAX, /* error */ NULL, /* fatalError //: unused error() get all the errors */ NULL, /* getParameterEntity */ NULL, /* cdataBlock */ NULL, /* externalSubset */ XML_SAX2_MAGIC, // NULL, startElementSAX, /* startElementNs */ endElementSAX, /* endElementNs */ NULL, /* serror */ }; Thanks.

    Read the article

  • Using jQuery and SPServices to Display List Items

    - by Bil Simser
    I had an interesting challenge recently that I turned to Marc Anderson’s wonderful SPServices project for. If you haven’t already seen or used SPServices, please do. It’s a jQuery library that does primarily two things. First, it wraps up all of the SharePoint web services in a nice little AJAX wrapper for use in JavaScript. Second, it enhances the form editing of items in SharePoint so you’re not hacking up your List Form pages. My challenge was simple but interesting. The user wanted to display a SharePoint item page (DispForm.aspx, which already had some customization on it to display related items via this blog post from Codeless Solutions for SharePoint) but launch from an external application using the value of one of the fields in the SharePoint list. For simplicity let’s say my list is a list of customers and the related list is a list of orders for that customer. It would look something like this (click on the item to see the full image): Your first thought might be, that’s easy! Display the customer information using a DataView Web Part and filter the item using a query string to match the customer number. However there are a few problems with this idea: You’ll need to build a custom page and then attach that related orders view to it. This is a bit of a problem because the solution from Codeless Solutions relies on the Title field on the page to be displayed. On a custom page you would have to recreate all of the elements found on the DispForm.aspx page so the related view would work. The DataView Web Part doesn’t look *exactly* like what the out of the box display form page does. Not a huge problem and can be overcome with some CSS style overrides but still, more work. A DVWP showing a single record doesn’t have the same toolbar that you would using the DispForm.aspx. Not a show-stopper and you can rebuild the toolbar but it’s going to potentially require code and then there’s the security trimming, etc. that you have to get right. DVWPs are not automatically updated if you add a column to the list like DispForm.aspx is. Work, work, work. For these reasons I thought it would be easier to take the already existing (modified) DispForm.aspx page and just add some jQuery magic to the page to find the item. Why do we need to find it? DispForm.aspx relies on a querystring parameter called “ID” which then displays whatever that item ID number is in the list. Trouble is, when you’re coming in from an external app via a link, you don’t know what that internal ID is (and frankly shouldn’t). I don’t like exposing internal SharePoint IDs to the outside world for the same reason I don’t do it with database IDs. They’re internal and while it’s find to use on the site itself you don’t want external links using it. It’s volatile and can change (delete one item then re-add it back with the same data and watch any ID references break). The next thought might be to call a SharePoint web service with a CAML query to get the item ID number using some criteria (in this case, the customer number). That’s great if you have that ability but again we had an existing application we were just adding a link to. The last thing I wanted to do was to crack open the code on that sucker and start calling web services (primarily because it’s Java, but really I’m a lazy geek). However if you’re doing this and have access to call a web service that would be an option. 
Back to this problem, how do I a) find a SharePoint List Item based on some field value other than ID and b) make it low impact so I can just construct a URL to it? That’s where jQuery and SPServices came to the rescue. After spending a few hours of emails back and forth with Marc and a couple of phone calls (and updating jQuery to the latest version, duh!) it was a simple answer. First we need a reference to a) jQuery b) SPServices and c) our script. I just dropped a Content Editor Web Part, the Swiss Army Knives of Web Parts, onto the DispForm.aspx page and added these lines: <script type="text/javascript" src="http://intranet/JavaScript/jquery-1.4.2.min.js"></script> <script type="text/javascript" src="http://intranet/JavaScript/jquery.SPServices-0.5.3.min.js"></script> <script type="text/javascript" src="http://intranet/JavaScript/RedirectToID.js"> </script> Update it to point to where you keep your scripts located. I prefer to keep them all in Document Libraries as I can make changes to them without having to remote into the server (and on a multiple web front end, that’s just a PITA), it provides me with version control of sorts, and it’s quick to add new plugins and scripts. Now we can look at our RedirectToID.js script. This invokes the SPServices Library to call the GetListItems method of the Lists web service and then rewrites the URL to DispForm.aspx to use the correct SharePoint ID (the internal one). $(document).ready(function(){ var queryStringValues = $().SPServices.SPGetQueryString(); var id = queryStringValues["ID"]; if(id == "0") { var customer = queryStringValues["CustomerNumber"]; var query = "<Query><Where><Eq><FieldRef Name='CustomerNumber'/><Value Type='Text'>" + customer + "</Value></Eq></Where></Query>"; var url = window.location; $().SPServices({ operation: "GetListItems", listName: "Customers", async: false, CAMLQuery: query, completefunc: function (xData, Status) { $(xData.responseXML).find("[nodeName=z:row]").each(function(){ id = $(this).attr("ows_ID"); url = $().SPServices.SPGetCurrentSite() + "/Lists/Customers/DispForm.aspx?ID=" + id; window.location = url; }); } }); } }); What’s happening here? Line 3: We call SPServices.SPGetQueryString to get an array of query string values (a handy function in the library as I had 15 lines of code to do this which is now gone). Line 4: Extract the ID value from the query string Line 6: If we pass in “0” it means we’re looking up a field value. This allows DispForm.aspx to work like normal with SharePoint lists but lookup our values when invoked. Why ID at all? DispForm.aspx doesn’t work unless you pass in something and “0” is a *magic* number that will invoke the page but not lookup a value in the database. Line 8-15: Extract the CustomerNumber query string value, build a CAML query to find it then call the GetListitems method using SPServices Line 16: Process the results in our completefunc to iterate over all the rows (there should only be one) and extract the real ID of the item Line 17-20: Build a new URL based on the site (using a call to SPGetCurrentSite) and append our real ID to redirect to the DispForm.aspx page As you can see, it dynamically creates a CAML query for the call to the web service using the passed in value. You could even make this generic to take in different query strings, one for the field name to search for and the other for the value to find. That way it could be used for any field you want. 
For example you could bring up the correct item on the DispForm.aspx page based on customer name with something like this: http://myserver/Lists/Customers/DispForm.aspx?ID=0&FilterId=CustomerName&FilterValue=Sony Use your imagination. Some people would opt for building a custom page with a DVWP but if you want to leverage all the functionality of DispForm.aspx this might come in handy if you don’t want to rely on internal SharePoint IDs.
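    A minimal sketch of that generic variant is shown below; it reuses the same SPServices calls as RedirectToID.js and the FilterId/FilterValue query string parameters from the example URL above. The list name is still hard-coded to "Customers" and the filter column is assumed to be a Text field, so treat this as a starting point rather than a drop-in replacement.
    $(document).ready(function () {
        var queryStringValues = $().SPServices.SPGetQueryString();
        var id = queryStringValues["ID"];
        if (id == "0") {
            // The column to filter on and the value to find both come from the query string.
            var filterField = queryStringValues["FilterId"];
            var filterValue = queryStringValues["FilterValue"];
            var query = "<Query><Where><Eq><FieldRef Name='" + filterField + "'/>" +
                        "<Value Type='Text'>" + filterValue + "</Value></Eq></Where></Query>";
            $().SPServices({
                operation: "GetListItems",
                listName: "Customers",
                async: false,
                CAMLQuery: query,
                completefunc: function (xData, Status) {
                    $(xData.responseXML).find("[nodeName=z:row]").each(function () {
                        id = $(this).attr("ows_ID");
                        window.location = $().SPServices.SPGetCurrentSite() +
                            "/Lists/Customers/DispForm.aspx?ID=" + id;
                    });
                }
            });
        }
    });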

    Read the article

  • Extjs Tooltips, IFrames and IE => Problems

    - by Chau
    I have an application using OpenLayers, Extjs and GeoExt. My application runs fine, but I need it to be placed inside an IFrame in another page. When doing this, my toolbar becomes responseless in Internet Explorer. The cause is Ext.QuickTips.init();. Comment out this line and everything works fine - except the quick tips ofcourse =) But why is it causing problems? Is it because I'm using it wrong, placing it wrong or just because it doesn't like IE and IFrames? Link: Link to the IFrame page IFrame page: <!DOCTYPE HTML PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html> <body> <iframe height="660" src="http://www.gis34.dk/doctype.html" width="660"> <p>Din browser understøtter ikke <i>frames</i>.</p> </iframe> </body> </html> Application page: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <script type="text/javascript" language="javascript"> var map; var mapPanel; var mainViewport; var toolbarItems = []; </script> <link href="/Libraries/Ext/resources/css/ext-all.css" type="text/css" rel="stylesheet" /> <link href="/Libraries/GeoExt/resources/css/geoext-all-debug.css" type="text/css" rel="stylesheet" /> <link href="/CSS/Extjs.css" type="text/css" rel="stylesheet" /> <link href="/CSS/OpenLayers.css" type="text/css" rel="stylesheet" /> <link href="/CSS/Poseidon.css" type="text/css" rel="stylesheet" /> </head> <body> <script src="/Libraries/OpenLayers/lib/OpenLayers.js" type="text/javascript"></script> <script src="/Libraries/Ext/adapter/ext/ext-base-debug.js" type="text/javascript"></script> <script src="/Libraries/Ext/ext-all-debug.js" type="text/javascript"></script> <script src="/Libraries/GeoExt/lib/GeoExt.js" type="text/javascript"></script> <script src="http://www.openstreetmap.org/openlayers/OpenStreetMap.js" type="text/javascript"></script> <div id="map"></div> <script type="text/javascript"> Ext.onReady(function() { Ext.QuickTips.init(); Ext.BLANK_IMAGE_URL = '/Libraries/Ext/resources/images/default/s.gif'; var layer = new OpenLayers.Layer.OSM.Mapnik( 'OpenStreetMap Mapnik', { sphericalMercator: true }, { isBaseLayer: true } ); var mapOptions = { projection: 'EPSG:900913', units: 'm', maxExtent: new OpenLayers.Bounds(1390414.0280576,7490505.7050394,1406198.2743956,7501990.3685372), minResolution: '0.125', maxResolution: '1000', restrictedExtent: new OpenLayers.Bounds(1390414.0280576,7490505.7050394,1406198.2743956,7501990.3685372), controls: [ ] }; map = new OpenLayers.Map('', mapOptions); var Navigation = new OpenLayers.Control.Navigation(); action = new GeoExt.Action( { control: new OpenLayers.Control.ZoomBox({out:false}), map: map, tooltip: "Zoom ind", iconCls: 'icon-zoom-in', toggleGroup: 'mapTools', group: 'mapTools' }); toolbarItems.push(action); action = new GeoExt.Action( { control: new OpenLayers.Control.ZoomBox({out:true}), map: map, tooltip: "Zoom ud", iconCls: 'icon-zoom-out', toggleGroup: 'mapTools', group: 'mapTools' }); toolbarItems.push(action); action = new GeoExt.Action({ control: new OpenLayers.Control.ZoomToMaxExtent(), map: map, iconCls: 'icon-zoom-max-extent', tooltip: 'Zoom helt ud' }); toolbarItems.push(action); map.addControl(Navigation); map.addLayer(layer); mapPanel = new GeoExt.MapPanel( { border: true, id: 'mapPanel', region: "center", map: map, tbar: toolbarItems }); mainViewport = new 
Ext.Viewport( { layout: "fit", hideBorders: true, items: { layout: "border", deferredRender: false, items: [ mapPanel ] } }); }); </script> </body> </html>

    Read the article

  • jqGrid - Problems opening in jquery tabs (on Firefox and Google Chrome)

    - by Ben Hargreaves
    I have developed a very simple MVC app to test out trirand's jqGrid for MVC. The app opens a jqgrid in a jquery tab group and everything is ok with IE. However when I use Firefox jqgrid only opens occasionaly in the first tab (but not under any other tab), and in Chrome my jqgrids dont appear to open under any tab of the group. I'm a bit of an MVC newbie (and have only been testing jqgrid out for a few days), but I know my users will want to use different browsers. Trirand have not come back with any answer so wondered if anyone else had had a similar issue. I have really just implemented jqgrid as per the controllers and model in the sample application on the Trirand site, and then combined it with a straightforward jquery tab group. My MVC Details Page is as follows; <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<PRAMSAPP.Models.Family>" %> <%@ Import Namespace="Trirand.Web.Mvc" %> <%@ Import Namespace="PRAMSAPP.Controllers" %> <%@ Import Namespace="PRAMSAPP.Models" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Details </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <link rel="stylesheet" type="text/css" href="/scripts/jquery-ui-1.7.2.custom.css" /> <script type="text/javascript" src="/scripts/jquery-1.3.2.min.js"></script> <script type="text/javascript" src="/scripts/jquery-ui-1.7.2.custom.min.js"></script> <fieldset> <legend>Family</legend> <div class="display-field"><%= Html.Encode(Model.FamilyID) %></div> <div class="display-field"><%= Html.Encode(Model.FamilySurname) %></div> </fieldset> <div id="tabs"> <ul> <li> <%= Html.ActionLink("GridChildren", "GridDemo", new { controller = "Grid", id = Model.FamilyID })%> </li> <li> <%= Html.ActionLink("Children", "ShowFamiliesChildren", new { famid = Model.FamilyID, page = Page})%> </li> </ul> </div> <p> <%= Html.ActionLink("Edit", "Edit", new { id=Model.FamilyID }) %> | <%= Html.ActionLink("Back to List", "Index") %> </p> <script type="text/javascript"> $(function() { $('#tabs').tabs(); }); </script> </asp:Content> And My Controller page is as follows; <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<PRAMSAPP.Models.FamiliesChildrenJqGridModel>" %> <%@ Import Namespace="Trirand.Web.Mvc" %> <%@ Import Namespace="PRAMSAPP.Controllers" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head id="Head1" runat="server"> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <!-- The jQuery UI theme that will be used by the grid --> <link rel="stylesheet" type="text/css" media="screen" href="/Content/themes/redmond/jquery-ui-1.7.1.custom.css" /> <!-- The Css UI theme extension of jqGrid --> <link rel="stylesheet" type="text/css" media="screen" href="/Content/themes/ui.jqgrid.css" /> <!-- jQuery library is a prerequisite for jqGrid --> <script type="text/javascript" src="/Scripts/jquery-1.3.2.min.js"></script> <!-- language pack - MUST be included before the jqGrid javascript --> <script type="text/javascript" src="/Scripts/grid.locale-en.js"></script> <script type="text/javascript" src="/Scripts/jqgrid/jquery.jqGrid.min.js"></script> </head> <body> <div> <%= Html.Trirand().JQGrid(Model.FamiliesChildrenGrid, "JQGrid1") %> </div> </body>

    Read the article

  • jQuery Templates - XHTML Validation

    - by hajan
    Many developers have already asked me about this. How to make XHTML valid the web page which uses jQuery Templates. Maybe you have already tried, and I don't know what are your results but here is my opinion regarding this. By default, Visual Studio.NET adds the xhtml1-transitional.dtd schema <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> So, if you try to validate your page which has jQuery Templates against this schema, your page won't be XHTML valid. Why? It's because when creating templates, we use HTML tags inside <script> ... </script> block. Yes, I know that the script block has type="text/html" but it's not supported in this schema, thus it's not valid. Let's try validate the following code Code <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head>     <title>jQuery Templates :: XHTML Validation</title>     <script src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.min.js" type="text/javascript"></script>     <script src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js" type="text/javascript"></script>          <script language="javascript" type="text/javascript">         $(function () {             var attendees = [                 { Name: "Hajan", Surname: "Selmani", speaker: true, phones: [070555555, 071888999, 071222333] },                 { Name: "Denis", Surname: "Manski", phones: [070555555, 071222333] }             ];             $("#myTemplate").tmpl(attendees).appendTo("#attendeesList");         });     </script>     <script id="myTemplate" type="text/html">          <li>             ${Name} ${Surname}             {{if speaker}}                 (<font color="red">speaks</font>)             {{else}}                 (attendee)             {{/if}}         </li>     </script>      </head>     <body>     <ol id="attendeesList"></ol> </body> </html> To validate it, go to http://validator.w3.org/#validate_by_input and copy paste the code rendered on client-side browser (it’s almost the same, only the template is rendered inside OL so LI tags are created for each item). Press CHECK and you will get: Result: 1 Errors, 2 warning(s)  The error message says: Validation Output: 1 Error Line 21, Column 13: document type does not allow element "li" here <li> Yes, the <li> HTML element is not allowed inside the <script>, so how to make it valid? FIRST: Using <![CDATA][…]]> The first thing that came in my mind was the CDATA. So, by wrapping any HTML tag which is in script blog, inside <![CDATA[ ........ ]]> it will make our code valid. However, the problem is that the template won't render since the template tags {} cannot get evaluated if they are inside CDATA. Ok, lets try with another approach. SECOND: HTML5 validation Well, if we just remove the strikethrough part bellow of the !DOPCTYPE <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> our template is going to be checked as HTML5 and will be valid. Ok, there is another approach I've also tried: THIRD: Separate template to an external file We can separate the template to external file. I didn’t show how to do this previously, so here is the example. 1. Add HTML file with name Template.html in your ASPX website. 2. 
Place your defined template there without <script> tag Content inside Template.html <li>     ${Name} ${Surname}     {{if speaker}}         (<font color="red">speaks</font>)     {{else}}         (attendee)     {{/if}} </li> 3. Call the HTML file using $.get() jQuery ajax method and render the template with data using $.tmpl() function. $.get("/Templates/Template.html", function (template) {     $.tmpl(template, attendees).appendTo("#attendeesList"); }); So the complete code is: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head>     <title>jQuery Templates :: XHTML Validation</title>     <script src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.min.js" type="text/javascript"></script>     <script src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js" type="text/javascript"></script>          <script language="javascript" type="text/javascript">         $(function () {             var attendees = [                 { Name: "Hajan", Surname: "Selmani", speaker: true, phones: [070555555, 071888999, 071222333] },                 { Name: "Denis", Surname: "Manski", phones: [070555555, 071222333] }             ];             $.get("/Templates/Template.html", function (template) {                 $.tmpl(template, attendees).appendTo("#attendeesList");             });         });     </script>      </head>     <body>     <ol id="attendeesList"></ol> </body> </html> This document was successfully checked as XHTML 1.0 Transitional! Result: Passed If you have any additional methods for XHTML validation, you can share it :). Thanks,Hajan
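    One practical refinement of this third approach, sketched below on the assumption that the same template may be rendered more than once per page, is to cache the markup returned by $.get() so the file is only fetched once. The attendeeTemplate variable and renderAttendees function are hypothetical names; $.get() and $.tmpl() are used exactly as above.
    var attendeeTemplate = null; // hypothetical cache for the fetched template markup
    function renderAttendees(attendees) {
        if (attendeeTemplate !== null) {
            // Template already fetched earlier, render immediately.
            $.tmpl(attendeeTemplate, attendees).appendTo("#attendeesList");
            return;
        }
        $.get("/Templates/Template.html", function (template) {
            attendeeTemplate = template; // cache for later calls
            $.tmpl(template, attendees).appendTo("#attendeesList");
        });
    }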

    Read the article

  • How-to filter table filter input to only allow numeric input

    - by frank.nimphius
    In a previous ADF Code Corner post, I explained how to change the table filter behavior by intercepting the query condition in a query filter. See sample #30 at http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html In this OTN Harvest post I explain how to prevent users from providing invalid character entries as table filter criteria to avoid problems upon re-querying the table. In the example shown next, only numeric values are allowed for a table column filter. To create a table that allows data filtering, drag a View Object – or a data collection of a Web Service or JPA business service – from the DataControls panel and drop it as a table. Choose the Enable Filtering option in the Edit Table Columns dialog so the table renders with the column filter boxes displayed. The table filter fields are created using implicit af:inputText components that need to be customized for you to apply a custom filter input component, or to change the input behavior. To change the input filter, so only a defined set of input keys is allowed, you need to change the default filter field with your own af:inputText field to which you apply an af:clientListener tag that filters user keyboard entries. For this, in the Oracle JDeveloper visual editor, select the column which filter you want to change and expand the column node in the Oracle JDeveloper Structure Window. Part of the column definition is the Column facet node. Expand the facets so you see the filter facet entry. The filter facet is grayed out as there is no custom facet defined. In a next step, open theComponent Palette (ctrl+shift+P) and drag an Input Text component onto the facet. This demarks the first part in the filter customization. To make the custom filter component work, you need to map the af:inputText component value property to the ADF filter criteria that is exposed in the Expression Builder. Open the Expression Builder for the filter input component value property by clicking the arrow icon to its right. In the Expression Builder expand the JSP Objects | vs | filterCriteria node to select the attribute name represented by the table column. The vs entry is the name of a variable that is defined on the table and that grants you access to the table attributes. Now that the filter works as before – though using a custom filter input component – you can add the af:clientListener tag to your custom filter component – af:inputText – to call out to JavaScript when users type in the column filter field Point the client filter method property to a JavaScript function that you reference or add through using the af:resource tag and set the type property value to keyDown. <af:document id="d1">     <af:resource type="javascript" source="/js/filterHandler.js"/> … The filter definition looks as shown below <af:inputText label="Label 1" id="it1"                         value="#{vs.filterCriteria.Employe        <af:clientListener method="suppressCharacterInput"                                     type="keyDown"/> </af:inputText> The JavaScript code that you can use to either filter character inputs or numeric inputs is shown below. Just store this code in an external JavaScript (.js) file and reference it from the af:resource tag. 
//Allow numbers, cursor control keys and delete keys
function suppressCharacterInput(evt) {
    var _keyCode = evt.getKeyCode();
    var _filterField = evt.getCurrentTarget();
    var _oldValue = _filterField.getValue();
    if (!((_keyCode < 57) || (_keyCode > 96 && _keyCode < 105))) {
        _filterField.setValue(_oldValue);
        evt.cancel();
    }
}
//Allow characters, cursor control keys and delete keys
function suppressNumericInput(evt) {
    var _keyCode = evt.getKeyCode();
    var _filterField = evt.getCurrentTarget();
    var _oldValue = _filterField.getValue();
    //check for numbers
    if ((_keyCode < 57 && _keyCode > 47) ||
        (_keyCode > 96 && _keyCode < 105)) {
        _filterField.setValue(_oldValue);
        evt.cancel();
    }
}
But what if browsers don't allow JavaScript? Don't worry about this: if a browser did not support JavaScript, ADF Faces as a whole would not work and you would have a different problem.
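A note for anyone adapting the numeric filter above, plus a hedged variation: JavaScript key codes for the top-row digits are 48-57 and for the numeric keypad 96-105, so range checks that stop before 57 or 105 also swallow the '9' keys. The sketch below is an alternative whitelist written against the same ADF client event API the functions above already use (getKeyCode, getCurrentTarget, getValue, setValue, cancel); the function name allowNumericFilterInput is mine, not part of the original sample, and would be referenced from the af:clientListener method attribute in place of suppressCharacterInput.
// Sketch: explicitly whitelist digits, numpad digits, and common edit/navigation keys
function allowNumericFilterInput(evt) {
    var _keyCode = evt.getKeyCode();
    var _filterField = evt.getCurrentTarget();
    var _oldValue = _filterField.getValue();
    var _isDigit = (_keyCode >= 48 && _keyCode <= 57) ||      // 0-9
                   (_keyCode >= 96 && _keyCode <= 105);       // numpad 0-9
    var _isEditKey = (_keyCode == 8) || (_keyCode == 9) ||    // backspace, tab
                     (_keyCode == 13) || (_keyCode == 46) ||  // enter, delete
                     (_keyCode >= 37 && _keyCode <= 40);      // arrow keys
    if (!(_isDigit || _isEditKey)) {
        _filterField.setValue(_oldValue);
        evt.cancel();
    }
}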

    Read the article

  • Wordpress nav not visible in pages like articles, blog & search

    - by kwek-kwek
    My wordpress*(a custom template)* nav is all working on all of the pages but now I found out that the Main nav doesn't show on this pages All pages e.g. search.php, single.php, index.php, page.php all has <?php get_header(); ?> I really don't know whats wrong. Here is the code for my header.php <?php /** * @package WordPress * @subpackage Default_Theme */ ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" <?php language_attributes() ?>> <head> <meta http-equiv="Content-Type" content="<?php bloginfo('html_type'); ?>; charset=<?php bloginfo('charset'); ?>" /> <title><?php bloginfo('name'); ?> <?php wp_title(); ?></title> <link rel="stylesheet" href="<?php bloginfo('stylesheet_url'); ?>" type="text/css" media="screen,projection" /> <link rel="stylesheet" href="<?php bloginfo('template_url'); ?>/css/sifr.css" type="text/css" /> <script src="<?php bloginfo('template_url'); ?>/js/sifr.js" type="text/javascript"></script> <script src="<?php bloginfo('template_url'); ?>/js/sifr-config.js" type="text/javascript"></script> <script src="http://cdn.jquerytools.org/1.1.2/jquery.tools.min.js"></script> <?php wp_head(); ?> </head> <?php $current_page = $post->ID; $parent = 1; while($parent) { $page_query = $wpdb->get_row("SELECT post_name, post_parent FROM $wpdb->posts WHERE ID = '$current_page'"); $parent = $current_page = $page_query->post_parent; if(!$parent) $parent_name = $page_query->post_name; } ?> <body id="<?php echo (is_page()) ? "$parent_name" : ((is_home()) ? "blog" : ((is_search()) ? "other" : ((is_single()) ? "blog" : "blog"))); ?>"> <div id="BGtie"> <!--HEAD WRAPPER--> <div id="headwrapper"> <!--HEADER--> <div id="headContainer"> <div id="nameTag"> <a href="<?php echo get_option('home'); ?>/"><?php bloginfo('name'); ?></a> </div> <!--TOP NAV--> <div id="topNav"> <ul> <li><a href="<?php bloginfo('url'); ?>home">Home</a></li> <li><a href="#">Request info</a></li> <li><a href="#">Contact us</a></li> <?php do_action('icl_language_selector'); ?> </ul> </div> <!--END TOP NAV--> <!--MAIN NAV--> <?php if ( is_page() AND (strtolower(ICL_LANGUAGE_CODE) == 'fr') ) {include("main-nav-fr.php");} ?> <?php if (is_page() AND (strtolower(ICL_LANGUAGE_CODE) == 'en')) include("main-nav-en.php") ?> <!--END MAIN NAV--> </div> <!--END HEADER--> </div> <!--END HEAD WRAPPER--> </div>

    Read the article

  • Nashorn, the rhino in the room

    - by costlow
    Nashorn is a new runtime within JDK 8 that allows developers to run code written in JavaScript and call back and forth with Java. One advantage to the Nashorn scripting engine is that is allows for quick prototyping of functionality or basic shell scripts that use Java libraries. The previous JavaScript runtime, named Rhino, was introduced in JDK 6 (released 2006, end of public updates Feb 2013). Keeping tradition amongst the global developer community, "Nashorn" is the German word for rhino. The Java platform and runtime is an intentional home to many languages beyond the Java language itself. OpenJDK’s Da Vinci Machine helps coordinate work amongst language developers and tool designers and has helped different languages by introducing the Invoke Dynamic instruction in Java 7 (2011), which resulted in two major benefits: speeding up execution of dynamic code, and providing the groundwork for Java 8’s lambda executions. Many of these improvements are discussed at the JVM Language Summit, where language and tool designers get together to discuss experiences and issues related to building these complex components. There are a number of benefits to running JavaScript applications on JDK 8’s Nashorn technology beyond writing scripts quickly: Interoperability with Java and JavaScript libraries. Scripts do not need to be compiled. Fast execution and multi-threading of JavaScript running in Java’s JRE. The ability to remotely debug applications using an IDE like NetBeans, Eclipse, or IntelliJ (instructions on the Nashorn blog). Automatic integration with Java monitoring tools, such as performance, health, and SIEM. In the remainder of this blog post, I will explain how to use Nashorn and the benefit from those features. Nashorn execution environment The Nashorn scripting engine is included in all versions of Java SE 8, both the JDK and the JRE. Unlike Java code, scripts written in nashorn are interpreted and do not need to be compiled before execution. Developers and users can access it in two ways: Users running JavaScript applications can call the binary directly:jre8/bin/jjs This mechanism can also be used in shell scripts by specifying a shebang like #!/usr/bin/jjs Developers can use the API and obtain a ScriptEngine through:ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn"); When using a ScriptEngine, please understand that they execute code. Avoid running untrusted scripts or passing in untrusted/unvalidated inputs. During compilation, consider isolating access to the ScriptEngine and using Type Annotations to only allow @Untainted String arguments. One noteworthy difference between JavaScript executed in or outside of a web browser is that certain objects will not be available. For example when run outside a browser, there is no access to a document object or DOM tree. Other than that, all syntax, semantics, and capabilities are present. Examples of Java and JavaScript The Nashorn script engine allows developers of all experience levels the ability to write and run code that takes advantage of both languages. The specific dialect is ECMAScript 5.1 as identified by the User Guide and its standards definition through ECMA international. In addition to the example below, Benjamin Winterberg has a very well written Java 8 Nashorn Tutorial that provides a large number of code samples in both languages. 
Basic Operations A basic Hello World application written to run on Nashorn would look like this: #!/usr/bin/jjs print("Hello World"); The first line is a standard script indication, so that Linux or Unix systems can run the script through Nashorn. On Windows where scripts are not as common, you would run the script like: jjs helloWorld.js. Receiving Arguments In order to receive program arguments your jjs invocation needs to use the -scripting flag and a double-dash to separate which arguments are for jjs and which are for the script itself:jjs -scripting print.js -- "This will print" #!/usr/bin/jjs var whatYouSaid = $ARG.length==0 ? "You did not say anything" : $ARG[0] print(whatYouSaid); Interoperability with Java libraries (including 3rd party dependencies) Another goal of Nashorn was to allow for quick scriptable prototypes, allowing access into Java types and any libraries. Resources operate in the context of the script (either in-line with the script or as separate threads) so if you open network sockets and your script terminates, those sockets will be released and available for your next run. Your code can access Java types the same as regular Java classes. The “import statements” are written somewhat differently to accommodate for language. There is a choice of two styles: For standard classes, just name the class: var ServerSocket = java.net.ServerSocket For arrays or other items, use Java.type: var ByteArray = Java.type("byte[]")You could technically do this for all. The same technique will allow your script to use Java types from any library or 3rd party component and quickly prototype items. Building a user interface One major difference between JavaScript inside and outside of a web browser is the availability of a DOM object for rendering views. When run outside of the browser, JavaScript has full control to construct the entire user interface with pre-fabricated UI controls, charts, or components. The example below is a variation from the Nashorn and JavaFX guide to show how items work together. Nashorn has a -fx flag to make the user interface components available. With the example script below, just specify: jjs -fx -scripting fx.js -- "My title" #!/usr/bin/jjs -fx var Button = javafx.scene.control.Button; var StackPane = javafx.scene.layout.StackPane; var Scene = javafx.scene.Scene; var clickCounter=0; $STAGE.title = $ARG.length>0 ? $ARG[0] : "You didn't provide a title"; var button = new Button(); button.text = "Say 'Hello World'"; button.onAction = myFunctionForButtonClicking; var root = new StackPane(); root.children.add(button); $STAGE.scene = new Scene(root, 300, 250); $STAGE.show(); function myFunctionForButtonClicking(){   var text = "Click Counter: " + clickCounter;   button.setText(text);   clickCounter++;   print(text); } For a more advanced post on using Nashorn to build a high-performing UI, see JavaFX with Nashorn Canvas example. Interoperable with frameworks like Node, Backbone, or Facebook React The major benefit of any language is the interoperability gained by people and systems that can read, write, and use it for interactions. Because Nashorn is built for the ECMAScript specification, developers familiar with JavaScript frameworks can write their code and then have system administrators deploy and monitor the applications the same as any other Java application. A number of projects are also running Node applications on Nashorn through Project Avatar and the supported modules. 
In addition to the previously mentioned Nashorn tutorial, Benjamin has also written a post about Using Backbone.js with Nashorn. To show the multi-language power of the Java Runtime, there is another interesting example that unites Facebook React and Clojure on JDK 8’s Nashorn. Summary Nashorn provides a simple and fast way of executing JavaScript applications and bridging between the best of each language. By making the full range of Java libraries available to JavaScript applications, and bringing the quick prototyping style of JavaScript to Java applications, developers are free to work as they see fit. Software Architects and System Administrators can take advantage of one runtime and leverage any work that they have done to tune, monitor, and certify their systems. Additional information is available within: The Nashorn Users’ Guide Java Magazine’s article "Next Generation JavaScript Engine for the JVM." The Nashorn team’s primary blog or a very helpful collection of Nashorn links.
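To make the interop described above concrete, here is one more small script in the same spirit as the earlier examples - a sketch of my own rather than anything from the Nashorn documentation (the file name listdir.js and the "." default are invented). Run it as: jjs -scripting listdir.js -- /some/directory. It uses the property-style access to Java packages shown earlier to call java.nio from JavaScript.
#!/usr/bin/jjs -scripting
// List the entries of a directory by calling Java's NIO API from Nashorn.
var Paths = java.nio.file.Paths;
var Files = java.nio.file.Files;

var dir = $ARG.length > 0 ? $ARG[0] : ".";           // directory passed after the -- separator
var stream = Files.newDirectoryStream(Paths.get(dir));
try {
    // Nashorn's for-each extension iterates Java Iterables directly
    for each (var entry in stream) {
        print(entry.getFileName());
    }
} finally {
    stream.close();                                  // release the handle, as noted for sockets above
}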

    Read the article

  • Can't update textbox in TinyMCE

    - by Michael Tot Korsgaard
    I'm using TinyMCE, the text area is replaced with a TextBox, but when I try to update the database with the new text from my textbox, it wont update. Can anyone help me? My code looks like this <%@ Page Title="" Language="C#" MasterPageFile="~/Main.Master" AutoEventWireup="true" CodeBehind="default.aspx.cs" Inherits="Test_TinyMCE._default" ValidateRequest="false" %> <asp:Content ID="Content1" ContentPlaceHolderID="head" runat="server"> <script src="JavaScript/tiny_mce/tiny_mce.js" type="text/javascript"></script> <script type="text/javascript"> tinyMCE.init({ // General options mode: "textareas", theme: "advanced", plugins: "pagebreak,style,layer,table,save,advhr,advimage,advlink,emotions,iespell,insertdatetime,pre view,media,searchreplace,print,contextmenu,paste,directionality,fullscreen,noneditable,visualchars,no nbreaking,xhtmlxtras,template,wordcount,advlist,autosave", // Theme options theme_advanced_buttons1: "save,newdocument,|,bold,italic,underline,strikethrough,|,justifyleft,justifycenter,justifyright,justifyfull,styleselect,formatselect,fontselect,fontsizeselect", theme_advanced_buttons2: "cut,copy,paste,pastetext,pasteword,|,search,replace,|,bullist,numlist,|,outdent,indent,blockquote,|,undo,redo,|,link,unlink,anchor,image,cleanup,help,code,|,insertdate,inserttime,preview,|,forecolor,backcolor", theme_advanced_buttons3: "tablecontrols,|,hr,removeformat,visualaid,|,sub,sup,|,charmap,emotions,iespell,media,advhr,|,print,|,ltr,rtl,|,fullscreen", theme_advanced_buttons4: "insertlayer,moveforward,movebackward,absolute,|,styleprops,|,cite,abbr,acronym,del,ins,attribs,|,visualchars,nonbreaking,template,pagebreak,restoredraft", theme_advanced_toolbar_location: "top", theme_advanced_toolbar_align: "left", theme_advanced_statusbar_location: "bottom", theme_advanced_resizing: true, // Example content CSS (should be your site CSS) // using false to ensure that the default browser settings are used for best Accessibility // ACCESSIBILITY SETTINGS content_css: false, // Use browser preferred colors for dialogs. 
browser_preferred_colors: true, detect_highcontrast: true, // Drop lists for link/image/media/template dialogs template_external_list_url: "lists/template_list.js", external_link_list_url: "lists/link_list.js", external_image_list_url: "lists/image_list.js", media_external_list_url: "lists/media_list.js", // Style formats style_formats: [ { title: 'Bold text', inline: 'b' }, { title: 'Red text', inline: 'span', styles: { color: '#ff0000'} }, { title: 'Red header', block: 'h1', styles: { color: '#ff0000'} }, { title: 'Example 1', inline: 'span', classes: 'example1' }, { title: 'Example 2', inline: 'span', classes: 'example2' }, { title: 'Table styles' }, { title: 'Table row 1', selector: 'tr', classes: 'tablerow1' } ], // Replace values for the template plugin template_replace_values: { username: "Some User", staffid: "991234" } }); </script> </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" runat="server"> <div> <asp:TextBox ID="TextBox1" runat="server" TextMode="MultiLine"></asp:TextBox> <br /> <asp:LinkButton ID="LinkButton1" runat="server" onclick="LinkButton1_Click">Update</asp:LinkButton> </div> </asp:Content> My codebhind looks like this using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; namespace Test_TinyMCE { public partial class _default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { TextBox1.Text = Database.GetFirst().Text; } protected void LinkButton1_Click(object sender, EventArgs e) { Database.Update(Database.GetFirst().ID, TextBox1.Text); TextBox1.Text = Database.GetFirst().Text; } } } And finally the "Database" class im using looks like this using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Configuration; using System.Data.SqlClient; namespace Test_TinyMCE { public class Database { public int ID { get; set; } public string Text { get; set; } public static void Update(int ID, string Text) { SqlConnection connection = new SqlConnection(ConfigurationManager.AppSettings["DatabaseConnection"]); connection.Open(); try { SqlCommand command = new SqlCommand("Update Text set Text=@text where ID=@id"); command.Connection = connection; command.Parameters.Add(new SqlParameter("id", ID)); command.Parameters.Add(new SqlParameter("text", Text)); command.ExecuteNonQuery(); } finally { connection.Close(); } } public static Database GetFirst() { SqlConnection connection = new SqlConnection(ConfigurationManager.AppSettings["DatabaseConnection"]); connection.Open(); try { SqlCommand command = new SqlCommand("Select Top 1 ID, Text from Text order by ID asc"); command.Connection = connection; SqlDataReader reader = command.ExecuteReader(); if (reader.Read()) { Database item = new Database(); item.ID = reader.GetInt32(0); item.Text = reader.GetString(1); return item; } else { return null; } } finally { connection.Close(); } } } } I really hope that someone out there can help me
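One hedged pointer rather than a confirmed diagnosis of the code above: TinyMCE edits its content in its own iframe and only copies it back into the original textarea (here the rendered TextBox) when tinyMCE.triggerSave() is called, so a postback can still read the old value on the server. Below is a minimal sketch of forcing that copy before the update link posts back; the element id "LinkButton1" is an assumption, since ASP.NET typically renders a prefixed client id (for example ContentPlaceHolder1_LinkButton1), so adjust the lookup accordingly.
window.onload = function () {
    var updateLink = document.getElementById("LinkButton1");   // assumed rendered client id
    if (updateLink) {
        updateLink.onclick = function () {
            tinyMCE.triggerSave();   // copy the editor content back into the underlying textarea
        };
    }
};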

    Read the article

  • What's New In 11.1.2.1 (Talleyrand SP1)

    - by russ.bishop
    This release is primarily about bug fixes and that's what we spent the most time on, but we also addressed a number of other things: 1. Performance improvements We've done a lot of work to improve the performance of page load and execution times. For example, the View Compare page is about half the size it was previously! We've also done a lot of work on the server to improve performance of queries, exports, action scripts, etc. We implemented some finer-grained locking so fewer operations will block other users while they are in progress. We made some optimizations to improve performance when you have a lot of network or database latency as well. Just a few examples: An Import that previously took 8 GB of memory and hours to complete now runs in about 30 minutes and never takes more than 1 GB of RAM. Searching by exact Node Name now completes within 2 seconds even for a hierarchy with millions of nodes. Another search that was taking 30 seconds to run now completes in less than 5 seconds. 2. Upgrade support This release supports automatic upgrade from previous releases, built right into the console. 3. Console Improvements The Console has been reorganized and made easier to use. It is also much more multi-threaded so it responds quicker without freezing up when you save changes or when it needs to get status. 4. Property Namespaces Properties now have a concept called a Namespace. This is tied into the Application Templates to prevent conflicts with duplicate property names. Right now, if you have an AccountType and you pull in the HFM template, it also has AccountType so you end up creating properties with decorations on the name like "Account Type (HFM)". This is no longer necessary. In addition, properties within a namespace must have unique labels but they can be duplicated across namespaces. So in the Property Grid when you click on the HFM category, you just see "AccountType". When you click on MyCategory, you see "AccountType", but they are different properties with different values. Within formulas, the names are still unique (eg: Custom.AccountType vs HFM.AccountType). I'll write more about this one later. 5. Single Sign On DRM now supports Single Sign-On via HSS. For example, if you are using Oracle's OAM as your SSO solution then you configure HSS to use OAM just like you would before. You also configure DRM to use HSS, again just like before. Then you configure OAM to protect the DRM web app, like you would any other website. However once you do those things, users are no longer prompted to enter their username/password. They simply get redirected to OAM if they don't already have a login token, otherwise they pick their application and sail right into DRM. You can also avoid having to pick an application (see the next item) 6. URL-based navigation You can now specify the application you want to log into via the URL. Combined with SSO and your Intranet, it becomes easy to provide links on our intranet portal that take users directly into a specific DRM application. We also support specifying the Version, Hierarchy, and Node. Again, this can be used on your internal portal, but the scenarios get even more interesting when you are using workflow like Oracle BPEL you can automatically generate links within emails that will take users directly to a specific node in the UI. 7. Job status and cancellation A lot of the jobs now report their status and support true cancellation. 
Action Scripts also report a progress complete percentage since the amount of work is known ahead of time. 8. Action Script Options Action scripts support Option declarations at the top of the file so a script can self-describe (when specified in the file, the corresponding item in the file is ignored). For example:
Option|DetectDelimiter
Option|UsePropertyNames|true
This will tell DRM to automatically detect the delimiter (a pipe symbol in this case) and that all references to properties are by Name, not by Label. Note that when you load a script in the UI, if you use Labels we automatically try to match them up if they are unique. Any duplicates are indicated and you are presented with a choice to pick which property you actually referred to. This is somewhat similar to Version substitution, but tailored for properties. There are other more minor changes and like I said earlier a lot of bug fixes and performance improvements. Hopefully I will get a chance to dig into some of these things in future blog posts.

    Read the article

  • My Codemash 2011 Retrospective

    - by Greg Malcolm
    I just got back from Codemash yesterday, and still on an adrenaline buzz. Here's my take on this years encounter: The Awesome Nearly everybody in one place Codemash is the ultimate place to catch up with community friends. This is my 3rd year visiting and I've got to know a great number of very cool people through various conferences, Give Camps and other community events. I'm finding more and more that Codemash is the best place to catch up with everybody regardless of technology interest or location. Of course I always make a whole bunch more friends while I'm there! Yay! Open Spaced I found the open spaces didn't work so well last year. This year things went a lot smoother and the topics were engaging and fresh. While I miss Alan Steven's approach of running it like an agile project, it was very cool to see that it evolving. Laptops were often cracked open, not just once but frequently! For example: Jasmine - Paired on a javascript kata using the Jasmine javascript test runner J - Sat in on a J demo from local J enthusiast, Tracy Harms Watir - More pairing, this time using Ruby with the watir-webdriver through cucumber. I'd mostly forgotten that Cucumber runs just fine without Rails. It made a change to do without. The other spaces were engaging too, but I think that's enough for that topic. Javascript Shenanigans I've already mentioned that I attended a Jasmine kata session. Jasmine is close to my heart right now every since I discovered it while on the hunt for a decent Javascript testing framework for a javascript koans project earlier this year. Well, it also got covered in the Java Precompiler and Pillar's vendor session, which was great to see. Node.js was also a reoccurring theme. Node.js in a nutshell? It's an extremely scalable Event based I/O server which runs on Javascript. I'd already encountered through a Startup Weekend project and have been noticing increasing interest of late. After encountering more node.js driven excitement from my peers at codemash I absolutely had to attend the open space on it. At least 20 people turned up and by the end we had some answers, a whole ton of new questions and an impromptu user group in the form of a twitter channel (#nodemash). I have no idea where this is going to go or how big it is going to become, but if it can cross the chasm into the enterprise it could become huge... Scala Koans I'm a bit of a Koans addict, and I really need more exposure to functional languages so I gave the Scala Koans precompiler a try. Great fun! I'm really glad I attended because I found I had a whole ton of questions. Currently the koans are available here, and the answers are here. Opportunities While we're on the subject can we change the subject now? Hai Gregory, You really need to keep the drinking for later in the day. I mean seriously, you're 34 and you still do this every single time! Sure, you made it to Chad Fowler keynote ok, but you looking a rather pale weren't you? Also might have been nice to attend 'Netflicks in the Cloud' instead of 'Sleeping It Off For People Who Should Know Better'. Kthxbye PS: Stop talking to yourself Not that I entirely regret it, I've had some of my greatest insights through late night drunken conversations at the CodeMash bar. Just might be nice to reign it in a little and get something out of the next morning too. 
Diversity This is something that is in the back of my mind because of conversations at Codemash as well as throughout the year; I'm realizing more and more how discouraging the IT profession is for women. I notice in the community there has been a lot of attention paid to stamping out harassment, which is good, but there also seems to be a massive PR issue. I really don't have any solutions, but I figure it can't hurt to pay more attention to what's going on... And in Other News I now have a picture of Chad Fowler giving me more cowbell! Sadly I managed to lose the cowbell later on. Hopefully it's gone to a Better Place. The Womack Family Band joined in with the musicians jam this year. There's my cowbell again! Why must you hide from me? I also finally went in the water for the first time in all the years I've been coming to Codemash. Why did I wait so long?!?

    Read the article

  • RiverTrail - JavaScript GPPGU Data Parallelism

    - by JoshReuben
    Where is WebCL ? The Khronos WebCL working group is working on a JavaScript binding to the OpenCL standard so that HTML 5 compliant browsers can host GPGPU web apps – e.g. for image processing or physics for WebGL games - http://www.khronos.org/webcl/ . While Nokia & Samsung have some protype WebCL APIs, Intel has one-upped them with a higher level of abstraction: RiverTrail. Intro to RiverTrail Intel Labs JavaScript RiverTrail provides GPU accelerated SIMD data-parallelism in web applications via a familiar JavaScript programming paradigm. It extends JavaScript with simple deterministic data-parallel constructs that are translated at runtime into a low-level hardware abstraction layer. With its high-level JS API, programmers do not have to learn a new language or explicitly manage threads, orchestrate shared data synchronization or scheduling. It has been proposed as a draft specification to ECMA a (known as ECMA strawman). RiverTrail runs in all popular browsers (except I.E. of course). To get started, download a prebuilt version https://github.com/downloads/RiverTrail/RiverTrail/rivertrail-0.17.xpi , install Intel's OpenCL SDK http://www.intel.com/go/opencl and try out the interactive River Trail shell http://rivertrail.github.com/interactive For a video overview, see  http://www.youtube.com/watch?v=jueg6zB5XaM . ParallelArray the ParallelArray type is the central component of this API & is a JS object that contains ordered collections of scalars – i.e. multidimensional uniform arrays. A shape property describes the dimensionality and size– e.g. a 2D RGBA image will have shape [height, width, 4]. ParallelArrays are immutable & fluent – they are manipulated by invoking methods on them which produce new ParallelArray objects. ParallelArray supports several constructors over arrays, functions & even the canvas. // Create an empty Parallel Array var pa = new ParallelArray(); // pa0 = <>   // Create a ParallelArray out of a nested JS array. // Note that the inner arrays are also ParallelArrays var pa = new ParallelArray([ [0,1], [2,3], [4,5] ]); // pa1 = <<0,1>, <2,3>, <4.5>>   // Create a two-dimensional ParallelArray with shape [3, 2] using the comprehension constructor var pa = new ParallelArray([3, 2], function(iv){return iv[0] * iv[1];}); // pa7 = <<0,0>, <0,1>, <0,2>>   // Create a ParallelArray from canvas.  This creates a PA with shape [w, h, 4], var pa = new ParallelArray(canvas); // pa8 = CanvasPixelArray   ParallelArray exposes fluent API functions that take an elemental JS function for data manipulation: map, combine, scan, filter, and scatter that return a new ParallelArray. Other functions are scalar - reduce  returns a scalar value & get returns the value located at a given index. The onus is on the developer to ensure that the elemental function does not defeat data parallelization optimization (avoid global var manipulation, recursion). For reduce & scan, order is not guaranteed - the onus is on the dev to provide an elemental function that is commutative and associative so that scan will be deterministic – E.g. Sum is associative, but Avg is not. map Applies a provided elemental function to each element of the source array and stores the result in the corresponding position in the result array. The map method is shape preserving & index free - can not inspect neighboring values. // Adding one to each element. 
var source = new ParallelArray([1,2,3,4,5]); var plusOne = source.map(function inc(v) {     return v+1; }); //<2,3,4,5,6> combine Combine is similar to map, except an index is provided. This allows elemental functions to access elements from the source array relative to the one at the current index position. While the map method operates on the outermost dimension only, combine, can choose how deep to traverse - it provides a depth argument to specify the number of dimensions it iterates over. The elemental function of combine accesses the source array & the current index within it - element is computed by calling the get method of the source ParallelArray object with index i as argument. It requires more code but is more expressive. var source = new ParallelArray([1,2,3,4,5]); var plusOne = source.combine(function inc(i) { return this.get(i)+1; }); reduce reduces the elements from an array to a single scalar result – e.g. Sum. // Calculate the sum of the elements var source = new ParallelArray([1,2,3,4,5]); var sum = source.reduce(function plus(a,b) { return a+b; }); scan Like reduce, but stores the intermediate results – return a ParallelArray whose ith elements is the results of using the elemental function to reduce the elements between 0 and I in the original ParallelArray. // do a partial sum var source = new ParallelArray([1,2,3,4,5]); var psum = source.scan(function plus(a,b) { return a+b; }); //<1, 3, 6, 10, 15> scatter a reordering function - specify for a certain source index where it should be stored in the result array. An optional conflict function can prevent an exception if two source values are assigned the same position of the result: var source = new ParallelArray([1,2,3,4,5]); var reorder = source.scatter([4,0,3,1,2]); // <2, 4, 5, 3, 1> // if there is a conflict use the max. use 33 as a default value. var reorder = source.scatter([4,0,3,4,2], 33, function max(a, b) {return a>b?a:b; }); //<2, 33, 5, 3, 4> filter // filter out values that are not even var source = new ParallelArray([1,2,3,4,5]); var even = source.filter(function even(iv) { return (this.get(iv) % 2) == 0; }); // <2,4> Flatten used to collapse the outer dimensions of an array into a single dimension. pa = new ParallelArray([ [1,2], [3,4] ]); // <<1,2>,<3,4>> pa.flatten(); // <1,2,3,4> Partition used to restore the original shape of the array. var pa = new ParallelArray([1,2,3,4]); // <1,2,3,4> pa.partition(2); // <<1,2>,<3,4>> Get return value found at the indices or undefined if no such value exists. var pa = new ParallelArray([0,1,2,3,4], [10,11,12,13,14], [20,21,22,23,24]) pa.get([1,1]); // 11 pa.get([1]); // <10,11,12,13,14>
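Putting a few of the methods described above together, here is a short illustrative chain - not taken from the RiverTrail documentation, just the map, filter, and reduce calls exactly as defined earlier in this post (it assumes the RiverTrail extension and its ParallelArray type are available): square each element, keep the even results, then sum them.
var source = new ParallelArray([1, 2, 3, 4, 5, 6]);
// map: the elemental function receives each value
var squared = source.map(function (v) { return v * v; });                        // <1, 4, 9, 16, 25, 36>
// filter: the elemental function receives an index and reads via this.get(iv)
var evens = squared.filter(function (iv) { return (this.get(iv) % 2) == 0; });   // <4, 16, 36>
// reduce: the elemental function must be commutative and associative (sum is)
var total = evens.reduce(function (a, b) { return a + b; });                     // 56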

    Read the article

  • #OOW 2012 : IaaS, Private Cloud, Multitenant Database, and X3H2M2

    - by Eric Bezille
    The title of this post is a summary of the 4 announcements made by Larry Ellison today, during the opening session of Oracle Open World 2012... To know what's behind X3H2M2, you will have to wait a little, as I will go in order, beginning with the IaaS - Infrastructure as a Service - announcement. Oracle IaaS goes Public... and Private... Starting in 2004 with Fusion development, Oracle Cloud was launch last year to provide not only SaaS Application, based on standard development, but also the underlying PaaS, required to build the specifics, and required interconnections between applications, in and outside of the Cloud. Still, to cover the end-to-end Cloud  Services spectrum, we had to provide an Infrastructure as a Service, leveraging our Servers, Storage, OS, and Virtualization Technologies, all "Engineered Together". This Cloud Infrastructure, was already available for our customers to build rapidly their own Private Cloud either on SPARC/Solaris or x86/Linux... The second announcement made today bring that proposition a big step further : for cautious customers (like Banks, or sensible industries) who would like to benefits from the Cloud value of "as a Service", but don't want their Data out in the Cloud... We propose to them to operate the same systems, Exadata, Exalogic & SuperCluster, that are providing our Public Cloud Infrastructure, behind their firewall, in a Private Cloud model. Oracle 12c Multitenant Database This is also a major announcement made today, on what's coming with Oracle Database 12c : the ability to consolidate multiple databases with no extra additional  cost especially in terms of memory needed on the server node, which is often THE consolidation limiting factor. The principle could be compare to Solaris Zones, where, you will have a Database Container, who is "owning" the memory and Database background processes, and "Pluggable" Database in this Database Container. This particular feature is a strong compelling event to evaluate rapidly Oracle Database 12c once it will be available, as this is major step forward into true Database consolidation with Multitenancy on a shared (optimized) infrastructure. X3H2M2, enabling the new Exadata X3 in-Memory Database Here we are :  X3H2M2 stands for X3 (the new version of Exadata announced also today) Heuristic Hierarchical Mass Memory, providing the capability to keep most if not all the Data in the memory cache hierarchy. Of course, this is the major software enhancement of the new X3 Exadata machine, but as this is a software, our current customers would be able to benefit from it on their existing systems by upgrading to the new release. But that' not the only thing that we did with X3, at the same time we have upgraded everything : the CPUs, adding more cores per server node (16 vs. 12, with the arrival of Intel E5 / Sandy Bridge), the memory with 512GB memory as well per node,  and the new Flash Fire card, bringing now up to 22 TB of Flash cache. All of this 4TB of RAM + 22TB of Flash being use cleverly not only for read but also for write by the X3H2M2 algorithm... making a very big difference compare to traditional storage flash extension. But what does those extra performances brings to you on an already very efficient system: double your performances compare to the fastest storage array on the market today (including flash) and divide you storage price x10 at the same time... Something to consider closely this days... 
Especially since we also announced the availability of a new Exadata X3-2 8th rack: a good starting point. As you have seen, a major opening for this year again, with true innovation. But that was not the only thing we saw today: before Larry's talk, Fujitsu introduced in more depth the upcoming new SPARC processor that they are co-developing with us. As part of that, Andrew Mendelsohn - Senior Vice President, Database Server Technologies - came on stage to explain that the next step after I/O optimization for the Database with Exadata is to accelerate the Database at execution level by bringing functions into the SPARC processor silicon. All in all, to process more and more Data... the big theme of the day, and of the Oracle User Groups conferences that were also happening today, where I had the opportunity to attend some interesting sessions on practical use cases of Big Data: one on Finance and fraud profiling, and the other on practical deployment of Oracle Exalytics for Data Analytics. In conclusion, one picture to try to size up Oracle OpenWorld... and you can understand why, with such rich content... and this is only the first day!

    Read the article

  • Search function fails because it refers to the wrong controller action?

    - by Christoffer
    My Sunspot search function (sunspot_rails gem) works just fine in my index view, but when I duplicate it to my show view my search breaks... views/supplierproducts/show.html.erb <%= form_tag supplierproducts_path, :method => :get, :id => "supplierproducts_search" do %> <p> <%= text_field_tag :search, params[:search], placeholder: "Search by SKU, product name & EAN number..." %> </p> <div id="supplierproducts"><%= render 'supplierproducts' %></div> <% end %> assets/javascripts/application.js $(function () { $('#supplierproducts th a').live('click', function () { $.getScript(this.href); return false; } ); $('#supplierproducts_search input').keyup(function () { $.get($("#supplierproducts_search").attr("action"), $("#supplierproducts_search").serialize(), null, 'script'); return false; }); }); views/supplierproducts/show.js.erb $('#supplierproducts').html('<%= escape_javascript(render("supplierproducts")) %>'); views/supplierproducts/_supplierproducts.hmtl.erb <%= hidden_field_tag :direction, params[:direction] %> <%= hidden_field_tag :sort, params[:sort] %> <table class="table table-bordered"> <thead> <tr> <th><%= sortable "sku", "SKU" %></th> <th><%= sortable "name", "Product name" %></th> <th><%= sortable "stock", "Stock" %></th> <th><%= sortable "price", "Price" %></th> <th><%= sortable "ean", "EAN number" %></th> </tr> </thead> <% for supplierproduct in @supplier.supplierproducts %> <tbody> <tr> <td><%= supplierproduct.sku %></td> <td><%= supplierproduct.name %></td> <td><%= supplierproduct.stock %></td> <td><%= supplierproduct.price %></td> <td><%= supplierproduct.ean %></td> </tr> </tbody> <% end %> </table> controllers/supplierproducts_controller.rb class SupplierproductsController < ApplicationController helper_method :sort_column, :sort_direction def show @supplier = Supplier.find(params[:id]) @search = @supplier.supplierproducts.search do fulltext params[:search] end @supplierproducts = @search.results end end private def sort_column Supplierproduct.column_names.include?(params[:sort]) ? params[:sort] : "name" end def sort_direction %w[asc desc].include?(params[:direction]) ? params[:direction] : "asc" end models/supplierproduct.rb class Supplierproduct < ActiveRecord::Base attr_accessible :ean, :name, :price, :sku, :stock, :supplier_id belongs_to :supplier validates :supplier_id, presence: true searchable do text :ean, :name, :sku end end Visiting show.html.erb works just fine. Log shows: Started GET "/supplierproducts/2" for 127.0.0.1 at 2012-06-24 13:44:52 +0200 Processing by SupplierproductsController#show as HTML Parameters: {"id"=>"2"} Supplier Load (0.1ms) SELECT "suppliers".* FROM "suppliers" WHERE "suppliers"."id" = ? 
LIMIT 1 [["id", "2"]] SOLR Request (252.9ms) [ path=#<RSolr::Client:0x007fa5880b8e68> parameters={data: fq=type%3ASupplierproduct&start=0&rows=30&q=%2A%3A%2A, method: post, params: {:wt=>:ruby}, query: wt=ruby, headers: {"Content-Type"=>"application/x-www-form-urlencoded; charset=UTF-8"}, path: select, uri: http://localhost:8982/solr/select?wt=ruby, open_timeout: , read_timeout: } ] Supplierproduct Load (0.2ms) SELECT "supplierproducts".* FROM "supplierproducts" WHERE "supplierproducts"."id" IN (1) Supplierproduct Load (0.1ms) SELECT "supplierproducts".* FROM "supplierproducts" WHERE "supplierproducts"."supplier_id" = 2 Rendered supplierproducts/_supplierproducts.html.erb (2.2ms) Rendered supplierproducts/show.html.erb within layouts/application (3.3ms) Rendered layouts/_shim.html.erb (0.0ms) User Load (0.1ms) SELECT "users".* FROM "users" WHERE "users"."remember_token" = 'zMrtTbDun2MjMHRApSthCQ' LIMIT 1 Rendered layouts/_header.html.erb (2.1ms) Rendered layouts/_footer.html.erb (0.2ms) Completed 200 OK in 278ms (Views: 20.6ms | ActiveRecord: 0.6ms | Solr: 252.9ms) But it breaks when I type in a search. Log shows: Started GET "/supplierproducts?utf8=%E2%9C%93&search=a&direction=&sort=&_=1340538830635" for 127.0.0.1 at 2012-06-24 13:53:50 +0200 Processing by SupplierproductsController#index as JS Parameters: {"utf8"=>"?", "search"=>"a", "direction"=>"", "sort"=>"", "_"=>"1340538830635"} Rendered supplierproducts/_supplierproducts.html.erb (2.4ms) Rendered supplierproducts/index.js.erb (2.9ms) Completed 500 Internal Server Error in 6ms ActionView::Template::Error (undefined method `supplierproducts' for nil:NilClass): 10: <th><%= sortable "ean", "EAN number" %></th> 11: </tr> 12: </thead> 13: <% for supplierproduct in @supplier.supplierproducts %> 14: <tbody> 15: <tr> 16: <td><%= supplierproduct.sku %></td> app/views/supplierproducts/_supplierproducts.html.erb:13:in `_app_views_supplierproducts__supplierproducts_html_erb___2251600857885474606_70174444831200' app/views/supplierproducts/index.js.erb:1:in `_app_views_supplierproducts_index_js_erb___1613906916161905600_70174464073480' Rendered /Users/Computer/.rvm/gems/ruby-1.9.3-p194@myapp/gems/actionpack-3.2.3/lib/action_dispatch/middleware/templates/rescues/_trace.erb (33.3ms) Rendered /Users/Computer/.rvm/gems/ruby-1.9.3-p194@myapp/gems/actionpack-3.2.3/lib/action_dispatch/middleware/templates/rescues/_request_and_response.erb (0.9ms) Rendered /Users/Computer/.rvm/gems/ruby-1.9.3-p194@myapp/gems/actionpack-3.2.3/lib/action_dispatch/middleware/templates/rescues/template_error.erb within rescues/layout (39.7ms)

    Read the article

  • Deep Cloning C++ class that inherits CCNode in Cocos2dx

    - by A Devanney
    I stuck with something in Cocos2dx ... I'm trying to deep clone one of my classes that inherits CCNode. Basically i have.... GameItem* pTemp = new GameItem(*_actualItem); // loops through all the blocks in gameitem and updates their position pTemp->moveDown(); // if in boundary or collision etc... if (_gameBoard->isValidMove(pTemp)) { _actualItem = pTemp; // display the position CCLog("pos (1) --- (X : %d,Y : %d)", _actualItem->getGridX(),_actualItem->getGridY()); } Then doesn't work, because the gameitem inherits CCNode and has the collection of another class that also inherits CCNode. its just creating a shallow copy and when you look at children of the gameitem node in the copy, just point to the original? class GameItem : public CCNode { // maps to the actual grid position of the shape CCPoint* _rawPosition; // tracks the current grid position int _gridX, _gridY; // tracks the change if the item has moved CCPoint _offset; public: //constructors GameItem& operator=(const GameItem& item); GameItem(Shape shape); ... } then in the implementation.... GameItem& GameItem::operator=(const GameItem& item) { _gridX = item.getGridX(); _gridY = item.getGridY(); _offset = item.getOffSet(); _rawPosition = item.getRawPosition(); // how do i copy the node? return *this; } // shape contains an array of position for the game character GameItem::GameItem(Shape shape) { _rawPosition = shape.getShapePositions(); //loop through all blocks in position for (int i = 0; i < 7; i++) { // get the position of the first block in the shape and add to the position of the first block int x = (int) (getRawPosition()[i].x + getGridX()); int y = (int) (getRawPosition()[i].y + getGridY()); //instantiate a block with the position and type Block* block = Block::blockWithFile(x,y,(i+1), shape); // add the block to the this node this->addChild(block); } } And for clarity here is the block class class Block : public CCNode{ private: // using composition over inheritance CCSprite* _sprite; // tracks the current grid position int _gridX, _gridY; // used to store actual image number int _blockNo; public: Block(void); Block(int gridX, int gridY, int blockNo); Block& operator=(const Block& block); // static constructor for the creation of a block static Block* blockWithFile(int gridX, int gridY,int blockNo, Shape shape); ... } The blocks implementation..... Block& Block::operator=(const Block& block) { _sprite = new CCSprite(*block._sprite); _gridX = block._gridX; _gridY = block._gridY; _blockNo = block._blockNo; //again how to clone CCNode? return *this; } Block* Block::blockWithFile(int gridX, int gridY,int blockNo, Shape shape) { Block* block = new Block(); if (block && block->initBlockWithFile(gridX, gridY,blockNo, shape)) { block->autorelease(); return block; } CC_SAFE_DELETE(block); return NULL; } bool Block::initBlockWithFile(int gridX, int gridY,int blockNo, Shape shape) { setGridX(gridX); setGridY(gridY); setBlockNo(blockNo); const char* characterImg = helperFunctions::Format(shape.getFileName(),blockNo); // add to the spritesheet CCTexture2D* gameArtTexture = CCTextureCache::sharedTextureCache()->addImage("Character.pvr.ccz"); CCSpriteBatchNode::createWithTexture(gameArtTexture); // block settings _sprite = CCSprite::createWithSpriteFrameName(characterImg); // set the position of the block and add it to the layer this->setPosition(CONVERTGRIDTOACTUALPOS_X_Y(gridX,gridY)); this->addChild(_sprite); return true; } Any ideas are welcome at this point!! thanks

    Read the article

  • ComboBox Data Binding

    - by Geertjan
    Let's create a databound combobox, levering MVC in a desktop application. The result will be a combobox, provided by the NetBeans ChoiceView, that displays data retrieved from a database: What follows is not much different from the NetBeans Platform CRUD Application Tutorial and you're advised to consult that document if anything that follows isn't clear enough. One kind of interesting thing about the instructions that follow is that it shows that you're able to create an application where each element of the MVC architecture can be located within a separate module: Start by creating a new NetBeans Platform application named "MyApplication". Model We're going to start by generating JPA entity classes from a database connection. In the New Project wizard, choose "Java Class Library". Click Next. Name the Java Class Library "MyEntities". Click Finish. Right-click the MyEntities project, choose New, and then select "Entity Classes from Database". Work through the wizard, selecting the tables of interest from your database, and naming the package "entities". Click Finish. Now a JPA entity is created for each of the selected tables. In the Project Properties dialog of the project, choose "Copy Dependent Libraries" in the Packaging panel. Build the project. In your project's "dist" folder (visible in the Files window), you'll now see a JAR, together with a "lib" folder that contains the JARs you'll need. In your NetBeans Platform application, create a module named "MyModel", with code name base "org.my.model". Right-click the project, choose Properties, and in the "Libraries" panel, click Add Dependency button in the Wrapped JARs subtab to add all the JARs from the previous step to the module. Also include "derby-client.jar" or the equivalent driver for your database connection to the module. Controler In your NetBeans Platform application, create a module named "MyControler", with code name base "org.my.controler". Right-click the module's Libraries node, in the Projects window, and add a dependency on "Explorer & Property Sheet API". In the MyControler module, create a class with this content: package org.my.controler; import org.openide.explorer.ExplorerManager; public class MyUtils { static ExplorerManager controler; public static ExplorerManager getControler() { if (controler == null) { controler = new ExplorerManager(); } return controler; } } View In your NetBeans Platform application, create a module named "MyView", with code name base "org.my.view".  Create a new Window Component, in "explorer" view, for example, let it open on startup, with class name prefix "MyView". Add dependencies on the Nodes API and on the Explorer & Property Sheet API. Also add dependencies on the "MyModel" module and the "MyControler" module. Before doing so, in the "MyModel" module, make the "entities" package and the "javax.persistence" packages public (in the Libraries panel of the Project Properties dialog) and make the one package that you have in the "MyControler" package public too. 
Define the top part of the MyViewTopComponent as follows: public final class MyViewTopComponent extends TopComponent implements ExplorerManager.Provider { ExplorerManager controler = MyUtils.getControler(); public MyViewTopComponent() { initComponents(); setName(Bundle.CTL_MyViewTopComponent()); setToolTipText(Bundle.HINT_MyViewTopComponent()); setLayout(new BoxLayout(this, BoxLayout.PAGE_AXIS)); controler.setRootContext(new AbstractNode(Children.create(new ChildFactory<Customer>() { @Override protected boolean createKeys(List list) { EntityManager entityManager = Persistence. createEntityManagerFactory("MyEntitiesPU").createEntityManager(); Query query = entityManager.createNamedQuery("Customer.findAll"); list.addAll(query.getResultList()); return true; } @Override protected Node createNodeForKey(Customer key) { Node customerNode = new AbstractNode(Children.LEAF, Lookups.singleton(key)); customerNode.setDisplayName(key.getName()); return customerNode; } }, true))); controler.addPropertyChangeListener(new PropertyChangeListener() { @Override public void propertyChange(PropertyChangeEvent evt) { Customer selectedCustomer = controler.getSelectedNodes()[0].getLookup().lookup(Customer.class); StatusDisplayer.getDefault().setStatusText(selectedCustomer.getName()); } }); JPanel row1 = new JPanel(new FlowLayout(FlowLayout.LEADING)); row1.add(new JLabel("Customers: ")); row1.add(new ChoiceView()); add(row1); } @Override public ExplorerManager getExplorerManager() { return controler; } ... ... ... Now run the application and you'll see the same as the image with which this blog entry started.

    Read the article

< Previous Page | 184 185 186 187 188 189 190 191 192 193 194 195  | Next Page >