Search Results

Search found 24675 results on 987 pages for 'table'.


  • Try the Oracle Database Appliance Manager Configurator - For Fun!

    - by pwstephe-Oracle
    If you would like to get a first-hand glimpse of how easy it is to configure an ODA, even if you don't have access to one, you can download the Appliance Manager Configurator from the Oracle Technology Network and run it standalone on your PC or Linux/Unix workstation. The configurator is packaged in a zip file that contains the complete Java environment to run standalone. Once the package is downloaded and unzipped, it's simply a matter of launching it with the config command or shell script for your runtime environment.

    Oracle Appliance Manager Configurator is a Java-based tool that enables you to input your deployment plan and validate your network settings before an actual deployment, or you can just preview and experiment with it. Simply download and run the configurator on a local client system, which can be a Windows, Linux, or UNIX system. (For Windows, launch the batch file config.bat; for Linux/Unix environments, run ./config.sh.) You will be presented with the very same dialogs and options used to configure a production ODA, but on your workstation.

    At the end of a configurator session, you may save your deployment plan in a configuration file. If you were actually ready to deploy, you could copy this configuration file to a real ODA, where the online Oracle Appliance Manager Configurator would use its contents to deploy your plan in production. You may also print the file's contents and use the printout as a checklist for setting up your production external network configuration. Be sure to use the actual production network addresses you intend to use, as this will only work correctly if your client system is connected to the same network that will be used for the ODA. (This step is not necessary if you are just previewing the configurator.)

    This is a great way to get an introductory look at the simple and intuitive Database Appliance configuration interface and the steps to configure a system.

    Read the article

  • CFOs: Do You Have a Playbook for Growth?

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein, Oracle Midsize Programs

    In most global markets, CFOs are optimistic about their company's growth opportunities. Deloitte's CFO Signals report, "Time to Accelerate," found that:

    - In the U.K., business optimism is at its highest level in three and a half years.
    - Optimism in North America rose from a strong +42% last quarter (Q2 to Q3 2013) to an even stronger +54%.
    - In the inaugural Southeast Asia survey, 44% of CFOs reported a positive outlook despite worries over the Chinese economy and political uncertainty.

    Sustainable and profitable business growth doesn't usually happen by accident. Companies need a playbook for growth that's owned by the CFO. And today, that playbook must leverage the six enabling technologies: Social, Big Data, Mobile, Cloud, Analytics, and the Internet of Things (or, as Oracle President Mark Hurd explains, "the Internet of the People"). On Monday, June 9 at 2:00 pm Eastern, CFO.com is hosting a webcast, "The CFO Playbook on Growth: How CFOs Can Boost Efficiency and Performance with Automation".

    "Investing in technology begins with a business-metric-driven business case with clear, tangible business results expected," says John Lieblang, Affiliate Partner with Waterstone Management Group. "The progressive CFO has learned how to forge a partnership with the CIO to align everyone in the 'result value chain' to be accountable for the business results, not just for functional technology."

    Click HERE to register.

    Looking for more news and information about Oracle Solutions for Midsize Companies?

    - Read the latest Oracle for Midsize Companies Newsletter
    - Sign up to receive the latest communications from Oracle's industry leaders and experts

    Jim Lein: I evangelize Oracle's enterprise solutions for growing midsize companies. I recently celebrated 15 years with Oracle, having joined JD Edwards in 1999. I'm based in Evergreen, Colorado, and love relating stories about creativity and innovation, whether they be about software, live music, or the mountains. The views expressed here are my own, and not necessarily those of Oracle.

    Read the article

  • ADO and Two Way Storage Tiering

    - by Andy-Oracle
    We get asked the following question about Automatic Data Optimization (ADO) storage tiering quite a bit: can you tier back to the original location if the data gets hot again? The answer is yes, but not with standard Automatic Data Optimization policies, at least not reliably.

    That's not how ADO is meant to operate. ADO is meant to mirror a traditional view of Information Lifecycle Management (ILM) where data will be very volatile when first created, will become less active or cool, and then will eventually cease to be accessed at all (i.e. cold). I think the reason this question gets asked is that customers realize that many of their business processes are cyclical, and the thinking goes that those segments that only get used during month-end or year-end cycles could sit on lower-cost storage when not being used. Unfortunately this doesn't fit very well with the ADO storage tiering model.

    ADO storage tiering is based on the amount of free and used space in the source tablespace. There are two parameters that control this behavior, TBS_PERCENT_USED and TBS_PERCENT_FREE. When the space used in the tablespace exceeds the TBS_PERCENT_USED value, segments specified in storage tiering clause(s) can be moved until the percentage of free space reaches the TBS_PERCENT_FREE value. It is worth mentioning that no checks are made for available space in the target tablespace.

    Now, it is certainly possible to create custom functions to control storage tiering, but this can get complicated. The biggest problem is ensuring that there is enough space to move the segment back to tier 1 storage, assuming that that's the goal. This isn't as much of a problem when moving from tier 1 to tier 2 storage, because there is typically more tier 2 storage available; at least that's the premise, since it is supposed to be less costly, lower-performing and higher-capacity storage. In either case, though, if there isn't enough space then the operation fails. In the case of a customized function, the question becomes: do you attempt to free the space so the move can be made, or do you just stop and return false so that the move cannot take place?

    This is really the crux of the issue. Once you cross into this territory you're really going to have to implement two-way hierarchical storage, and the whole point of ADO was to provide automatic storage tiering. You're probably better off using Heat Map and/or business access requirements and building your own hierarchical storage management infrastructure if you really want two-way storage tiering.
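    To make the two thresholds above concrete, here is a minimal sketch (my own illustration, not from the original post), assuming Oracle Database 12c and hypothetical table and tablespace names. The thresholds are set through DBMS_ILM_ADMIN, and the one-way tiering policy itself is attached with ALTER TABLE ... ILM ADD POLICY:

        -- Illustrative only: table and tablespace names are made up.
        -- Start moving segments once the source tablespace is 85% used,
        -- and stop once 25% of it is free again.
        BEGIN
          DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_USED, 85);
          DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_FREE, 25);
        END;
        /

        -- One-way tiering policy: eligible SALES_2013 segments are moved to the
        -- lower-cost tablespace when the threshold is crossed; nothing moves them back.
        ALTER TABLE sales_2013 ILM ADD POLICY TIER TO low_cost_tbs;

    Note how the policy only names a destination; this is exactly why moving data back to tier 1 when it warms up again requires custom logic rather than a standard ADO policy.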

    Read the article

  • MySQL Server 5.6 defaults changes

    - by user12626240
    We're improving the MySQL Server defaults, as announced by Tomas Ulin at MySQL Connect. Here's what we're changing (old default, then new default, with notes):

    - back_log: 50; now 50 + (max_connections / 5), capped at 900
    - binlog_checksum: off; now CRC32 (new variable in 5.6)
    - binlog_row_event_max_size: 1k; now 8k
    - flush_time: 1800; on Windows the default changes from 1800 to 0 (was already 0 on other platforms)
    - host_cache_size: 128; now 128 + 1 for each of the first 500 max_connections, + 1 for every 20 max_connections over 500, capped at 2000 (new variable in 5.6)
    - innodb_autoextend_increment: 8; now 64 (now affects *.ibd files; 64 means 64 megabytes)
    - innodb_buffer_pool_instances: 0; now 8 (on 32-bit Windows only, if innodb_buffer_pool_size is greater than 1300M, the default is innodb_buffer_pool_size / 128M)
    - innodb_concurrency_tickets: 500; now 5000
    - innodb_file_per_table: off; now on
    - innodb_log_file_size: 5M; now 48M (InnoDB will always change the size to match the my.cnf value; also see innodb_log_compressed_pages and binlog_row_image)
    - innodb_old_blocks_time: 0; now 1000 (1 second)
    - innodb_open_files: 300; now 300, or if innodb_file_per_table is ON, the higher of table_open_cache and 300
    - innodb_purge_batch_size: 20; now 300
    - innodb_purge_threads: 0; now 1
    - innodb_stats_on_metadata: on; now off
    - join_buffer_size: 128k; now 256k
    - max_allowed_packet: 1M; now 4M
    - max_connect_errors: 10; now 100
    - open_files_limit: 0; now 5000 (see note 1)
    - query_cache_size: 0; now 1M
    - query_cache_type: on/1; now off/0
    - sort_buffer_size: 2M; now 256k
    - sql_mode: none; now NO_ENGINE_SUBSTITUTION (see a later post about the default my.cnf for STRICT_TRANS_TABLES)
    - sync_master_info: 0; now 10000 (recommend master_info_repository=table)
    - sync_relay_log: 0; now 10000
    - sync_relay_log_info: 0; now 10000 (recommend relay_log_info_repository=table; also see Replication Relay and Status Logs)
    - table_definition_cache: 400; now 400 + table_open_cache / 2, capped at 2000
    - table_open_cache: 400; now 2000 (also see table_open_cache_instances)
    - thread_cache_size: 0; now 8 + max_connections/100, capped at 100

    Note 1: in 5.5 there was already a rule to make open_files_limit 10 + max_connections + table_cache_size * 2 if that was higher than the user-specified value. It now uses the higher of that and (5000 or what you specify).

    We are also adding a new default my.cnf file and guided instructions on the key settings to adjust. More on this in a later post. We're also providing a page with suggestions for settings to improve backwards compatibility. The old example files like my-huge.cnf are obsolete.

    Some of the improvements are present from 5.6.6 and the rest are coming. These are ideas, and until they are in an official GA release they are subject to change. As part of this work I reviewed every old server setting, plus many hundreds of emails of feedback and testing results from inside and outside Oracle's MySQL Support team, and the many excellent blog entries and comments from others over the years, including from many MySQL gurus out there, like Baron, Sheeri, Ronald, Schlomi, Giuseppe and Mark Callaghan.

    With these changes we're trying to make it easier to set up the server by adjusting only a few settings that will cause others to be set. This happens only at server startup and only applies to variables where you haven't set a value. You'll see a similar approach used for the Performance Schema. The gurus don't need this, but for many newcomers the defaults will be very useful. Possibly the most unusual change is the way we vary the setting for innodb_buffer_pool_instances for 32-bit Windows.
    This is because we've found that DLLs with specified load addresses often fragment the limited four-gigabyte 32-bit address space and make it impossible to allocate more than about 1300 megabytes of contiguous address space for the InnoDB buffer pool. The smaller requests for many pools are more likely to succeed.

    If you change the value of innodb_log_file_size in my.cnf, you will see a message like this in the error log file at the next restart, instead of the old error message:

    [Warning] InnoDB: Resizing redo log from 2*64 to 5*128 pages, LSN=5735153

    One of the biggest challenges for the defaults is the millions of installations on a huge range of systems, from point-of-sale terminals and routers, through shared hosting or end-user systems, and on to major servers with lots of CPU cores, hundreds of gigabytes of RAM and terabytes of fast disk space. Our past defaults were aimed at the smaller systems; these changes shift them toward larger shared hosting or shared end-user systems, still with a bias towards the smaller end. There is a bias in favour of OLTP workloads, so reporting systems may need more changes. Where there is a conflict between the best settings for benchmarks and normal use, we've favoured production, not benchmarks. We're very interested in your feedback, comments and suggestions.
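    As a quick way to see which values are in effect on a server you are running, the sketch below (my own illustration, not part of the original post) checks a handful of the changed settings on a live 5.6 instance; the variable list is just a sample:

        -- Inspect a sample of the changed defaults on a running MySQL 5.6 server.
        SHOW GLOBAL VARIABLES
        WHERE Variable_name IN
          ('innodb_file_per_table', 'innodb_log_file_size', 'innodb_buffer_pool_instances',
           'query_cache_type', 'sql_mode', 'table_open_cache', 'sync_relay_log');

    Remember that the new defaults only apply to variables you have not set yourself, so anything already present in your my.cnf will show your own value here.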

    Read the article

  • PBCS Hyperion Planning in the Cloud PartnerLab 2-Day Training

    - by Mike.Hallett(at)Oracle-BI&EPM
    Objective of the PartnerLab: to help partners engage the interest and commitment of their clients for Oracle Planning and Budgeting Cloud Service (PBCS) projects. This is your unique opportunity to learn how to expand your business with the PBCS application.

    This 2-day PartnerLab workshop will enable your team to understand the fundamental concepts of the PBCS application and the implications of Oracle Public Cloud deployment, and to effectively present and demonstrate PBCS to prospective clients. Participants must already be competent with the on-premise Hyperion Planning application: this training builds on existing expertise to cover SaaS cloud-specific deployment implications and how best to demonstrate these to clients and win services-led PBCS implementation engagements.

    Register here now and see the full agenda for 07-08 July 2014 in Oracle Paris – Colombes, 15 bd Charles de Gaulle, 92715 Colombes Cedex, France.
    Register here now and see the full agenda for 15-16 July 2014 in Oracle Italy, via Fulvio Testi 136, Cinisello Balsamo, Milan, Italy.

    This training is free of charge to OPN member partners. This PartnerLab is a 2-day in-class workshop event led by Oracle pre-sales subject matter experts. The 2 days consist of discussions, presentations, demonstrations and hands-on exercises. Note: the hands-on exercises are in an already installed environment that you can have access to after the event (see more @ Hyperion Demonstration Systems for Partners). The PartnerLab will be delivered in English or the local language.

    Mandatory prerequisites for a participant: please view the material available and complete the assessments before you attend the PartnerLab event. Material and assessments cover foundational information about Oracle Hyperion Planning and Oracle Planning and Budgeting Cloud Service.

    View this material prior to the live PartnerLab:
    - Oracle Hyperion Planning 11 Sales Specialist guided learning path
    - Oracle Hyperion Planning 11 PreSales Specialist guided learning path
    - Oracle Hyperion Planning 11 Implementation Specialist guided learning path
    - Oracle Planning and Budgeting Cloud Service Specialist guided learning path
    - PBCS How-to Videos
    - Learn more at Oracle Planning and Budgeting Cloud Service

    Take and pass these online assessments prior to the live PartnerLab training:
    - Oracle Hyperion Planning 11 Sales Specialist online exam
    - Oracle Hyperion Planning 11 PreSales Specialist online exam

    Read the article

  • Can anybody help me in redesigning my UITableView into the MVC pattern?

    - by user2877880
    I have written a ViewController in which i get data from the internet and display it in a UItableview using a json parser which uses object for key to identify its objects. What i would like your help in is to convert it into MVC pattern to make it less clumsy instead of including everything in the same controller class. Please try explaining it to me in terms of my code. THANKS IN ADVANCE. The code is as given below #import "ViewController.h" #import "AFNetworking.h" #import "ModelTableArray.h" @implementation ViewController @synthesize tableView = _tableView, activityIndicatorView = _activityIndicatorView, movies = _movies; - (void)viewDidLoad { [super viewDidLoad]; // Setting Up Table View self.tableView = [[UITableView alloc] initWithFrame:CGRectMake(0.0, 0.0, self.view.bounds.size.width, self.view.bounds.size.height) style:UITableViewStylePlain]; self.tableView.dataSource = self; self.tableView.delegate = self; self.tableView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight; self.tableView.hidden = YES; [self.view addSubview:self.tableView]; // Setting Up Activity Indicator View self.activityIndicatorView = [[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleGray]; self.activityIndicatorView.hidesWhenStopped = YES; self.activityIndicatorView.center = self.view.center; [self.view addSubview:self.activityIndicatorView]; [self.activityIndicatorView startAnimating]; // Initializing Data Source self.movies = [[NSArray alloc] init]; NSURL *url = [[NSURL alloc] initWithString:@"http://itunes.apple.com/search?term=rocky&country=us&entity=movie"]; NSURLRequest *request = [[NSURLRequest alloc] initWithURL:url]; UIRefreshControl *refreshControl = [[UIRefreshControl alloc] init]; [refreshControl addTarget:self action:@selector(refresh:) forControlEvents:UIControlEventValueChanged]; [self.tableView addSubview:refreshControl]; [refreshControl endRefreshing]; AFJSONRequestOperation *operation = [AFJSONRequestOperation JSONRequestOperationWithRequest:request success:^(NSURLRequest *request, NSHTTPURLResponse *response, id JSON) { self.movies = [JSON objectForKey:@"results"]; [self.activityIndicatorView stopAnimating]; [self.tableView setHidden:NO]; [self.tableView reloadData]; } failure:^(NSURLRequest *request, NSHTTPURLResponse *response, NSError *error, id JSON) { NSLog(@"Request Failed with Error: %@, %@", error, error.userInfo); }]; [operation start]; } - (void)refresh:(UIRefreshControl *)sender { NSURL *url = [[NSURL alloc] initWithString:@"http://itunes.apple.com/search?term=rambo&country=us&entity=movie"]; NSURLRequest *request = [[NSURLRequest alloc] initWithURL:url]; AFJSONRequestOperation *operation = [AFJSONRequestOperation JSONRequestOperationWithRequest:request success:^(NSURLRequest *request, NSHTTPURLResponse *response, id JSON) { self.movies = [JSON objectForKey:@"results"]; [self.activityIndicatorView stopAnimating]; [self.tableView setHidden:NO]; [self.tableView reloadData]; } failure:^(NSURLRequest *request, NSHTTPURLResponse *response, NSError *error, id JSON) { NSLog(@"Request Failed with Error: %@, %@", error, error.userInfo); }]; [operation start]; [sender endRefreshing]; } - (void)viewDidUnload { [super viewDidUnload]; } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return YES; } // Table View Data Source Methods - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { if (self.movies && self.movies.count) { return 
self.movies.count; } else { return 0; } } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *cellID = @"Cell Identifier"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellID]; if (!cell) { cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:cellID]; } NSDictionary *movie = [self.movies objectAtIndex:indexPath.row]; cell.textLabel.text = [movie objectForKey:@"trackName"]; cell.detailTextLabel.text = [movie objectForKey:@"artistName"]; NSURL *url = [[NSURL alloc] initWithString:[movie objectForKey:@"artworkUrl100"]]; [cell.imageView setImageWithURL:url placeholderImage:[UIImage imageNamed:@"placeholder"]]; return cell; } @end

    Read the article

  • Oracle Excellence Award

    - by Hartmut Wiese
    CALL FOR NOMINATIONS

    2014 Oracle Excellence Award: Sustainability Innovation

    Is your organization using an Oracle product to help with a sustainability initiative while reducing costs? Saving energy? Saving gas? Saving paper? For example, you may use Oracle's Agile Product Lifecycle Management to design more eco-friendly products, Oracle Transportation Management to reduce fleet emissions, Oracle Exadata Database Machine to decrease power and cooling needs while increasing database performance, Oracle Business Intelligence to measure environmental impacts, or one of many other Oracle products. If so, your organization may be eligible for the 2014 Oracle Excellence Award: Sustainability Innovation.

    Submit the nomination form located here by Friday, June 20 if your company is using any Oracle product to take an environmental lead as well as to reduce costs and improve business efficiencies through green business practices. These awards will be presented during Oracle OpenWorld 2014 (September 28-October 2) in San Francisco.

    About the award:
    • Winners will be selected from the customer and/or partner nominations. Either a customer, their partner, or an Oracle representative can submit the nomination form on behalf of the customer.
    • There is a nomination form here to discuss your use of Oracle products and how they have helped your sustainability efforts and reduced costs.
    • Winners will be selected based on the extent of the environmental impact they have had, as well as the business efficiencies they have achieved, through their combined use of Oracle products.

    Nomination eligibility:
    • Your company uses at least one component of Oracle products, whether it's the Oracle database, business applications, Fusion Middleware, or Sun servers/storage.
    • This solution should be in production or in active development.
    • Nomination deadline: Friday, June 20, 2014.

    Benefits to award winners:
    • Award presented to winners during Oracle OpenWorld by Jeff Henley, Oracle Chairman of the Board
    • Free Oracle OpenWorld registration pass for each winning customer
    • 2014 Oracle Excellence Award: Sustainability Innovation award logo for inclusion on your own website and/or press release
    • Possible placement in Oracle Profit Magazine and/or Oracle Magazine
    • 'Enable the Eco-Enterprise' podcast opportunity

    See last year's winners here.

    Questions? Send an email to: [email protected]

    Follow Oracle's Sustainability Solutions on Twitter, LinkedIn, YouTube, and the Sustainability Matters blog.

    Web page with award details: http://www.oracle.com/us/products/applications/green/call-for-nominations-185050.html

    Read the article

  • On a BPM Mission with Process Accelerators. Part 1: BPM as an ATV

    - by Cesare Rotundo
    Part 1: BPM as an ATV

    It's always exciting to talk to customers that are in the middle of a BPM transformational journey. Their thirst for new processes to improve with BPM makes them explorers in a landscape of opportunities. They have discovered that with BPM they can "go places" they couldn't reach before.

    In a way, learning how to generate value with BPM is like adopting a new means of transportation. Apps are like regular cars: very efficient, but meant to be used on paved roads. The road/process has been traced, and there are fixed paths to follow to get from "opportunity to quote" or from "quote to cash". Getting off the road is risky, and laying down new asphalt is slow and expensive. Custom development is like running: you can go virtually anywhere, following any path you like, yet it's slow and a lot of sweat. BPM allows you to go "off the beaten path" laid out by packaged apps, yet make fast progress compared to custom development. BPM is therefore more like an all-terrain vehicle (ATV): less efficient than a car, but much faster than running, with an engine powerful enough to get you places.

    The similarities between BPM and ATVs don't stop here: you must learn to ride it even if you already know how to drive a car, and while you can reach new places, figuring out the path to your destination is harder. Ultimately, with BPM as with an ATV, you reach places that you thought you could never reach, and you discover new destinations that provide great benefit to you... and that you didn't even know existed! That's where the sense of accomplishment that we heard from our BPM customers comes from, as well as the desire to share their experience, or even, as in the case of a County, the willingness to contribute their BPM solutions to help other agencies that face the same challenges.

    The question we wanted to answer is how we can teach organizations to drive the ATV/BPM, thus leading them to deeper success with BPM, while increasing their awareness of the potential for reaching new targets, and finally equip them with the right tools. As with ATVs, getting from point A to point B is more of a work of art than cruising on the highway by car. There is a lot we can do: after all, many sought-after destinations are common, and someone else has been on the same path before. If only you could learn from their experience...

    Read the article

  • How can I get the data from the JSON one by one using JavaScript/jQuery? [on hold]

    - by sandhus
    I have the working code which fetches all the records from the json, but how can I make it available one by one on the click of the button or link? The following code is working to fetch all the records: <!doctype html> <html> <head> <meta charset="utf-8"> <title>jQuery PHP Json Response</title> <style type="text/css"> div { text-align:center; padding:10px; } #msg { width: 500px; margin: 0px auto; } .members { width: 500px ; background-color: beige; } </style> </head> <body> <div id="msg"> <table id="userdata" border="1"> <thead> <th>Email</th> <th>Sex</th> <th>Location</th> <th>Picture</th> <th>audio</th> <th>video</th> </thead> <tbody></tbody> </table> </div> <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"> </script> <script type="text/javascript"> $(document).ready(function(){ var url="json.php"; $("#userdata tbody").html(""); $.getJSON(url,function(data){ $.each(data.members, function(i,user){ var tblRow = "<tr>" +"<td>"+user.email+"</td>" +"<td>"+user.sex+"</td>" +"<td>"+user.location+"</td>" +"<td>"+"<img src="+user.image+">"+"</td>" +"<td>"+"<audio src="+user.video+" controls>"+"</td>" +"<td>"+"<video src="+user.video+" controls>"+"</td>" +"</tr>" ; $(tblRow).appendTo("#userdata tbody"); }); }); }); </script> </body> </html> I used the json_encode function in the php file to encode the sql db. How can i achieve this?

    Read the article

  • Upgrading to 9.2 - Info You Can Use (part 1)

    - by John Webb
    Rebekah Jackson joins our blog with a series of helpful hints on planning your upgrade to PeopleSoft 9.2.

    Find Features & Capabilities

    There are many ways that you might learn about new features and capabilities within our releases, but if you aren't sure where to start or how best to go about it, we recommend:

    - Go to www.peoplesoftinfo.com.
    - Select the product line you are interested in, and go to the 'Release Content' tab.
    - Use the Video Feature Overviews (VFOs) on YouTube and the Cumulative Feature Overview (CFO) tool to find features and functions.

    The VFOs are brief recordings that summarize some of our most popular capabilities. These recordings are great tools for learning about new features, or helping others to visualize the value they can bring to your organization. The VFOs focus on some of our highest-value and most compelling new capabilities. We also provide summarized 'Why Upgrade to 9.2' VFOs for HCM, Financials, and Supply Chain.

    The CFO is a spreadsheet-based tool that allows you to select the release you are currently on and compare it to the new release. It will return the list of all new features and capabilities, by product. You can browse the full list and/or highlight areas that look particularly interesting.

    Once you have a list of features by product, use the Release Value Proposition, Pre-Release Notes, and Release Notes documents to get more details on, and supporting value statements about, why those features will be helpful.

    Gather additional data and supporting information, including:

    - The Product Data Sheets tab: review the respective data sheets. These summarize the capabilities in the product and provide succinct value statements for the product and its capabilities.
    - The PeopleSoft 9.2 Upgrade page, which has many helpful resources.

    Important notes:

    - We recommend that you go through the above steps for the application areas of interest, as well as for PeopleTools. There are many areas in PeopleTools 8.53 and the 9.2 application releases that combine technical and functional capabilities to deliver transformative value.
    - We also recommend that you review the Portal Solutions content. With your license to PeopleSoft applications, you have access to many of the most powerful capabilities within the Interaction Hub.
    - If you have recently upgraded to PeopleSoft 9.1 and an immediate upgrade to 9.2 is simply not realistic, you can apply the same approaches described here to find untapped capabilities in your current products. Many of the features in 9.2 were delivered first in our 9.1 Feature Packs. To find the Release Value Proposition, Pre-Release Notes, and Release Notes for these releases, search on 'PeopleSoft 9.1 Documentation Home Page' on My Oracle Support and select your desired product area.

    Read the article

  • EPPM Is a Must-Have Capability as Global Energy and Power Industries Eye US$38 Trillion in New Investments

    - by Melissa Centurio Lopes
    "The process manufacturing industry is facing an unprecedented challenge: from now until 2035, cumulative worldwide investments of US$38 trillion will be required for drilling, power generation, and other energy projects," Iain Graham, director of energy and process manufacturing for Oracle's Primavera, said in a recent webcast. He adds that process manufacturing organizations such as oil and gas, utilities, and chemicals must manage this level of investment in an environment of constrained capital markets, erratic supply and demand, aging infrastructure, heightened regulations, and declining global skills. In the following interview, Graham explains how the right enterprise project portfolio management (EPPM) technology can help the industry meet these imperatives.

    Q: Why is EPPM so important for today's process manufacturers?

    A: If the industry invests US$38 trillion without proper cost controls in place, a huge amount of resources will be put at risk, especially when it comes to cost overruns that may occur in large capital projects. Process manufacturing companies must not only control costs, but also monitor all the various contractors that will be involved in each project. If you're not managing your own workers and all the interdependencies among the different contractors, then you've got problems.

    Q: What else should process manufacturers look for?

    A: It's also important that an EPPM solution has the ability to manage more than just capital projects. For example, it's best to manage maintenance and capital projects in the same system. Say you're due to install a new transformer in a power station as part of a capital project, but routine maintenance in that area of the facility is scheduled for that morning. The lack of coordination could lead to unforeseen delays. There are also IT considerations that impact capital projects, such as adding servers and network cable for a control system in a power station. What organizations need is a true EPPM system that's not just for capital projects, maintenance, or IT activities, but instead an enterprisewide solution that provides visibility into all types of projects.

    Read the complete Q&A here and discover the practical framework for successfully managing this massive capital spending.

    Read the article

  • How can I make this script output each category's items per category [closed]

    - by Duice352
    Ok so here is the deal currently this script outputs all the products in a parent category as well as the products in the child categories. What i would like to do is seperate the output based on child categories. All the child categories are in the array $children and the string $childs. The parent category is the first array element of $children with the following ones being the actual children. The category names are stored in the database $result as " $cat_name ". I want to first Display the cat_name then the products that fall in that category and then display the next child cat_name and items, ect. Any suggestions of how to manipulate the while loop that cylcles through the rows? <?php $productsPerRow = 3; $productsPerPage = 15; //$productList = getProductList($catId); $children = array_merge(array($catId), getChildCategories(NULL, $catId)); $childs = ' (' . implode(', ', $children) . ')'; $sql = "SELECT pd_id, pd_name, pd_price, pd_thumbnail, pd_qty, c.cat_id, c.cat_name FROM tbl_product pd, tbl_category c WHERE pd.cat_id = c.cat_id AND pd.cat_id IN $childs ORDER BY pd_name"; $result = dbQuery(getPagingQuery($sql, $productsPerPage)); $pagingLink = getPagingLink($sql, $productsPerPage, "c=$catId"); $numProduct = dbNumRows($result); // the product images are arranged in a table. to make sure // each image gets equal space set the cell width here $columnWidth = (int)(100 / $productsPerRow); ?> <p><?php if(isset($_GET['m'])){echo "You must select a model first! After you select your model you can customize your dragster parts.";} ?> </p> <p align="center"><?php echo $pagingLink; ?></p> <table width="100%" border="0" cellspacing="0" cellpadding="20"> <?php if ($numProduct > 0 ) { $i = 0; while ($row = dbFetchAssoc($result)) { extract($row); if ($pd_thumbnail) { $pd_thumbnail = WEB_ROOT . 'images/product/' .$pd_thumbnail; } else { $pd_thumbnail = 'images/no-image-small.png'; } if ($i % $productsPerRow == 0) { echo '<tr>'; } // format how we display the price $pd_price = displayAmount($pd_price); echo "<td width=\"$columnWidth%\" align=\"center\"><a href=\"" . $_SERVER['PHP_SELF'] . "?c=$catId&p=$pd_id" . "\"><img src=\"$pd_thumbnail\" border=\"0\"><br>$pd_name</a><br>Price : $pd_price <br> $cat_id - $cat_name"; // if the product is no longer in stock, tell the customer if ($pd_qty <= 0) { echo "<br>Out Of Stock"; } echo "</td>\r\n"; if ($i % $productsPerRow == $productsPerRow - 1) { echo '</tr>'; } $i += 1; } if ($i % $productsPerRow > 0) { echo '<td colspan="' . ($productsPerRow - ($i % $productsPerRow)) . '">&nbsp;</td>'; }
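    One hedged way to approach this (a sketch, not the original poster's code): sort the result set by category first, then track the previous cat_name inside the while loop and emit a category heading whenever it changes. The only SQL change needed for that is the ORDER BY clause, for example:

        -- Assumes the same tables and columns as the question; only ORDER BY changes,
        -- so rows for each child category arrive together and the loop can detect
        -- the boundary between categories. ($childs is the PHP-built IN list above.)
        SELECT pd_id, pd_name, pd_price, pd_thumbnail, pd_qty, c.cat_id, c.cat_name
        FROM tbl_product pd, tbl_category c
        WHERE pd.cat_id = c.cat_id
          AND pd.cat_id IN ($childs)
        ORDER BY c.cat_name, pd_name;

    Inside the loop you would then keep a variable of your own (for example a hypothetical $previousCat), and whenever $cat_name differs from it, close the current row, echo a full-width heading cell with the category name, and reset the column counter before continuing with the products.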

    Read the article

  • How can I change the color of the text in my iFrame? [closed]

    - by VinylScratch
    I have code here: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <title>Frag United Banlist</title> </head> <body> <h1>Tekkit Banlist</h1> <?php // change these things $server = "server-host"; $dbuser = "correct-user"; $dbpass = "correct-password"; $dbname = "correct-database"; mysql_connect($server, $dbuser, $dbpass); mysql_select_db($dbname); $result = mysql_query("SELECT * FROM banlist ORDER BY id DESC"); //This will display the most recent by id edit this query how you see fit. Limit, Order, ect. echo "<table width=100% border=1 cellpadding=3 cellspacing=0>"; echo "<tr style=\"font-weight:bold\"> <td>ID</td> <td>User</td> <td>Reason</td> <td>Admin/Mod</td> <td>Time</td> <td>Ban Length</td> </tr>"; while($row = mysql_fetch_assoc($result)){ if($col == "#eeeeee"){ $col = "#ffffff"; }else{ $col = "#eeeeee"; } echo "<tr bgcolor=$col>"; echo "<td>".$row['id']."</td>"; echo "<td>".$row['user']."</td>"; echo "<td>".$row['reason']."</td>"; echo "<td>".$row['admin']."</td>"; //Convert Epoch Time to Standard format $datetime = date("F j, Y, g:i a", $row['time']); echo "<td>$datetime</td>"; $dateconvert = date("F j, Y, g:i a", $row['length']); if($row['length'] == "0"){ echo "<td>None</td>"; }else{ echo "<td>$dateconvert</td>"; } echo "<td>".$row['id']."</td>"; echo "</tr>"; } echo"</table>" ?> </div> </body></html> And I am trying to make it so that when I put it in this iframe: <iframe src="http://bans.fragunited.net/" width="100%" length="100%"><p>Your browser does not support iframes.</p></iframe> But if you go to this page, fragunited.net/bans, (not bans.fragunited.net) the text is black and I want it to be white so you can actually see it. Sorry for the large amount of code, however I don't know where you have to put the code to change the color.

    Read the article

  • Oracle Enterprise Computing Summit??!??????/EM????????

    - by Oracle Japan Marketing
    (Japanese-language newsletter; most of the text is garbled in this copy and cannot be recovered.) The recoverable content promotes the Oracle Enterprise Computing Summit, the Oracle EPM & BI Summit, Oracle Direct services, and customer stories (including LIXIL and Oracle E-Business Suite implementations), and closes with a July 2011 seminar schedule:

    - 7/6, 10:30-18:00
    - 7/7, 14:00-19:00: Java SE 7 technical seminar
    - 7/13, 13:30-16:45
    - 7/15, 13:00-18:00
    - 7/20, 9:30-17:50
    - 7/20, 13:30-17:00
    - 7/25, 14:00-17:00: MySQL seminar
    - 7/26, 13:30-17:45: Oracle Enterprise Computing Summit

    Copyright © 2011, Oracle. All Rights Reserved.

    Read the article

  • ????My Oracle Support????

    - by Steve He(???)
    (Chinese-language post; most of the text is garbled in this copy and cannot be recovered.) The recoverable content introduces the My Oracle Support Community: it describes how the community complements service requests (SRs) as a way to get help with Oracle products and to exchange knowledge with Oracle engineers and other customers, and it explains two ways to reach it: via the "Community" tab inside My Oracle Support, or directly over HTTPS at https://communities.oracle.com/.

    Read the article

  • Using jQuery to Insert a New Database Record

    - by Stephen Walther
    The goal of this blog entry is to explore the easiest way of inserting a new record into a database using jQuery and .NET. I’m going to explore two approaches: using Generic Handlers and using a WCF service (In a future blog entry I’ll take a look at OData and WCF Data Services). Create the ASP.NET Project I’ll start by creating a new empty ASP.NET application with Visual Studio 2010. Select the menu option File, New Project and select the ASP.NET Empty Web Application project template. Setup the Database and Data Model I’ll use my standard MoviesDB.mdf movies database. This database contains one table named Movies that looks like this: I’ll use the ADO.NET Entity Framework to represent my database data: Select the menu option Project, Add New Item and select the ADO.NET Entity Data Model project item. Name the data model MoviesDB.edmx and click the Add button. In the Choose Model Contents step, select Generate from database and click the Next button. In the Choose Your Data Connection step, leave all of the defaults and click the Next button. In the Choose Your Data Objects step, select the Movies table and click the Finish button. Unfortunately, Visual Studio 2010 cannot spell movie correctly :) You need to click on Movy and change the name of the class to Movie. In the Properties window, change the Entity Set Name to Movies. Using a Generic Handler In this section, we’ll use jQuery with an ASP.NET generic handler to insert a new record into the database. A generic handler is similar to an ASP.NET page, but it does not have any of the overhead. It consists of one method named ProcessRequest(). Select the menu option Project, Add New Item and select the Generic Handler project item. Name your new generic handler InsertMovie.ashx and click the Add button. Modify your handler so it looks like Listing 1: Listing 1 – InsertMovie.ashx using System.Web; namespace WebApplication1 { /// <summary> /// Inserts a new movie into the database /// </summary> public class InsertMovie : IHttpHandler { private MoviesDBEntities _dataContext = new MoviesDBEntities(); public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/plain"; // Extract form fields var title = context.Request["title"]; var director = context.Request["director"]; // Create movie to insert var movieToInsert = new Movie { Title = title, Director = director }; // Save new movie to DB _dataContext.AddToMovies(movieToInsert); _dataContext.SaveChanges(); // Return success context.Response.Write("success"); } public bool IsReusable { get { return true; } } } } In Listing 1, the ProcessRequest() method is used to retrieve a title and director from form parameters. Next, a new Movie is created with the form values. Finally, the new movie is saved to the database and the string “success” is returned. Using jQuery with the Generic Handler We can call the InsertMovie.ashx generic handler from jQuery by using the standard jQuery post() method. 
The following HTML page illustrates how you can retrieve form field values and post the values to the generic handler: Listing 2 – Default.htm <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Add Movie</title> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> </head> <body> <form> <label>Title:</label> <input name="title" /> <br /> <label>Director:</label> <input name="director" /> </form> <button id="btnAdd">Add Movie</button> <script type="text/javascript"> $("#btnAdd").click(function () { $.post("InsertMovie.ashx", $("form").serialize(), insertCallback); }); function insertCallback(result) { if (result == "success") { alert("Movie added!"); } else { alert("Could not add movie!"); } } </script> </body> </html>     When you open the page in Listing 2 in a web browser, you get a simple HTML form: Notice that the page in Listing 2 includes the jQuery library. The jQuery library is included with the following SCRIPT tag: <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> The jQuery library is included on the Microsoft Ajax CDN so you can always easily include the jQuery library in your applications. You can learn more about the CDN at this website: http://www.asp.net/ajaxLibrary/cdn.ashx When you click the Add Movie button, the jQuery post() method is called to post the form data to the InsertMovie.ashx generic handler. Notice that the form values are serialized into a URL encoded string by calling the jQuery serialize() method. The serialize() method uses the name attribute of form fields and not the id attribute. Notes on this Approach This is a very low-level approach to interacting with .NET through jQuery – but it is simple and it works! And, you don’t need to use any JavaScript libraries in addition to the jQuery library to use this approach. The signature for the jQuery post() callback method looks like this: callback(data, textStatus, XmlHttpRequest) The second parameter, textStatus, returns the HTTP status code from the server. I tried returning different status codes from the generic handler with an eye towards implementing server validation by returning a status code such as 400 Bad Request when validation fails (see http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html ). I finally figured out that the callback is not invoked when the textStatus has any value other than “success”. Using a WCF Service As an alternative to posting to a generic handler, you can create a WCF service. You create a new WCF service by selecting the menu option Project, Add New Item and selecting the Ajax-enabled WCF Service project item. Name your WCF service InsertMovie.svc and click the Add button. 
Modify the WCF service so that it looks like Listing 3: Listing 3 – InsertMovie.svc using System.ServiceModel; using System.ServiceModel.Activation; namespace WebApplication1 { [ServiceBehavior(IncludeExceptionDetailInFaults=true)] [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class MovieService { private MoviesDBEntities _dataContext = new MoviesDBEntities(); [OperationContract] public bool Insert(string title, string director) { // Create movie to insert var movieToInsert = new Movie { Title = title, Director = director }; // Save new movie to DB _dataContext.AddToMovies(movieToInsert); _dataContext.SaveChanges(); // Return movie (with primary key) return true; } } }   The WCF service in Listing 3 uses the Entity Framework to insert a record into the Movies database table. The service always returns the value true. Notice that the service in Listing 3 includes the following attribute: [ServiceBehavior(IncludeExceptionDetailInFaults=true)] You need to include this attribute if you want to get detailed error information back to the client. When you are building an application, you should always include this attribute. When you are ready to release your application, you should remove this attribute for security reasons. Using jQuery with the WCF Service Calling a WCF service from jQuery requires a little more work than calling a generic handler from jQuery. Here are some good blog posts on some of the issues with using jQuery with WCF: http://encosia.com/2008/06/05/3-mistakes-to-avoid-when-using-jquery-with-aspnet-ajax/ http://encosia.com/2008/03/27/using-jquery-to-consume-aspnet-json-web-services/ http://weblogs.asp.net/scottgu/archive/2007/04/04/json-hijacking-and-how-asp-net-ajax-1-0-mitigates-these-attacks.aspx http://www.west-wind.com/Weblog/posts/896411.aspx http://www.west-wind.com/weblog/posts/324917.aspx http://professionalaspnet.com/archive/tags/WCF/default.aspx The primary requirement when calling WCF from jQuery is that the request use JSON: The request must include a content-type:application/json header. Any parameters included with the request must be JSON encoded. Unfortunately, jQuery does not include a method for serializing JSON (Although, oddly, jQuery does include a parseJSON() method for deserializing JSON). Therefore, we need to use an additional library to handle the JSON serialization. The page in Listing 4 illustrates how you can call a WCF service from jQuery. 
Listing 4 – Default2.aspx <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Add Movie</title> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/json2.js" type="text/javascript"></script> </head> <body> <form> <label>Title:</label> <input id="title" /> <br /> <label>Director:</label> <input id="director" /> </form> <button id="btnAdd">Add Movie</button> <script type="text/javascript"> $("#btnAdd").click(function () { // Convert the form into an object var data = { title: $("#title").val(), director: $("#director").val() }; // JSONify the data data = JSON.stringify(data); // Post it $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", url: "MovieService.svc/Insert", data: data, dataType: "json", success: insertCallback }); }); function insertCallback(result) { // unwrap result result = result["d"]; if (result === true) { alert("Movie added!"); } else { alert("Could not add movie!"); } } </script> </body> </html> There are several things to notice about Listing 4. First, notice that the page includes both the jQuery library and Douglas Crockford’s JSON2 library: <script src="Scripts/json2.js" type="text/javascript"></script> You need to include the JSON2 library to serialize the form values into JSON. You can download the JSON2 library from the following location: http://www.json.org/js.html When you click the button to submit the form, the form data is converted into a JavaScript object: // Convert the form into an object var data = { title: $("#title").val(), director: $("#director").val() }; Next, the data is serialized into JSON using the JSON2 library: // JSONify the data var data = JSON.stringify(data); Finally, the form data is posted to the WCF service by calling the jQuery ajax() method: // Post it $.ajax({   type: "POST",   contentType: "application/json; charset=utf-8",   url: "MovieService.svc/Insert",   data: data,   dataType: "json",   success: insertCallback }); You can’t use the standard jQuery post() method because you must set the content-type of the request to be application/json. Otherwise, the WCF service will reject the request for security reasons. For details, see the Scott Guthrie blog post: http://weblogs.asp.net/scottgu/archive/2007/04/04/json-hijacking-and-how-asp-net-ajax-1-0-mitigates-these-attacks.aspx The insertCallback() method is called when the WCF service returns a response. This method looks like this: function insertCallback(result) {   // unwrap result   result = result["d"];   if (result === true) {       alert("Movie added!");   } else {     alert("Could not add movie!");   } } When we called the jQuery ajax() method, we set the dataType to JSON. That causes the jQuery ajax() method to deserialize the response from the WCF service from JSON into a JavaScript object automatically. The following value is passed to the insertCallback method: {"d":true} For security reasons, a WCF service always returns a response with a “d” wrapper. 
The following line of code removes the “d” wrapper: // unwrap result result = result["d"]; To learn more about the “d” wrapper, I recommend that you read the following blog posts: http://encosia.com/2009/02/10/a-breaking-change-between-versions-of-aspnet-ajax/ http://encosia.com/2009/06/29/never-worry-about-asp-net-ajaxs-d-again/ Summary In this blog entry, I explored two methods of inserting a database record using jQuery and .NET. First, we created a generic handler and called the handler from jQuery. This is a very low-level approach. However, it is a simple approach that works. Next, we looked at how you can call a WCF service using jQuery. This approach required a little more work because you need to serialize objects into JSON. We used the JSON2 library to perform the serialization. In the next blog post, I want to explore how you can use jQuery with OData and WCF Data Services.
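As a quick point of comparison with the WCF approach, here is a minimal sketch of what the generic handler version mentioned in the summary might look like. Treat it as an illustration only: the handler name is made up, and it simply reuses the MoviesDBEntities model and Movie entity from Listing 3.

// InsertMovie.ashx : hypothetical generic handler equivalent of the WCF service above.
using System.Web;

namespace WebApplication1
{
    public class InsertMovie : IHttpHandler
    {
        private MoviesDBEntities _dataContext = new MoviesDBEntities();

        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/plain";

            // Read the form fields posted from jQuery (standard form encoding, no JSON needed).
            string title = context.Request["title"];
            string director = context.Request["director"];

            // Create the movie and save it with the Entity Framework.
            var movieToInsert = new Movie { Title = title, Director = director };
            _dataContext.AddToMovies(movieToInsert);
            _dataContext.SaveChanges();

            context.Response.Write("success");
        }

        public bool IsReusable
        {
            get { return false; }
        }
    }
}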

    Read the article

  • The Execute SQL Task

In this article we are going to take you through the Execute SQL Task in SQL Server Integration Services for SQL Server 2005 (although it applies just as well to SQL Server 2008). We will be covering all the essentials that you will need to know to effectively use this task and make it as flexible as possible. The things we will be looking at are as follows: a tour of the task, and the properties of the task. After looking at these introductory topics we will then get into some examples. The examples will show different types of usage for the task: returning a single value from a SQL query with two input parameters; returning a rowset from a SQL query; executing a stored procedure and retrieving a rowset, a return value and an output parameter value while passing in an input parameter; passing in the SQL statement from a variable; and passing in the SQL statement from a file. Tour Of The Task Before we can start to use the Execute SQL Task in our packages we are going to need to locate it in the toolbox. Let's do that now. Whilst in the Control Flow section of the package, expand your toolbox and locate the Execute SQL Task. Below is how we found ours. Now drag the task onto the designer. As you can see from the following image, a validation error appears telling us that no connection manager has been assigned to the task. This can be easily remedied by creating a connection manager. Only certain types of connection manager are compatible with this task, so we cannot just create any connection manager; these are detailed a few graphics further on. Double click on the task itself to take a look at the custom user interface provided to us for this task. The task will open on the General tab as shown below. Take a bit of time to have a look around here, as throughout this article we will be revisiting this page many times. Whilst on the General tab, drop down the combobox next to the ConnectionType property. In here you will see the types of connection manager which this task will accept. As with SQL Server 2000 DTS, SSIS allows you to output values from this task in a number of formats. Have a look at the combobox next to the Resultset property. The major difference here is the ability to output into XML. If you drop down the combobox next to the SQLSourceType property you will see the ways in which you can pass a SQL statement into the task itself. We will have examples of each of these later on, but certainly when we saw these for the first time we were very excited. If you click in the empty box next to the SQLStatement property you will see ellipses appear. Click on them and you will see the very basic query editor that becomes available to you. Alternatively, after you have specified a connection manager for the task, you can click on the Build Query button to bring up a completely different query editor. This is slightly inconsistent. Once you've finished looking around the General tab, move on to the next tab, which is the Parameter Mapping tab. We shall, again, be visiting this tab throughout the article, but to give you an initial heads up, this is where you define the input, output and return values for your task. Note this is not where you specify the resultset. If, however, you now move on to the Result Set tab, this is where you define which variable will receive the output from your SQL statement, in whatever form that is. 
Property Expressions are one of the most amazing things to happen in SSIS and they will not be covered here as they deserve a whole article to themselves. Watch out for them, as their usefulness will astound you. For a more detailed discussion of what the parameter markers in the SQL statements on the General tab should be, and how to map them to variables on the Parameter Mapping tab, see Working with Parameters and Return Codes in the Execute SQL Task. Task Properties There are two places where you can specify the properties for your task. One is in the task UI itself, and the other is in the property pane which will appear if you right click on your task and select Properties from the context menu. We will be doing plenty of property setting in the UI later, so let's take a moment to have a look at the property pane. Below is a graphic showing our properties pane. Now we shall take you through all the properties and tell you exactly what they mean. A lot of these properties you will see across all tasks, as well as the package, because of everything's shared base structure: the Container.
BypassPrepare: Should the statement be prepared before sending to the connection manager destination? (True/False)
Connection: This is simply the name of the connection manager that the task will use. We can get this from the connection manager tray at the bottom of the package.
DelayValidation: A really interesting property; it tells the task not to validate until it actually executes. A usage for this may be that you are operating on a table yet to be created, but at runtime you know the table will be there.
Description: Very simply, the description of your task.
Disable: Should the task be enabled or not? You can also set this through a context menu by right clicking on the task itself.
DisableEventHandlers: As a result of events that happen in the task, should the event handlers for the container fire?
ExecValueVariable: The variable assigned here will get or set the execution value of the task.
Expressions: Expressions, as we mentioned earlier, are a really powerful tool in SSIS and the graphic below shows a small peek of what you can do. We select a property on the left and assign an expression to the value of that property on the right, causing the value to be dynamically changed at runtime. One of the most obvious uses of this is that the property value can be built dynamically from within the package, allowing you a great deal of flexibility.
FailPackageOnFailure: If this task fails, does the package?
FailParentOnFailure: If this task fails, does the parent container? A task can be hosted inside another container, e.g. the For Each Loop Container, and this would then be the parent.
ForcedExecutionValue: This property allows you to hard code an execution value for the task.
ForcedExecutionValueType: What is the datatype of the ForcedExecutionValue?
ForceExecutionResult: Force the task to return a certain execution result. This could then be used by the workflow constraints. Possible values are None, Success, Failure and Completion.
ForceExecutionValue: Should we force the execution result?
IsolationLevel: This is the transaction isolation level of the task.
IsStoredProcedure: Certain optimisations are made by the task if it knows that the query is a stored procedure invocation. The docs say this will always be false unless the connection is an ADO connection.
LocaleID: Gets or sets the LocaleID of the container.
LoggingMode: Should we log for this container, and what settings should we use? 
The value choices are UseParentSetting, Enabled and Disabled.
MaximumErrorCount: How many times can the task fail before we call it a day?
Name: Very simply, the name of the task.
ResultSetType: How do you want the results of your query returned? The choices are ResultSetType_None, ResultSetType_SingleRow, ResultSetType_Rowset and ResultSetType_XML.
SqlStatementSource: Your query/SQL statement.
SqlStatementSourceType: The method of specifying the query. Your choices here are DirectInput, FileConnection and Variables.
TimeOut: How long should the task wait to receive results?
TransactionOption: How should the task handle being asked to join a transaction?
Usage Examples As we move through the examples we will only cover what we think you must know and what we think you should see. This means that some of the more elementary steps, like setting up variables, will be covered in the early examples but skipped and simply referred to in later ones. All these examples use the AdventureWorks database that comes with SQL Server 2005. Returning a Single Value, Passing in Two Input Parameters The first thing we are going to do is add some variables to our package. The graphic below shows those variables having been defined. Here the CountOfEmployees variable will be used as the output from the query, and EndDate and StartDate will be used as input parameters. As you can see, all these variables have been scoped to the package. Scoping allows us to have domains for variables. Each container has a scope, and remember a package is a container as well. Variable values of the parent container can be seen in child containers but cannot be passed back up to the parent from a child. Our following graphic has had a number of changes made. The first of those changes is that we have created an OLE DB connection manager named ExecuteSQL Task Connection and assigned it to this task. The next thing is we have made sure that the SQLSourceType property is set to Direct Input, as we will be writing in our statement ourselves. We have also specified that only a single row will be returned from this query. The expression we typed in was: SELECT COUNT(*) AS CountOfEmployees FROM HumanResources.Employee WHERE (HireDate BETWEEN ? AND ?) Moving on now to the Parameter Mapping tab, this is where we are going to tell the task about our input parameters. We add them to the window, specifying their direction and datatype. A quick word here about the structure of the variable name. As you can see, SSIS has preceded the variable with the word User. This is a default namespace for variables, but you can create your own. When defining your variables, if you look at the variables window title bar you will see some icons. If you hover over the last one on the right you will see it says "Choose Variable Columns". If you click the button you will see a list of checkbox options, and one of them is Namespace. After checking this you will see where you can define your own namespace. The next tab, Result Set, is where we need to get back the value(s) returned from our statement and assign them to a variable, which in our case is CountOfEmployees, so we can use it later perhaps. Because we are only returning a single value then, if you remember from earlier, we are allowed to assign a name to the resultset, but it must be the name of the column (or alias) from the query. A really cool feature of Business Intelligence Development Studio being hosted by Visual Studio is that we get breakpoint support for free. 
In our package we set a breakpoint so we can break the package and have a look in a watch window at the variable values as they appear to our task, and at what the variable value of our resultset is after the task has done the assignment. Here's that window now. As you can see, the count of employees that matched the date range was 2. Returning a Rowset In this example we are going to return a resultset back to a variable after the task has executed, not just a single-row, single value. There are no input parameters required, so the variables window is nice and straightforward: one variable of type Object. Here is the statement that will form the source for our resultset:
SELECT p.ProductNumber, p.Name, pc.Name AS ProductCategoryName
FROM Production.ProductCategory pc
JOIN Production.ProductSubCategory psc ON pc.ProductCategoryID = psc.ProductCategoryID
JOIN Production.Product p ON psc.ProductSubCategoryID = p.ProductSubCategoryID
We need to make sure that we have selected Full result set as the ResultSet, as shown below on the task's General tab. Because there are no input parameters we can skip the Parameter Mapping tab and move straight to the Result Set tab. Here we need to add our variable defined earlier and map it to the result name of 0 (remember we covered this earlier). Once we run the task we can again set a breakpoint and have a look at the values coming back from the task. In the following graphic you can see the result set returned to us as a COM object. We can do some pretty interesting things with this COM object, and in later articles that is exactly what we shall be doing. Return Values, Input/Output Parameters and Returning a Rowset from a Stored Procedure This example is pretty much going to give us a taste of everything. We have already covered in the previous example how to specify the ResultSet to be a Full result set, so we will not cover it again here. For this example we are going to need 4 variables: one for the return value, one for the input parameter, one for the output parameter and one for the result set. Here is the statement we want to execute. Note how much cleaner it is than if you wanted to do it using the current version of DTS. In the Parameter Mapping tab we are going to add our variables and specify their direction and datatypes. In the Result Set tab we can now map our final variable to the rowset returned from the stored procedure. It really is as simple as that, and we were amazed at how much easier it is than in DTS 2000. Passing in the SQL Statement from a Variable SSIS, as we have mentioned, is hugely more flexible than its predecessor, and one of the things you will notice when moving around the tasks and the adapters is that a lot of them accept a variable as an input for something they need. The Execute SQL Task is no different. It will allow us to pass in a string variable as the SQL statement. This variable value could have been set earlier on from inside the package, or it could have been populated from outside using a configuration. The ResultSet property is set to single row, and we'll show you why in a second when we look at the variables. Note also the SQLSourceType property. Here's the General tab again. Looking at the variables we have in this package you can see we have only two: one for the return value from the statement and one which is obviously for the statement itself. Again we need to map the Result Name to our variable, and this can be a named Result Name (the column name or alias returned by the query) and not 0. 
The expected result in our variable should be the number of rows in the Person.Contact table, and if we look in the watch window we see that it is. Passing in the SQL Statement from a File The final example we are going to show is a really interesting one. We are going to pass the SQL statement to the task by using a file connection manager. The file itself contains the statement to run. The first thing we are going to need to do is create our file connection manager to point to our file. Click in the connections tray at the bottom of the designer, right click and choose "New File Connection". As you can see in the graphic below, we have chosen to use an existing file and have passed in the name as well. Have a look around at the other "Usage Type" values available whilst you are here. Having set that up, we can now see in the connection manager tray our file connection manager sitting alongside the OLE DB connection we have been using for the rest of these examples. Now we can go back to the familiar General tab to set up how the task will accept our file connection as the source. All the other properties in this task are set up exactly as we have been doing for the other examples, depending on the options chosen, so we will not cover them again here. We hope you will agree that the Execute SQL Task has changed considerably in this release from its DTS predecessor. It has a lot of options available, but once you have configured it a few times you get to learn what needs to go where. We hope you have found this article useful.
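As a small taste of what you can do with that Full result set object, here is a minimal C# Script Task sketch that loads the returned rows into a DataTable and reports how many came back. This is purely illustrative and not part of the original walkthrough: it assumes the SQL Server 2008 version of the Script Task (which supports C#) and a package variable named User::ProductList that was populated by the Execute SQL Task.

// Add these namespaces at the top of the Script Task project and place Main()
// inside the generated ScriptMain class. Remember to list User::ProductList
// in the task's ReadOnlyVariables so it can be read here.
using System.Data;
using System.Data.OleDb;
using Microsoft.SqlServer.Dts.Runtime;

public void Main()
{
    // The Full result set is exposed as an ADO recordset (COM object),
    // which an OleDbDataAdapter can copy into a DataTable.
    DataTable products = new DataTable();
    OleDbDataAdapter adapter = new OleDbDataAdapter();
    adapter.Fill(products, Dts.Variables["User::ProductList"].Value);

    // Do something simple with the rows, e.g. log how many were returned.
    bool fireAgain = true;
    Dts.Events.FireInformation(0, "Script Task",
        "Rows returned: " + products.Rows.Count, string.Empty, 0, ref fireAgain);

    Dts.TaskResult = (int)DTSExecResult.Success;
}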

    Read the article

  • CodePlex Daily Summary for Friday, December 10, 2010

    CodePlex Daily Summary for Friday, December 10, 2010Popular ReleasesFree Silverlight & WPF Chart Control - Visifire: Visifire Silverlight, WPF Charts v3.6.5 Released: Hi, Today we are releasing final version of Visifire, v3.6.5 with the following new feature: * New property AutoFitToPlotArea has been introduced in DataSeries. AutoFitToPlotArea will bring bubbles inside the PlotArea in order to avoid clipping of bubbles in bubble chart. You can visit Visifire documentation to know more. http://www.visifire.com/visifirechartsdocumentation.php Also this release includes few bug fixes: * Chart threw exception while adding new Axis in Chart using Vi...PHPExcel: PHPExcel 1.7.5 Production: DonationsDonate via PayPal via PayPal. If you want to, we can also add your name / company on our Donation Acknowledgements page. PEAR channelWe now also have a full PEAR channel! Here's how to use it: New installation: pear channel-discover pear.pearplex.net pear install pearplex/PHPExcel Or if you've already installed PHPExcel before: pear upgrade pearplex/PHPExcel The official page can be found at http://pearplex.net. Want to contribute?Please refer the Contribute page.UserVoice Helper for WebMatrix: UserVoice Helper v0.9: This version will work with ASP.NET WebPages and ASP.NET MVC ApplicationsDNN Simple Article: DNNSimpleArticle Module V00.00.03: The initial release of the DNNSimpleArticle module (labelled V00.00.03) There are C# and VB versions of this module for this initial release. No promises that going forward there will be packages for both languages provided for future releases. This module provides the following functionality Create and display articles Display a paged list of articles Articles get created as DNN ContentItems Categorization provided through DNN Taxonomy SEO functionality for article display providi...UOB & ME: UOB_ME 2.5: latest versionCouchDB.NET: CouchDB.NET 0.1: CouchDB.NET ------- Libraries and providers to use CouchDB features from .NET This distribution includes the following projects: - MachineKeyGenerator: Command line tool to generate a machine key string for use in App.Config and Web.Config files. - CouchDB.NET: Library to facilitate the use of CouchDB features. It uses Hadi Hariri's EasyHttp library to communicate with the CouchDB server. More info at: https://github.com/hhariri/EasyHttp - CouchDb.ASP.NET: ASP.NET Membership Provider and ASP...AutoLoL: AutoLoL v1.4.3: AutoLoL now supports importing the build pages from Mobafire.com as well! Just insert the url to the build and voila. (For example: http://www.mobafire.com/league-of-legends/build/unforgivens-guide-how-to-build-a-successful-mordekaiser-24061) Stable release of AutoChat (It is still recommended to use with caution and to read the documentation) It is now possible to associate *.lolm files with AutoLoL to quickly open them The selected spells are now displayed in the masteries tab for qu...SubtitleTools: SubtitleTools 1.2: - Added auto insertion of RLE (RIGHT-TO-LEFT EMBEDDING) Unicode character for the RTL languages. - Fixed delete rows issue.PHP Manager for IIS: PHP Manager 1.1 for IIS 7: This is a final stable release of PHP Manager 1.1 for IIS 7. This is a minor incremental release that contains all the functionality available in 53121 plus additional features listed below: Improved detection logic for existing PHP installations. Now PHP Manager detects the location to php.ini file in accordance to the PHP specifications Configuring date.timezone. 
PHP Manager can automatically set the date.timezone directive which is required to be set starting from PHP 5.3 Ability to ...Algorithmia: Algorithmia 1.1: Algorithmia v1.1, released on December 8th, 2010.SuperSocket, an extensible socket application framework: SuperSocket 1.0 SP1: Fixed bugs: fixed a potential bug that the running state hadn't been updated after socket server stopped fixed a synchronization issue when clearing timeout session fixed a bug in ArraySegmentList fixed a bug on getting configuration valueCslaGenFork: CslaGenFork 4.0 CTP 2: The version is 4.0.1 CTP2 and was released 2010 December 7 and includes the following files: CslaGenFork 4.0.1-2010-12-07 Setup.msi Templates-2010-10-07.zip For getting started instructions, refer to How to section. Overview of the changes Since CTP1 there were 53 work items closed (28 features, 24 issues and 1 task). During this 60 days a lot of work has been done on several areas. First the stereotypes: EditableRoot is OK EditableChild is OK EditableRootCollection is OK Editable...My Web Pages Starter Kit: 1.3.1 Production Release (Security HOTFIX): Due to a critical security issue, it's strongly advised to update the My Web Pages Starter Kit to this version. Possible attackers could misuse the image upload to transmit any type of file to the website. If you already have a running version of My Web Pages Starter Kit 1.3.0, you can just replace the ftb.imagegallery.aspx file in the root directory with the one attached to this release.EnhSim: EnhSim 2.2.0 ALPHA: 2.2.0 ALPHAThis release adds in the changes for 4.03a. at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Updated En...ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.4: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager new stuff: popup WhiteSpaceFilterAttribute tested on mozilla, safari, chrome, opera, ie 9b/8/7/6nopCommerce. ASP.NET open source shopping cart: nopCommerce 1.90: To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).myCollections: Version 1.2: New in version 1.2: Big performance improvement. New Design (Added Outlook style View, New detail view, New Groub By...) Added Sort by Media Added Manage Movie Studio Zoom preference is now saved. Media name are now editable. 
Added Portuguese version You can now Hide details panel Add support for FLAC tags You can now imports books from BibTex Xml file BugFixingmytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.49.0 beta: mytrip.mvc 1.0.49.0 beta web Web for install hosting System Requirements: NET 4.0, MSSQL 2008 or MySql (auto creation table to database) if .\SQLEXPRESS auto creation database (App_Data folder) mytrip.mvc 1.0.49.0 beta src System Requirements: Visual Studio 2010 or Web Deweloper 2010 MSSQL 2008 or MySql (auto creation table to database) if .\SQLEXPRESS auto creation database (App_Data folder) Connector/Net 6.3.4, MVC3 RC WARNING For run and debug mytrip.mvc 1.0.49.0 beta src download and ...Menu and Context Menu for Silverlight 4.0: Silverlight Menu and Context Menu v2.3 Beta: - Added keyboard navigation support with access keys - Shortcuts like Ctrl-Alt-A are now supported(where the browser permits it) - The PopupMenuSeparator is now completely based on the PopupMenuItem class - Moved item manipulation code to a partial class in PopupMenuItemsControl.cs - Moved menu management and keyboard navigation code to the new PopupMenuManager class - Simplified the layout by removing the RootGrid element(all content is now placed in OverlayCanvas and is accessed by the new ...MiniTwitter: 1.62: MiniTwitter 1.62 ???? ?? ??????????????????????????????????????? 140 ?????????????????????????? ???????????????????????????????? ?? ??????????????????????????????????New ProjectsAccountingGuid: for testing onlyChinese Nag Screen: This is a simple but effective program for learning to recognize Mandarin characters. The application sits in the system tray and displays a character random through your day. You can only get rid of it by typing in the pinyin.CouchDB.NET: .NET libraries to use CouchDB from .NET. Included are Membership and Roles provider so that you may use CouchDB as your integrated DB backend on your ASP.NET projects. Please see the readme.txt file for instructions.DataSetMapper: The idea behind DataSetMapper is to provide support for the automatic mapping of legacy DataSet based structures to proper domain objects. In essence the aim is to create the Mapping aspect of an ORM without the persistence concerns.EasyXnaAudio: EasyXnaAudio is a simple component for use in XNA Game Studio 3.1/4.0 projects that provides an easy interface to load, play, and manage songs and sounds in your game.FixMailboxSD - Exchange Mailbox Security Descriptor Canonicalizer: This is a small utility to fix mailbox security descriptors in Microsoft Exchange that have become non-canonical. It must be run on a machine with Exchange System Manager for Exchange 2003 installed, but it will work against mailboxes on 2003 or 2007 (not 2010).GearSynth Plugin: a plugin for graphsynth that makes gear trainsGroceryList: TBD with first versionIBMS Suite Build on the Associate Platform: A new way of approaching Information Systems. From the UI, users of the IS will be able to build and manipulate the IS to whatever way fits their needs. We have simplified development, removed the chasm between management and IT and give the power of simplification to the user!Ivy Nasha Framework: A PHP FrameworkjQuery helpers for ASP.NET and ASP.NET MVC: jQuery helpers makes it easier for ASP.NET developers to build jQuery scripts. It's developed in C#. JSTest.NET: JSTest.NET enabled JavaScript unit tests to be run directly in the test framework of your choice (MSTest, NUnit, xUnit, etc) and all without the need for a web browser. 
JSTest.NET utilizes the Windows Script Host (CScript) to run fast, fully debuggable JavaScript unit tests!Multicore Task Framework: MTF is a visual tool to simplify building robust component based .NET applications. MTF is designed to make full use of the power of multi-core processors.Nazha Script On DLR: NazhaPascalESE - a Delphi/Pascal class library for Microsoft ESENT database API: This pascal class library, primarily written for Delphi's Object Pascal, provides a lightweight and easy-to-use wrapper around the ESENT API. Perpetuum Hangar: A Character planner for the online game "Perpetuum"Projeto Exemplo: Projeto exemplo para a atividade 3 da disciplina.PSiteCode: PSiteCode Manager rScript Engine: rScript scripting engine is a managed script engine wrote in C# that supports Visual Basic and C# syntax based scripts. It provides Type's for dynamically getting and setting properties, invoking methods and run-time compilation of scripts.SharePoint 2010 User Profile WebPart: This webpart shows all user profile properties and values of the properties for a particular user profile. The results are shown in a table containing the display and technical name together with the user value.SHC: shriSHMTools: SHMTools is set of compatible software tools (mostly Matlab based) for structural health monitoring (SHM) research. This includes algorithms for system design, modeling, data acquisition, feature extraction, classification, and prognosis.SwapWin: SwapWin is a tiny and handy tool which swaps windows on different screens. Developed in C# and .NET 3.5.Teachers Diary: Teachers diary is application realizing electronic teacher's notepad with student marks. Current localization of the application is in czech language only.VkApp: Vk app for downloadingWebSpirit: A lightweighted web server implemented by C# which supports sufficient extendible feature. By zjuWPF & MEF Studio: WPF & MEF Studio

    Read the article

  • From AutoComplete textbox to database search and display?

    - by svebee
    Hello everyone, I have a small problem so I would be grateful if anyone could help me in any way. Thank you ;) I have this little "application", and I want when someone type in a AutoComplete textbox for example "New" it automatically displays "New York" as a option and that (AutoComplete function) works fine. But I want when user type in full location (or AutoComplete do it for him) - that text (location) input is forwarded to a database search which then searches through database and "collects" all rows with user-typed location. For example if user typed in "New York", database search would find all rows with "New York" in it. When it finds one/more row(s) it would display them below. In images... I have this when user is typing... http://www.imagesforme.com/show.php/1093305_SNAG0000.jpg I have this when user choose a AutoComplete location (h)ttp://www.imagesforme.com/show.php/1093306_SNAG0001.jpg (remove () on the beggining) But I wanna this when user choose a AutoComplete location (h)ttp://www.imagesforme.com/show.php/1093307_CopyofSNAG0001.jpg (remove () on the beggining) Complete Code package com.svebee.prijevoz; import android.app.Activity; import android.database.Cursor; import android.database.sqlite.SQLiteDatabase; import android.os.Bundle; import android.util.Log; import android.widget.ArrayAdapter; import android.widget.AutoCompleteTextView; import android.widget.TextView; public class ZelimDoci extends Activity { TextView lista; static final String[] STANICE = new String[] { "New York", "Chicago", "Dallas", "Los Angeles" }; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.zelimdoci); AutoCompleteTextView textView = (AutoCompleteTextView) findViewById(R.id.autocomplete_country); ArrayAdapter<String> adapter = new ArrayAdapter<String>(this, R.layout.list_item, STANICE); textView.setAdapter(adapter); lista = (TextView)findViewById(R.id.lista); SQLiteDatabase myDB= null; String TableName = "Database"; String Data=""; /* Create a Database. */ try { myDB = this.openOrCreateDatabase("Database", MODE_PRIVATE, null); /* Create a Table in the Database. */ myDB.execSQL("CREATE TABLE IF NOT EXISTS " + TableName + " (Field1 INT(3) UNIQUE, Field2 INT(3) UNIQUE, Field3 VARCHAR UNIQUE, Field4 VARCHAR UNIQUE);"); Cursor a = myDB.rawQuery("SELECT * FROM Database where Field1 == 1", null); a.moveToFirst(); if (a == null) { /* Insert data to a Table*/ myDB.execSQL("INSERT INTO " + TableName + " (Field1, Field2, Field3, Field4)" + " VALUES (1, 119, 'New York', 'Dallas');"); myDB.execSQL("INSERT INTO " + TableName + " (Field1, Field2, Field3, Field4)" + " VALUES (9, 587, 'California', 'New York');"); } myDB.execSQL("INSERT INTO " + TableName + " (Field1, Field2, Field3, Field4)" + " VALUES (87, 57, 'Canada', 'London');"); } /*retrieve data from database */ Cursor c = myDB.rawQuery("SELECT * FROM " + TableName , null); int Column1 = c.getColumnIndex("Field1"); int Column2 = c.getColumnIndex("Field2"); int Column3 = c.getColumnIndex("Field3"); int Column4 = c.getColumnIndex("Field4"); // Check if our result was valid. 
c.moveToFirst(); if (c != null) { // Loop through all Results do { String LocationA = c.getString(Column3); String LocationB = c.getString(Column4); int Id = c.getInt(Column1); int Linija = c.getInt(Column2); Data =Data +Id+" | "+Linija+" | "+LocationA+"-"+LocationB+"\n"; }while(c.moveToNext()); } lista.setText(String.valueOf(Data)); } catch(Exception e) { Log.e("Error", "Error", e); } finally { if (myDB != null) myDB.close(); } } } .xml file <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" > <TextView android:layout_width="fill_parent" android:layout_height="wrap_content" android:textSize="20sp" android:gravity="center_horizontal" android:padding="10sp" android:text="Test AutoComplete"/> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="horizontal" android:layout_width="fill_parent" android:layout_height="wrap_content" android:padding="5dp"> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="AutoComplete" /> <AutoCompleteTextView android:id="@+id/autocomplete_country" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_marginLeft="5dp"/> </LinearLayout> </LinearLayout>

    Read the article

  • jQuery Templates with ASP.NET MVC

    - by hajan
    In my three previous blogs, I’ve shown how to use Templates in your ASPX website. Introduction to jQuery TemplatesjQuery Templates - tmpl(), template() and tmplItem()jQuery Templates - {Supported Tags}Now, I will show one real-world example which you may use it in your daily work of developing applications with ASP.NET MVC and jQuery. In the following example I will use Pubs database so that I will retrieve values from the authors table. To access the data, I’m using Entity Framework. Let’s pass throughout each step of the scenario: 1. Create new ASP.NET MVC Web application 2. Add new View inside Home folder but do not select a master page, and add Controller for your View 3. BODY code in the HTML <body>     <div>         <h1>Pubs Authors</h1>         <div id="authorsList"></div>     </div> </body> As you can see  in the body we have only one H1 tag and a div with id authorsList where we will append the data from database.   4. Now, I’ve created Pubs model which is connected to the Pub database and I’ve selected only the authors table in my EDMX model. You can use your own database. 5. Next, lets create one method of JsonResult type which will get the data from database and serialize it into JSON string. public JsonResult GetAuthors() {     pubsEntities pubs = new pubsEntities();     var authors = pubs.authors.ToList();     return Json(authors, JsonRequestBehavior.AllowGet); } So, I’m creating object instance of pubsEntities and get all authors in authors list. Then returning the authors list by serializing it to JSON using Json method. The JsonRequestBehaviour.AllowGet parameter is used to make the GET requests from the client become allowed. By default in ASP.NET MVC 2 the GET is not allowed because of security issue with JSON hijacking.   6. Next, lets create jQuery AJAX function which will call the GetAuthors method. We will use $.getJSON jQuery method. <script language="javascript" type="text/javascript">     $(function () {         $.getJSON("GetAuthors", "", function (data) {             $("#authorsTemplate").tmpl(data).appendTo("#authorsList");         });     }); </script>   Once the web page is downloaded, the method will be called. The first parameter of $.getJSON() is url string in our case the method name. The second parameter (which in the example is empty string) is the key value pairs that will be send to the server, and the third function is the callback function or the result which is going to be returned from the server. Inside the callback function we have code that renders data with template which has id #authorsTemplate and appends it to element which has #authorsList ID.   7. The jQuery Template <script id="authorsTemplate" type="text/html">     <div id="author">         ${au_lname} ${au_fname}         <div id="address">${address}, ${city}</div>         <div id="contractType">                     {{if contract}}             <font color="green">Has contract with the publishing house</font>         {{else}}             <font color="red">Without contract</font>         {{/if}}         <br />         <em> ${printMessage(state)} </em>         <br />                     </div>     </div> </script> As you can see, I have tags containing fields (au_lname, au_fname… etc.) that corresponds to the table in the EDMX model which is the same as in the database. One more thing to note here is that I have printMessage(state) function which is called inside ${ expression/function/field } tag. 
The printMessage function <script language="javascript" type="text/javascript">     function printMessage(s) {         if (s=="CA") return "The author is from California";         else return "The author is not from California";     } </script> So, if state is “CA” print “The author is from California” else “The author is not from California”   HERE IS THE COMPLETE ASPX CODE <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server">     <title>Database Example :: jQuery Templates</title>     <style type="text/css">         body           {             font-family:Verdana,Arial,Courier New, Sans-Serif;             color:Black;             padding:2px, 2px, 2px, 2px;             background-color:#FF9640;         }         #author         {             display:block;             float:left;             text-decoration:none;             border:1px solid black;             background-color:White;             padding:20px 20px 20px 20px;             margin-top:2px;             margin-right:2px;             font-family:Verdana;             font-size:12px;             width:200px;             height:70px;}         #address           {             font-style:italic;             color:Blue;             font-size:12px;             font-family:Verdana;         }         .author_hover {background-color:Yellow;}     </style>     <script src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.min.js" type="text/javascript"></script>     <script src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js" type="text/javascript"></script>     <script language="javascript" type="text/javascript">         function printMessage(s) {             if (s=="CA") return "The author is from California";             else return "The author is not from California";         }     </script>     <script id="authorsTemplate" type="text/html">         <div id="author">             ${au_lname} ${au_fname}             <div id="address">${address}, ${city}</div>             <div id="contractType">                         {{if contract}}                 <font color="green">Has contract with the publishing house</font>             {{else}}                 <font color="red">Without contract</font>             {{/if}}             <br />             <em> ${printMessage(state)} </em>             <br />                         </div>         </div>     </script>     <script language="javascript" type="text/javascript">         $(function () {             $.getJSON("GetAuthors", "", function (data) {                 $("#authorsTemplate").tmpl(data).appendTo("#authorsList");             });         });     </script> </head>     <body>     <div id="title">Pubs Authors</div>     <div id="authorsList"></div> </body> </html> So, in the complete example you also have the CSS style I’m using to stylize the output of my page. Here is print screen of the end result displayed on the web page: You can download the complete source code including examples shown in my previous blog posts about jQuery templates and PPT presentation from my last session I had in the local .NET UG meeting in the following DOWNLOAD LINK. Do let me know your feedback. Regards, Hajan
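One small variation worth considering, offered here as a sketch rather than as part of the original article: instead of serializing the full EF author entities, the GetAuthors action could project only the columns the template actually binds to. This keeps the JSON payload small and avoids pushing unused properties to the client. The field names below are taken from the pubs authors table used above.

public JsonResult GetAuthors()
{
    pubsEntities pubs = new pubsEntities();

    // Project only the fields referenced by the ${...} tags in the template.
    var authors = pubs.authors
        .Select(a => new
        {
            a.au_fname,
            a.au_lname,
            a.address,
            a.city,
            a.state,
            a.contract
        })
        .ToList();

    return Json(authors, JsonRequestBehavior.AllowGet);
}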

    Read the article

  • How do I rotate only some views when working with a uinavigationcontroller as a tab of a uitabbarcon

    - by maxpower
    Here is a flow that I can not figure out how to work. ( when I state (working) it means that in that current state the rules for orientation for that view are working correctly) First View: TableView on the stack of a UINavigationController that is a tab of UITabBarController. TableView is only allowed to be portrait. (working) When you rotate the TableView to landscape a modal comes up with a custom UIView that is like a coverflow (which i'll explain the problem there in a moment). A Selection made on tableview pushes a UIScrollview on to the stack. UIScrollView is allowed all orientations. (working) When UIScrollView is in landscape mode and the user hits back they are taken to the custom UIView that is like the coverflow and only allows landscape. The problem is here. Because the UIScrollView allows full rotation it permitted the TableView to rotate as well to landscape. I have a method attached to a notification "UIDeviceOrientationDidChangeNotification" that checks to see if the custom view is the current controller and if it is and if the user has rotated back to portrait I need to pop the custom view and show the table view. The table view has to rotate back to portrait, which really is okay as long as the user doesn't see it. When I create custom animations it works pretty good except for some odd invisible black box that seems to rotate with the device right before I fade out the customview to the tableview. Further inorder to ensure that my tableview will rotate to portrait I have to allow the customview to support all orientations because the system looks to the current view (in my code) as to whether or not that app is allowed to rotate to a certain orientation. Because of this I many proposed solutions will show the customview rotating to portrait as the table view comes back to focus. My other problem is very similar. If you are viewing the tableview and rotate the modalview of the customview is presented. When you make a selection on this view it pushes the UIScrollview onto the stack, but because the Tableview only supports portrait the UIScrollview comes in in portrait while the device is in landscape mode. How can I overcome these awful blocks? This is my current attempt: When it comes to working with UITabBarController the system really only cares what the tabbarcontroller has to say about rotation. Currently whenever a view loads it reports it supported orientations. 
TabBarController.m - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { switch (self.supportedOrientation) { case SupportPortraitOrientation: [[UIApplication sharedApplication] setStatusBarHidden:NO animated:YES]; return (interfaceOrientation == UIInterfaceOrientationPortrait); break; case SupportPortraitUpsideDownOrientation: [[UIApplication sharedApplication] setStatusBarHidden:NO animated:YES]; return (interfaceOrientation == UIInterfaceOrientationPortraitUpsideDown); break; case SupportPortraitAllOrientation: [[UIApplication sharedApplication] setStatusBarHidden:NO animated:YES]; return (interfaceOrientation == UIInterfaceOrientationPortrait || interfaceOrientation == UIInterfaceOrientationPortraitUpsideDown); break; case SupportLandscapeLeftOrientation: [[UIApplication sharedApplication] setStatusBarHidden:YES animated:YES]; return (interfaceOrientation == UIInterfaceOrientationLandscapeLeft); break; case SupportLandscapeRightOrienation: [[UIApplication sharedApplication] setStatusBarHidden:YES animated:YES]; return (interfaceOrientation == UIInterfaceOrientationLandscapeRight); break; case SupportLandscapeAllOrientation: [[UIApplication sharedApplication] setStatusBarHidden:YES animated:YES]; return (interfaceOrientation == UIInterfaceOrientationLandscapeLeft || interfaceOrientation == UIInterfaceOrientationLandscapeRight); break; case SupportAllOrientation: if (interfaceOrientation == UIInterfaceOrientationLandscapeLeft || interfaceOrientation == UIInterfaceOrientationLandscapeRight) { [[UIApplication sharedApplication] setStatusBarHidden:YES animated:YES]; }else { //[[UIApplication sharedApplication] setStatusBarHidden:NO animated:YES]; } return YES; break; default: return (interfaceOrientation == UIInterfaceOrientationPortrait); break; } } This block of code is part of my UINavigationController and is in a method that responds to the UIDeviceOrientationDidChangeNotification Notification. It is responsible for poping the customview and showing the tableview. There are two different versions in place that originally were for two different versions of the SDK but both are pretty close to solutions. The reason the first is not supported on 3.0 is for some reason you can't have a view showing and then showen as a modal view. Not sure if that is a bug or a feature. The second solution works pretty good except that I see an outer box rotating around the iphone. 
if ([[self topViewController] isKindOfClass:FlowViewController.class]) { NSString *iphoneVersion = [[UIDevice currentDevice] systemVersion]; double version = [iphoneVersion doubleValue]; if(version > 3.0){ //1st solution //if the delivered app is not built with the 3.1 SDK I don't think this will happen anyway //we need to test this [self presentModalViewController:self.flowViewController animated:NO]; //[self toInterfaceOrientation:UIDeviceOrientationPortrait animated:NO]; [self popViewControllerAnimated:NO]; [self setNavigationBarHidden:NO animated:NO]; [self dismissModalViewControllerAnimated:YES]; }else{ //2nd solution DLog(@"3.0!!"); //[self toInterfaceOrientation:UIDeviceOrientationPortrait animated:NO]; CATransition *transition = [CATransition animation]; transition.duration = 0.50; transition.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]; transition.type = kCATransitionPush; transition.subtype = kCATransitionFade; CATransition *tabBarControllerLayer = [CATransition animation]; tabBarControllerLayer.duration = 0.50; tabBarControllerLayer.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]; tabBarControllerLayer.type = kCATransitionPush; tabBarControllerLayer.subtype = kCATransitionFade; [self.tabBarController.view.layer addAnimation:transition forKey:kCATransition]; [self.view.layer addAnimation:transition forKey:kCATransition]; [self popViewControllerAnimated:NO]; [self setNavigationBarHidden:NO animated:NO]; } [self performSelector:@selector(resetFlow) withObject:nil afterDelay:0.75]; } I'm near convinced there is no solution except for manual rotation which messes up the keyboard rotation. Any advice would be appreciated! Thanks.

    Read the article

  • Unable to use factory girl with Cucumber and rails 3 (bundler problem)

    - by jbpros
    Hi there, I'm trying to run cucumber features with factory girl factories on a fresh Rails 3 application. Here is my Gemfile: source "http://gemcutter.org" gem "rails", "3.0.0.beta" gem "pg" gem "factory_girl", :git => "git://github.com/thoughtbot/factory_girl.git", :branch => "rails3" gem "rspec-rails", ">= 2.0.0.beta.4" gem "capybara" gem "database_cleaner" gem "cucumber-rails", :require => false Then the bundle install commande just runs smoothly: $ bundle install /usr/lib/ruby/gems/1.8/gems/bundler-0.9.3/lib/bundler/installer.rb:81:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement Updating git://github.com/thoughtbot/factory_girl.git Fetching source index from http://gemcutter.org Resolving dependencies Installing abstract (1.0.0) from system gems Installing actionmailer (3.0.0.beta) from system gems Installing actionpack (3.0.0.beta) from system gems Installing activemodel (3.0.0.beta) from system gems Installing activerecord (3.0.0.beta) from system gems Installing activeresource (3.0.0.beta) from system gems Installing activesupport (3.0.0.beta) from system gems Installing arel (0.2.1) from system gems Installing builder (2.1.2) from system gems Installing bundler (0.9.13) from system gems Installing capybara (0.3.6) from system gems Installing cucumber (0.6.3) from system gems Installing cucumber-rails (0.3.0) from system gems Installing culerity (0.2.9) from system gems Installing database_cleaner (0.5.0) from system gems Installing diff-lcs (1.1.2) from system gems Installing erubis (2.6.5) from system gems Installing factory_girl (1.2.3) from git://github.com/thoughtbot/factory_girl.git (at rails3) Installing ffi (0.6.3) from system gems Installing i18n (0.3.6) from system gems Installing json_pure (1.2.3) from system gems Installing mail (2.1.3) from system gems Installing memcache-client (1.7.8) from system gems Installing mime-types (1.16) from system gems Installing nokogiri (1.4.1) from system gems Installing pg (0.9.0) from system gems Installing polyglot (0.3.0) from system gems Installing rack (1.1.0) from system gems Installing rack-mount (0.4.7) from system gems Installing rack-test (0.5.3) from system gems Installing rails (3.0.0.beta) from system gems Installing railties (3.0.0.beta) from system gems Installing rake (0.8.7) from system gems Installing rspec (2.0.0.beta.4) from system gems Installing rspec-core (2.0.0.beta.4) from system gems Installing rspec-expectations (2.0.0.beta.4) from system gems Installing rspec-mocks (2.0.0.beta.4) from system gems Installing rspec-rails (2.0.0.beta.4) from system gems Installing selenium-webdriver (0.0.17) from system gems Installing term-ansicolor (1.0.5) from system gems Installing text-format (1.0.0) from system gems Installing text-hyphen (1.0.0) from system gems Installing thor (0.13.4) from system gems Installing treetop (1.4.4) from system gems Installing tzinfo (0.3.17) from system gems Installing webrat (0.7.0) from system gems Your bundle is complete! When I run cucumber, here is the error I get: $ rake cucumber (in /home/jbpros/projects/deorbitburn) /usr/lib/ruby/gems/1.8/gems/bundler-0.9.3/lib/bundler/resolver.rb:97:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. 
Use #requirement NOTICE: CREATE TABLE will create implicit sequence "posts_id_seq" for serial column "posts.id" NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "posts_pkey" for table "posts" /usr/bin/ruby1.8 -I "/usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/lib:lib" "/usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/cucumber" --profile default Using the default profile... git://github.com/thoughtbot/factory_girl.git (at rails3) is not checked out. Please run `bundle install` (Bundler::PathError) /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/source.rb:282:in `load_spec_files' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/source.rb:190:in `local_specs' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:36:in `runtime_gems' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:35:in `each' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:35:in `runtime_gems' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/index.rb:5:in `build' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:34:in `runtime_gems' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:14:in `index' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/index.rb:5:in `build' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:13:in `index' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:55:in `resolve_locally' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:28:in `specs' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:65:in `specs_for' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/environment.rb:23:in `requested_specs' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler/runtime.rb:18:in `setup' /home/jbpros/.bundle/gems/bundler-0.9.13/lib/bundler.rb:68:in `setup' /home/jbpros/projects/deorbitburn/config/boot.rb:7 /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `polyglot_original_require' /usr/lib/ruby/gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require' /home/jbpros/projects/deorbitburn/config/application.rb:1 /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `polyglot_original_require' /usr/lib/ruby/gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require' /home/jbpros/projects/deorbitburn/config/environment.rb:2 /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `polyglot_original_require' /usr/lib/ruby/gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require' /home/jbpros/projects/deorbitburn/features/support/env.rb:8 /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `polyglot_original_require' /usr/lib/ruby/gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require' /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/rb_support/rb_language.rb:124:in `load_code_file' /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:85:in `load_code_file' /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:77:in `load_code_files' /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:76:in `each' 
/usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:76:in `load_code_files' /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/cli/main.rb:48:in `execute!' /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/cli/main.rb:20:in `execute' /usr/lib/ruby/gems/1.8/gems/cucumber-0.6.3/bin/cucumber:8 rake aborted! Command failed with status (1): [/usr/bin/ruby1.8 -I "/usr/lib/ruby/gems/1....] (See full trace by running task with --trace) Do I have to do something special for bundler to check out factory girl's repository on github?

    Read the article

  • Uploadify Minimum Image Width And Height

    - by Richard Knop
    So I am using the Uploadify plugin to allow users to upload multiple images at once. The problem is I need to set a minimum width and height for images. Let's say 150x150px is the smallest image users can upload. How can I set this limitation in the Uploadify plugin? When a user tries to upload a smaller picture, I would like to display some error message as well.
Here is the PHP file that is called by the plugin to upload images: <?php define('BASE_PATH', substr(dirname(dirname(__FILE__)), 0, -22)); // set the include path set_include_path(BASE_PATH . '/../library' . PATH_SEPARATOR . BASE_PATH . '/library' . PATH_SEPARATOR . get_include_path()); // autoload classes from the library function __autoload($class) { include str_replace('_', '/', $class) . '.php'; } $configuration = new Zend_Config_Ini(BASE_PATH . '/application' . '/configs/application.ini', 'development'); $dbAdapter = Zend_Db::factory($configuration->database); Zend_Db_Table_Abstract::setDefaultAdapter($dbAdapter); function _getTable($table) { include BASE_PATH . '/application/modules/default/models/' . $table . '.php'; return new $table(); } $albums = _getTable('Albums'); $media = _getTable('Media'); if (false === empty($_FILES)) { $tempFile = $_FILES['Filedata']['tmp_name']; $extension = end(explode('.', $_FILES['Filedata']['name'])); // insert temporary row into the database $data = array(); $data['type'] = 'photo'; $data['type2'] = 'public'; $data['status'] = 'temporary'; $data['user_id'] = $_REQUEST['user_id']; $paths = $media->add($data, $extension, $dbAdapter); // save the photo move_uploaded_file($tempFile, BASE_PATH . '/public/' . $paths[0]); // create a thumbnail include BASE_PATH . '/library/My/PHPThumbnailer/ThumbLib.inc.php'; $thumb = PhpThumbFactory::create(BASE_PATH . '/public/' . $paths[0]); $thumb->adaptiveResize(85, 85); $thumb->save(BASE_PATH . '/public/' . $paths[1]); // add watermark to the bottom right corner $pathToFullImage = BASE_PATH . '/public/' . $paths[0]; $size = getimagesize($pathToFullImage); switch ($extension) { case 'gif': $im = imagecreatefromgif($pathToFullImage); break; case 'jpg': $im = imagecreatefromjpeg($pathToFullImage); break; case 'png': $im = imagecreatefrompng($pathToFullImage); break; } if (false !== $im) { $white = imagecolorallocate($im, 255, 255, 255); $font = BASE_PATH . '/public/fonts/arial.ttf'; imagefttext($im, 13, // font size 0, // angle $size[0] - 132, // x axis (top left is [0, 0]) $size[1] - 13, // y axis $white, $font, 'HunnyHive.com'); switch ($extension) { case 'gif': imagegif($im, $pathToFullImage); break; case 'jpg': imagejpeg($im, $pathToFullImage, 100); break; case 'png': imagepng($im, $pathToFullImage, 0); break; } imagedestroy($im); } echo "1"; }
And here's the javascript: $(document).ready(function() { $('#photo').uploadify({ 'uploader' : '/flash-uploader/scripts/uploadify.swf', 'script' : '/flash-uploader/scripts/upload-public-photo.php', 'cancelImg' : '/flash-uploader/cancel.png', 'scriptData' : {'user_id' : 'USER_ID'}, 'queueID' : 'fileQueue', 'auto' : true, 'multi' : true, 'sizeLimit' : 2097152, 'fileExt' : '*.jpg;*.jpeg;*.gif;*.png', 'wmode' : 'transparent', 'onComplete' : function() { $.get('/my-account/temporary-public-photos', function(data) { $('#temporaryPhotos').html(data); }); } }); $('#upload_public_photo').hover(function() { var titles = '{'; $('.title').each(function() { var title = $(this).val(); if ('Title...' != title) { var id = $(this).attr('name'); id = id.substr(5); title = jQuery.trim(title); if (titles.length > 1) { titles += ','; } titles += '"' + id + '"' + ':"' + title + '"'; } }); titles += '}'; $('#titles').val(titles); }); });
Now bear in mind that I know how to check image dimensions in the PHP file. But I'm not sure how to modify the javascript so it won't upload images with very small dimensions.
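One possible direction (a sketch under assumptions, not a verified answer): the Flash-based Uploadify 2.x cannot easily measure an image before it is sent, so the usual workaround is to let the PHP script reject the file — for example, list($width, $height) = getimagesize($tempFile); if ($width < 150 || $height < 150) { echo 'ERROR:min_size'; exit; } before move_uploaded_file() — and then surface that response in the JavaScript callback. The snippet below assumes the Uploadify 2.x onComplete signature, where the raw server response arrives as the fourth argument:

    // Sketch only: assumes the PHP script echoes 'ERROR:min_size' instead of '1'
    // when the uploaded image is smaller than 150x150px, and assumes the
    // Uploadify 2.x onComplete signature (event, ID, fileObj, response, data).
    'onComplete' : function(event, ID, fileObj, response, data) {
        if (response.indexOf('ERROR') !== -1) {
            // the server rejected the file because it was too small
            alert(fileObj.name + ' is smaller than 150x150px and was not saved.');
            return;
        }
        // otherwise refresh the temporary photos list as before
        $.get('/my-account/temporary-public-photos', function(html) {
            $('#temporaryPhotos').html(html);
        });
    }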

    Read the article

  • What are good design practices when working with Entity Framework

    - by AD
    This will apply mostly to an ASP.NET application where the data is not accessed via SOA, meaning that you get access to the objects loaded from the framework, not Transfer Objects, although some recommendations still apply. This is a community post, so please add to it as you see fit. Applies to: Entity Framework 1.0 shipped with Visual Studio 2008 SP1.
Why pick EF in the first place? Considering it is a young technology with plenty of problems (see below), it may be a hard sell to get on the EF bandwagon for your project. However, it is the technology Microsoft is pushing (at the expense of Linq2Sql, which is a subset of EF). In addition, you may not be satisfied with NHibernate or other solutions out there. Whatever the reasons, there are people out there (including me) working with EF and life is not that bad.
EF and inheritance The first big subject is inheritance. EF does support mapping for inherited classes that are persisted in 2 ways: table per class and table per hierarchy. The modeling is easy and there are no programming issues with that part. (The following applies to the table-per-class model as I don't have experience with table per hierarchy, which is, anyway, limited.) The real problem comes when you are trying to run queries that include one or many objects that are part of an inheritance tree: the generated SQL is incredibly awful, takes a long time to get parsed by the EF and takes a long time to execute as well. This is a real show stopper, enough that EF should probably not be used with inheritance, or with as little of it as possible. Here is an example of how bad it was. My EF model had ~30 classes, ~10 of which were part of an inheritance tree. On running a query to get one item from the Base class, something as simple as Base.Get(id), the generated SQL was over 50,000 characters. Then when you are trying to return some Associations, it degenerates even more, going as far as throwing SQL exceptions about not being able to query more than 256 tables at once. Ok, this is bad; the EF concept is to allow you to create your object structure without (or with as little as possible) consideration of the actual database implementation of your tables. It completely fails at this. So, recommendations? Avoid inheritance if you can, the performance will be so much better. Use it sparingly where you have to. In my opinion, this makes EF a glorified SQL-generation tool for querying, but there are still advantages to using it. And there are ways to implement mechanisms that are similar to inheritance.
Bypassing inheritance with Interfaces The first thing to know when trying to get some kind of inheritance going with EF is that you cannot assign a non-EF-modeled class as a base class. Don't even try it, it will get overwritten by the modeler. So what to do? You can use interfaces to enforce that classes implement some functionality. For example, here is an IEntity interface that allows you to define Associations between EF entities where you don't know at design time what the type of the entity would be. public enum EntityTypes{ Unknown = -1, Dog = 0, Cat } public interface IEntity { int EntityID { get; } string Name { get; } EntityTypes EntityType { get; } } public partial class Dog : IEntity { // implement EntityID and Name which could actually be fields // from your EF model public EntityTypes EntityType { get { return EntityTypes.Dog; } } } Using this IEntity, you can then work with undefined associations in other classes // let's take a class that you defined in your model.
// that class has a mapping to the columns: PetID, PetType public partial class Person { public IEntity GetPet() { return IEntityController.Get(PetID,PetType); } } which makes use of some extension functions: public class IEntityController { static public IEntity Get(int id, EntityTypes type) { switch (type) { case EntityTypes.Dog: return Dog.Get(id); case EntityTypes.Cat: return Cat.Get(id); default: throw new Exception("Invalid EntityType"); } } } Not as neat as having plain inheritance, particularly considering you have to store the PetType in an extra database field, but considering the performance gains, I would not look back. It also cannot model one-to-many or many-to-many relationships, but with creative uses of 'Union' it could be made to work. Finally, it creates the side effect of loading data in a property/function of the object, which you need to be careful about. Using a clear naming convention like GetXYZ() helps in that regard.
Compiled Queries Entity Framework performance is not as good as direct database access with ADO (obviously) or Linq2SQL. There are ways to improve it however, one of which is compiling your queries. The performance of a compiled query is similar to Linq2Sql. What is a compiled query? It is simply a query for which you tell the framework to keep the parsed tree in memory so it doesn't need to be regenerated the next time you run it. So the next run, you will save the time it takes to parse the tree. Do not discount that, as it is a very costly operation that gets even worse with more complex queries. There are 2 ways to compile a query: creating an ObjectQuery with EntitySQL and using the CompiledQuery.Compile() function. (Note that by using an EntityDataSource in your page, you will in fact be using ObjectQuery with EntitySQL, so that gets compiled and cached). An aside here in case you don't know what EntitySQL is. It is a string-based way of writing queries against the EF. Here is an example: "select value dog from Entities.DogSet as dog where dog.ID = @ID". The syntax is pretty similar to SQL syntax. You can also do pretty complex object manipulation, which is well explained elsewhere. Ok, so here is how to do it using ObjectQuery<T>: string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); The first time you run this query, the framework will generate the expression tree and keep it in memory. So the next time it gets executed, you will save on that costly step. In that example EnablePlanCaching = true, which is unnecessary since that is the default option. The other way to compile a query for later use is the CompiledQuery.Compile method. This uses a delegate: static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => ctx.DogSet.FirstOrDefault(it => it.ID == id)); or using Linq static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet where dog.ID == id select dog).FirstOrDefault()); to call the query: query_GetDog.Invoke( YourContext, id ); The advantage of CompiledQuery is that the syntax of your query is checked at compile time, whereas EntitySQL is not. However, there are other considerations...
Includes Let's say you want to have the data for the dog owner returned by the query to avoid making 2 calls to the database. Easy to do, right? EntitySQL string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance).Include("Owner"); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); CompiledQuery static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet.Include("Owner") where dog.ID == id select dog).FirstOrDefault()); Now, what if you want to have the Include parametrized? What I mean is that you want to have a single Get() function that is called from different pages that care about different relationships for the dog. One cares about the Owner, another about his FavoriteFood, another about his FavoriteToy and so on. Basically, you want to tell the query which associations to load. It is easy to do with EntitySQL public Dog Get(int id, string include) { string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance) .IncludeMany(include); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); } The include simply uses the passed string. Easy enough. Note that it is possible to improve on the Include(string) function (that accepts only a single path) with an IncludeMany(string) that will let you pass a string of comma-separated associations to load. Look further in the extension section for this function (a sketch of one possible implementation is also shown further down). If we try to do it with CompiledQuery however, we run into numerous problems: The obvious static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.Include(include) where dog.ID == id select dog).FirstOrDefault()); will choke when called with: query_GetDog.Invoke( YourContext, id, "Owner,FavoriteFood" ); Because, as mentioned above, Include() only wants to see a single path in the string and here we are giving it 2: "Owner" and "FavoriteFood" (which is not to be confused with "Owner.FavoriteFood"!). Then, let's use IncludeMany(), which is an extension function static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.IncludeMany(include) where dog.ID == id select dog).FirstOrDefault()); Wrong again, this time it is because the EF cannot parse IncludeMany, because it is not part of the functions that it recognizes: it is an extension. Ok, so you want to pass an arbitrary number of paths to your function and Include() only takes a single one. What to do? You could decide that you will never ever need more than, say, 20 Includes, and pass each as separate strings in a struct to CompiledQuery. But now the query looks like this: from dog in ctx.DogSet.Include(include1).Include(include2).Include(include3) .Include(include4).Include(include5).Include(include6) .[...].Include(include19).Include(include20) where dog.ID == id select dog which is awful as well. Ok, then, but wait a minute. Can't we return an ObjectQuery<T> with CompiledQuery? Then set the includes on that?
Well, that's what I would have thought as well: static readonly Func<Entities, int, ObjectQuery<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, ObjectQuery<Dog>>((ctx, id) => (ObjectQuery<Dog>)(from dog in ctx.DogSet where dog.ID == id select dog)); public Dog GetDog( int id, string include ) { ObjectQuery<Dog> oQuery = query_GetDog(YourContext, id); oQuery = oQuery.IncludeMany(include); return oQuery.FirstOrDefault(); } That should have worked, except that when you call IncludeMany (or Include, Where, OrderBy...) you invalidate the cached compiled query because it is an entirely new one now! So, the expression tree needs to be reparsed and you get that performance hit again. So what is the solution? You simply cannot use CompiledQueries with parametrized Includes. Use EntitySQL instead. This doesn't mean that there aren't uses for CompiledQueries. It is great for localized queries that will always be called in the same context. Ideally CompiledQuery should always be used because the syntax is checked at compile time, but due to this limitation, that's not possible. An example of use would be: you may want to have a page that queries which two dogs have the same favorite food, which is a bit narrow for a BusinessLayer function, so you put it in your page and know exactly what type of includes are required.
Passing more than 3 parameters to a CompiledQuery Func is limited to 5 parameters, of which the last one is the return type and the first one is your Entities object from the model. So that leaves you with 3 parameters. A pittance, but it can be improved on very easily. public struct MyParams { public string param1; public int param2; public DateTime param3; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where dog.Age == myParams.param2 && dog.Name == myParams.param1 && dog.BirthDate > myParams.param3 select dog); public List<Dog> GetSomeDogs( int age, string name, DateTime birthDate ) { MyParams myParams = new MyParams(); myParams.param1 = name; myParams.param2 = age; myParams.param3 = birthDate; return query_GetDog(YourContext,myParams).ToList(); }
Return Types (this does not apply to EntitySQL queries as they aren't compiled at the same time during execution as the CompiledQuery method) Working with Linq, you usually don't force the execution of the query until the very last moment, in case some other functions downstream want to change the query in some way: static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public IEnumerable<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name); } public void DataBindStuff() { IEnumerable<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } What is going to happen here? By still playing with the original ObjectQuery (that is the actual return type of the Linq statement, which implements IEnumerable), it will invalidate the compiled query and be forced to re-parse. So, the rule of thumb is to return a List<T> of objects instead.
static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public List<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name).ToList(); //<== change here } public void DataBindStuff() { List<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } When you call ToList(), the query gets executed as per the compiled query and then, later, the OrderBy is executed against the objects in memory. It may be a little bit slower, but I'm not even sure. One sure thing is that you have no worries about mis-handling the ObjectQuery and invalidating the compiled query plan. Once again, that is not a blanket statement. ToList() is a defensive programming trick, but if you have a valid reason not to use ToList(), go ahead. There are many cases in which you would want to refine the query before executing it.
Performance What is the performance impact of compiling a query? It can actually be fairly large. A rule of thumb is that compiling and caching the query for reuse takes at least double the time of simply executing it without caching. For complex queries (read inheritance), I have seen upwards of 10 seconds. So, the first time a pre-compiled query gets called, you get a performance hit. After that first hit, performance is noticeably better than the same non-pre-compiled query, practically the same as Linq2Sql. When you load a page with pre-compiled queries the first time you will get a hit. It will load in maybe 5-15 seconds (obviously more than one pre-compiled query will end up being called), while subsequent loads will take less than 300ms. Dramatic difference, and it is up to you to decide if it is ok for your first user to take a hit or you want a script to call your pages to force a compilation of the queries.
Can this query be cached? { Dog dog = (from d in YourContext.DogSet where d.ID == id select d).FirstOrDefault(); } No, ad-hoc Linq queries are not cached and you will incur the cost of generating the tree every single time you call it.
Parametrized Queries Most search capabilities involve heavily parametrized queries. There are even libraries available that will let you build a parametrized query out of lambda expressions. The problem is that you cannot use pre-compiled queries with those. One way around that is to map out all the possible criteria in the query and flag which ones you want to use: public struct MyParams { public string name; public bool checkName; public int age; public bool checkAge; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where (myParams.checkAge == false || dog.Age == myParams.age) && (myParams.checkName == false || dog.Name == myParams.name ) select dog); protected List<Dog> GetSomeDogs() { MyParams myParams = new MyParams(); myParams.name = "Bud"; myParams.checkName = true; myParams.age = 0; myParams.checkAge = false; return query_GetDog(YourContext,myParams).ToList(); } The advantage here is that you get all the benefits of a pre-compiled query.
The disadvantages are that you most likely will end up with a where clause that is pretty difficult to maintain, that you will incur a bigger penalty for pre-compiling the query and that each query you run is not as efficient as it could be (particularly with joins thrown in). Another way is to build an EntitySQL query piece by piece, like we all did with SQL. protected List<Dog> GetSomeDogs( string name, int age) { string query = "select value dog from Entities.DogSet where 1 = 1 "; if( !String.IsNullOrEmpty(name) ) query = query + " and dog.Name == @Name "; if( age > 0 ) query = query + " and dog.Age == @Age "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); if( !String.IsNullOrEmpty(name) ) oQuery.Parameters.Add( new ObjectParameter( "Name", name ) ); if( age > 0 ) oQuery.Parameters.Add( new ObjectParameter( "Age", age ) ); return oQuery.ToList(); } Here the problems are:
- there is no syntax checking during compilation
- each different combination of parameters generates a different query which will need to be pre-compiled when it is first run. In this case, there are only 4 different possible queries (no params, age-only, name-only and both params), but you can see that there can be way more with a real-world search.
- No one likes to concatenate strings!
Another option is to query a large subset of the data and then narrow it down in memory. This is particularly useful if you are working with a definite subset of the data, like all the dogs in a city. You know there are a lot but you also know there aren't that many... so your CityDog search page can load all the dogs for the city in memory, which is a single pre-compiled query, and then refine the results: protected List<Dog> GetSomeDogs( string name, int age, string city) { string query = "select value dog from Entities.DogSet where dog.Owner.Address.City == @City "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); oQuery.Parameters.Add( new ObjectParameter( "City", city ) ); List<Dog> dogs = oQuery.ToList(); if( !String.IsNullOrEmpty(name) ) dogs = dogs.Where( it => it.Name == name ).ToList(); if( age > 0 ) dogs = dogs.Where( it => it.Age == age ).ToList(); return dogs; } It is particularly useful when you start by displaying all the data and then allow for filtering. Problems:
- Could lead to serious data transfer if you are not careful about your subset.
- You can only filter on the data that you returned. It means that if you don't return the Dog.Owner association, you will not be able to filter on Dog.Owner.Name.
So what is the best solution? There isn't any. You need to pick the solution that works best for you and your problem:
- Use lambda-based query building when you don't care about pre-compiling your queries.
- Use a fully-defined pre-compiled Linq query when your object structure is not too complex.
- Use EntitySQL/string concatenation when the structure could be complex and when the possible number of different resulting queries is small (which means fewer pre-compilation hits).
- Use in-memory filtering when you are working with a smallish subset of the data or when you had to fetch all of the data at first anyway (if the performance is fine with all the data, then filtering in memory will not cause any time to be spent in the db).
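As an aside, the IncludeMany(string) extension referred to in the Includes discussion above lives in an extension section that is not part of this excerpt. A minimal sketch of what such an extension could look like against EF 1.0's ObjectQuery<T>.Include(string) (my own reconstruction under that assumption, not the author's original code):

    // Sketch of a comma-separated Include helper.
    // Requires: using System; using System.Data.Objects; (EF 1.0 / .NET 3.5)
    public static class ObjectQueryExtensions
    {
        public static ObjectQuery<T> IncludeMany<T>(this ObjectQuery<T> query, string includes)
        {
            if (String.IsNullOrEmpty(includes))
                return query;
            // "Owner,FavoriteFood" becomes .Include("Owner").Include("FavoriteFood")
            foreach (string path in includes.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries))
            {
                query = query.Include(path.Trim());
            }
            return query;
        }
    }

Each Include() call returns a new ObjectQuery<T>, so the helper simply chains them and hands back the final query.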
Singleton access The best way to deal with your context and entities across all your pages is to use the singleton pattern: public sealed class YourContext { private const string instanceKey = "On3GoModelKey"; YourContext(){} public static YourEntities Instance { get { HttpContext context = HttpContext.Current; if( context == null ) return Nested.instance; if (context.Items[instanceKey] == null) { YourEntities entity = new YourEntities(); context.Items[instanceKey] = entity; } return (YourEntities)context.Items[instanceKey]; } } class Nested { // Explicit static constructor to tell C# compiler // not to mark type as beforefieldinit static Nested() { } internal static readonly YourEntities instance = new YourEntities(); } }
NoTracking, is it worth it? When executing a query, you can tell the framework to track the objects it will return or not. What does it mean? With tracking enabled (the default option), the framework will track what is going on with the object (has it been modified? Created? Deleted?) and will also link objects together when further queries are made from the database, which is what is of interest here. For example, let's assume that Dog with ID == 2 has an owner whose ID == 10. Dog dog = (from dog in YourContext.DogSet where dog.ID == 2 select dog).FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Person owner = (from o in YourContext.PersonSet where o.ID == 10 select o).FirstOrDefault(); //dog.OwnerReference.IsLoaded == true; If we were to do the same with no tracking, the result would be different. ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog = oDogQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>) (from o in YourContext.PersonSet where o.ID == 10 select o); oPersonQuery.MergeOption = MergeOption.NoTracking; Person owner = oPersonQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Tracking is very useful and in a perfect world without performance issues, it would always be on. But in this world, there is a price for it, in terms of performance. So, should you use NoTracking to speed things up? It depends on what you are planning to use the data for. Is there any chance that the data you query with NoTracking can be used to make updates/inserts/deletes in the database? If so, don't use NoTracking because associations are not tracked and that will cause exceptions to be thrown. In a page where there are absolutely no updates to the database, you can use NoTracking. Mixing tracking and NoTracking is possible, but it requires you to be extra careful with updates/inserts/deletes. The problem is that if you mix them you risk having the framework trying to Attach() a NoTracking object to the context where another copy of the same object exists with tracking on. Basically, what I am saying is that Dog dog1 = (from dog in YourContext.DogSet where dog.ID == 2 select dog).FirstOrDefault(); ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog2 = oDogQuery.FirstOrDefault(); dog1 and dog2 are 2 different objects, one tracked and one not. Using the detached object in an update/insert will force an Attach() that will say "Wait a minute, I do already have an object here with the same database key. Fail".
And when you Attach() one object, all of its hierarchy gets attached as well, causing problems everywhere. Be extra careful.
How much faster is it with NoTracking? It depends on the queries. Some are much more susceptible to tracking than others. I don't have a fast and easy rule for it, but it helps.
So I should use NoTracking everywhere then? Not exactly. There are some advantages to tracking objects. The first one is that the object is cached, so subsequent calls for that object will not hit the database. That cache is only valid for the lifetime of the YourEntities object, which, if you use the singleton code above, is the same as the page lifetime. One page request == one YourEntities object. So for multiple calls for the same object, it will load only once per page request. (Other caching mechanisms could extend that). What happens when you are using NoTracking and try to load the same object multiple times? The database will be queried each time, so there is an impact there. How often do/should you call for the same object during a single page request? As little as possible of course, but it does happen. Also remember the piece above about having the associations connected automatically for you? You don't have that with NoTracking, so if you load your data in multiple batches, you will not have a link between them: ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)(from dog in YourContext.DogSet select dog); oDogQuery.MergeOption = MergeOption.NoTracking; List<Dog> dogs = oDogQuery.ToList(); ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)(from o in YourContext.PersonSet select o); oPersonQuery.MergeOption = MergeOption.NoTracking; List<Person> owners = oPersonQuery.ToList(); In this case, no dog will have its .Owner property set. Some things to keep in mind when you are trying to optimize the performance.
No lazy loading, what am I to do? This can be seen as a blessing in disguise. Of course it is annoying to load everything manually. However, it decreases the number of calls to the db and forces you to think about when you should load data. The more you can load in one database call the better. That was always true, but it is enforced now with this 'feature' of EF. Of course, you can call if( !ObjectReference.IsLoaded ) ObjectReference.Load(); if you want to, but a better practice is to force the framework to load the objects you know you will need in one shot. This is where the discussion about parametrized Includes begins to make sense. Let's say you have your Dog object public class Dog { public Dog Get(int id) { return YourContext.DogSet.FirstOrDefault(it => it.ID == id ); } } This is the type of function you work with all the time. It gets called from all over the place and once you have that Dog object, you will do very different things to it in different functions. First, it should be pre-compiled, because you will call that very often. Second, each different page will want to have access to a different subset of the Dog data. Some will want the Owner, some the FavoriteToy, etc. Of course, you could call Load() for each reference you need anytime you need one. But that will generate a call to the database each time. Bad idea. So instead, each page will ask for the data it wants to see when it first requests the Dog object: static public Dog Get(int id) { return Get(id,"");} static public Dog Get(int id, string includePath) { string query = "select value o " + " from YourEntities.DogSet as o " +

    Read the article

  • Hibernate mapping one-to-many problem

    - by Xorty
    Hello, I am not very experienced with Hibernate and I am trying to create a one-to-many mapping. Here are the relevant tables: And here are my mapping files: <hibernate-mapping package="com.xorty.mailclient.server.domain"> <class name="Attachment" table="Attachment"> <id name="id"> <column name="idAttachment"></column> </id> <property name="filename"> <column name="name"></column> </property> <property name="blob"> <column name="file"></column> <type name="blob"></type> </property> <property name="mailId"> <column name="mail_idmail"></column> </property> </class> </hibernate-mapping> <hibernate-mapping> <class name="com.xorty.mailclient.server.domain.Mail" table="mail"> <id name="id" type="integer" column="idmail"></id> <property name="content"> <column name="body"></column> </property> <property name="ownerAddress"> <column name="account_address"></column> </property> <property name="title"> <column name="head"></column> </property> <set name="receivers" table="mail_has_contact" cascade="all"> <key column="mail_idmail"></key> <many-to-many column="contact_address" class="com.xorty.mailclient.client.domain.Contact"></many-to-many> </set> <list name="attachments" cascade="save-update, delete" inverse="true"> <key column="mail_idmail" not-null="true"/> <index column="fk_Attachment_mail1"></index> <one-to-many class="com.xorty.mailclient.server.domain.Attachment"/> </list> </class> </hibernate-mapping> In plain English, one mail has many attachments. When I try to do CRUD on a mail without attachments, everything works just fine. When I add an attachment to a mail, I cannot perform any CRUD operation. I end up with the following trace: org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:96) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:275) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:268) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:184) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1216) at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:383) at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:133) at domain.DatabaseTest.testPersistMailWithAttachment(DatabaseTest.java:355) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at junit.framework.TestCase.runTest(TestCase.java:168) at junit.framework.TestCase.runBare(TestCase.java:134) at junit.framework.TestResult$1.protect(TestResult.java:110) at junit.framework.TestResult.runProtected(TestResult.java:128) at junit.framework.TestResult.run(TestResult.java:113) at junit.framework.TestCase.run(TestCase.java:124) at junit.framework.TestSuite.runTest(TestSuite.java:232) at junit.framework.TestSuite.run(TestSuite.java:227) at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:49) at 
org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) Caused by: java.sql.BatchUpdateException: Cannot add or update a child row: a foreign key constraint fails (`maildb`.`attachment`, CONSTRAINT `fk_Attachment_mail1` FOREIGN KEY (`mail_idmail`) REFERENCES `mail` (`idmail`) ON DELETE NO ACTION ON UPDATE NO ACTION) at com.mysql.jdbc.PreparedStatement.executeBatchSerially(PreparedStatement.java:1666) at com.mysql.jdbc.PreparedStatement.executeBatch(PreparedStatement.java:1082) at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70) at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268) ... 27 more Thank you
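One common cause of this particular constraint failure (offered here as a guess, not a verified fix) is that the <list> is marked inverse="true" while the Attachment mapping only carries mail_idmail as a plain mailId property, so neither side ever writes the foreign key when the attachment row is inserted. A sketch of how the Attachment side could own the association instead:

    <!-- Hypothetical adjustment, not the original mapping: let Attachment own the FK.
         The blob property is omitted here for brevity. -->
    <hibernate-mapping package="com.xorty.mailclient.server.domain">
        <class name="Attachment" table="Attachment">
            <id name="id">
                <column name="idAttachment"/>
            </id>
            <property name="filename">
                <column name="name"/>
            </property>
            <!-- replaces the plain mailId property with a many-to-one back to Mail -->
            <many-to-one name="mail" class="Mail" column="mail_idmail" not-null="true"/>
        </class>
    </hibernate-mapping>

With inverse="true" on the Mail side, the Java code would then have to set attachment.setMail(mail) before saving, so Hibernate has a value to write into mail_idmail and the foreign key constraint is satisfied.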

    Read the article
