Search Results

Search found 1528 results on 62 pages for 'neil smith'.

Page 46/62 | < Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >

  • How to install Uniconverter (command-line app) on Mac OS 10.7.2 (Lion)?

    - by RecentlyAFish
    Uniconverter is a command-line tool that shares code with the sK1 Project. It's used to convert from one type of vector graphic file to another, like this: uniconverter before.eps after.svg I'm looking for a step-by-step solution to install this tool on my laptop. A similar question posted on the Uniconverter Forum back in August is still unanswered. I read about Uniconverter in an answer posted by Neil but don't grok how to send him a message directly for more details.

    Read the article

  • Performance Gains using Indexed Views and Computed Columns

    - by NeilHambly
    Hello. This is a quick follow-up blog to the presentation I gave last night at the London UG Meeting (17th March 2010). It was a great evening and we had a big full house (over 120 registered for this event). Due to the time constraints we had, I was unable to spend enough time on this topic to really do it justice or address the myriad of questions that arose from the session. I will be gathering all my material and putting up a comprehensive blog entry on this topic in the next couple of days. In the meantime, here are the slides from last night if you want to review them again or if you were not at the meeting. If you wish to contact me then please feel free to send me emails @ Neil[email protected] Finally, a quick thanks to Tony Rogerson for allowing me to be a presenter last night (so we know who we can blame!) and to all the other presenters for their support. Watch this space, folks - more to follow soon.
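
    The slides are not reproduced in this excerpt, so purely as an illustration of the objects the title refers to, here is a minimal T-SQL sketch of a persisted computed column and an indexed view. The table and column names are invented for the example and are not taken from the presentation.

        -- Illustrative sketch only: a persisted computed column plus a schema-bound
        -- view materialised by a unique clustered index (an "indexed view").
        CREATE TABLE dbo.OrderLines
        (
            OrderLineId INT IDENTITY(1,1) PRIMARY KEY,
            ProductId   INT   NOT NULL,
            Quantity    INT   NOT NULL,
            UnitPrice   MONEY NOT NULL,
            -- Computed column; ISNULL keeps it non-nullable so it can be indexed.
            LineTotal   AS ISNULL(Quantity * UnitPrice, 0) PERSISTED
        );
        GO

        -- A plain nonclustered index on the computed column.
        CREATE INDEX IX_OrderLines_LineTotal ON dbo.OrderLines (LineTotal);
        GO

        -- Schema binding and COUNT_BIG(*) are required before the view can be indexed.
        CREATE VIEW dbo.vProductSales
        WITH SCHEMABINDING
        AS
        SELECT ProductId,
               SUM(Quantity * UnitPrice) AS TotalSales,
               COUNT_BIG(*)              AS LineCount
        FROM dbo.OrderLines
        GROUP BY ProductId;
        GO

        -- The unique clustered index is what turns the view into an indexed view, so the
        -- aggregate is stored and maintained rather than recomputed on every query.
        CREATE UNIQUE CLUSTERED INDEX IX_vProductSales ON dbo.vProductSales (ProductId);
        GO

    Note that on editions other than Enterprise the optimizer generally uses such a view only with an explicit NOEXPAND hint.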

    Read the article

  • What's new in My Life: Robotics, Azure

    - by sonam
    AZURE: I haven't blogged in a long time. I was actually busy doing some Azure work. For anyone starting with Azure, I would recommend going with Neil: http://nmackenzie.spaces.live.com/Blog/cns!B863FF075995D18A!564.entry Awesome content. Another thing that has caught my interest: robotics. Yes, I am finally reading up on robotics, especially mobile robotics. Since I don't have a professor to guide me yet, I am doing it independently by reading research papers and books. My first robot is not autonomous; I am actually making it for RoboWars. I got inspired by this video of Steve Jobs, and I think I love working on robotics. Perhaps that's my love. http://www.youtube.com/watch?v=Hd_ptbiPoXM Cya

    Read the article

  • New e-learning course on Business Intelligence

    - by simonsabin
    I just got this from fellow SQL MVP Chris Testa O'Neil: "I am pleased to announce the release of the Author Model eCourseCollection 6233 AE: Implementing and Maintaining Business Intelligence in Microsoft® SQL Server® 2008: Integration Services, Reporting Services and Analysis Services. This 24-hour collection provides you with the skills and knowledge required for implementing and maintaining business intelligence solutions on SQL Server 2008. You will learn about the SQL Server technologies, such as Integration Services, Analysis Services, and Reporting Services. This collection also helps students to prepare for Exam 70-448 and can be accessed from: http://www.microsoft.com/learning/elearning/course/6233.mspx

    Read the article

  • Welcome to Jackstown

    - by fatherjack
    I live in a small town, the population count isn't that great but let me introduce you to some of the population. We'll start with Martin the Doc, he fixes up anything that gets poorly, so much so that he could be classed as the doctor, the vet and even the garage mechanic. He's got a reputation that he can fix anything and that hasn't been proved wrong yet. He's great friends with Brian (who gets called "Brains"), the teacher, who seems to have a sound understanding of any topic you care to pass his way. If he isn't sure he tells you, and then goes to find out and comes back with a full answer real quick. It's good to have that sort of research capability close at hand. Brains is also great at encouraging anyone who needs a bit of support to get them up to speed and working on their jobs. Steve sees Brains regularly; that's because he is the librarian. He keeps all sorts of reading material and nowadays there's even video to watch about any topic you like. Steve keeps scouring all sorts of places to get the content that's needed and he keeps it in good order so that whatever is needed can be found quickly. He also has to make sure that old stuff gets marked as probably out of date so that anyone reading it won't get misled. Over the road from him is Greg, he's the town crier. We don't have a newspaper here so Greg keeps us all informed of what's going on "out of town" - what new stuff we might make use of and what won't work in a small place like this. If we are interested he goes ahead and gets people in to demonstrate their products and tell us about the details. Greg is pretty good at getting us discounts too. Now Greg's brother Ian works for the mayor's office in the "waste management department"; nowadays it's all about recycling, but he still has to make sure that the stuff that can't be used any more gets disposed of properly. The type of waste he's dealing with decides how it needs to be treated, and he has to know a lot about the different methods and when to use which ones. There are two people that keep the peace in town: Brent is the detective, investigating wrongdoings and applying justice where necessary, and Bart is the diplomat who smooths things over when any people have a dispute or disagreement. Brent is meticulous in his investigations and fair in the way he handles any situation he finds. Discretion is his byword. There's a rumour that Bart used to work for the United Nations, but whatever his history there is no denying his ability to get apparently irreconcilable parties working together to their combined benefit. Someone who works closely with Bart is Brad, he is the translator in town. He has several languages that he can converse in, but he can also explain things from someone's point of view and make it understandable to someone else. To keep things on the straight and narrow from a legal perspective is Ben the solicitor, making sure we all abide by the rules. Two people who make for an interesting evening's conversation if you get them together are Aaron and Grant: Aaron is the local planning inspector and Grant is an inventor of some reputation. Anything being constructed around here needs Aaron's agreement. He's quite flexible in his rules though, if you can justify what you want to do with solid logic, but he won't stand for any development going on without his inclusion. That gets a demolition notice and there's no argument. Grant, as I mentioned, is the inventor in town; if something can be improved or created then Grant is your man.
He mainly works on his own but isn't averse to getting specific advice and assistance from specialists from out of town if they can help him finish his creations. There aren't too many people left for you to meet in the town. There's Rob, he's an ex-professional sportsman. He played hockey, football, cricket, you name it. He was in his element as goalkeeper / wicketkeeper and that shows in his personal life. He just goes about his business and people often don't even know that he's helped them. Really low profile, doesn't get any glory, but saves people from lots of problems, even disasters on occasion. There goes Neil, he's a bit of an odd person; some people say he's gifted with special clairvoyant powers, but personally I think he's got his ear to the ground and knows where to find out the important news as soon as it's made public. Anyone getting a visit from Neil is best off following his advice though; he's usually spot on and you won't be caught by surprise if you follow his recommendations, wherever they come from. Poor old Andrew is the last person to introduce you to. Andrew doesn't show himself too often, but when he does it seems that people find a reason to blame him for their problems, whether he had anything to do with their predicament or not. In all honesty, without fail, and to his great credit, he takes it in good grace and never retaliates or gets annoyed when he's out and about. It pays off too, as it's very often the case that those who were blaming him recently suddenly find they need his help, and they readily forget the issues pretty rapidly. And then there's me, what do I do in town? Well, I'm just a DBA with a lot of hats. (Jackstown Pop. 1)

    Read the article

  • It's 2011-Do You Know Where Your Children Are?

    - by andyleonard
    Introduction This is not a post about children. I was feeling plucky when I wrote this post at the end of last year. Sometimes when I feel plucky I'm inspired to create awesome blog post titles and ideas. Other times, this happens. 2011 Is Here! I was born in 1963. As a child I watched Neil Armstrong and Buzz Aldrin walk on the moon while Walter Cronkite narrated. At 11, I was fortunate enough to live next door to an engineer who taught me Motorola 6800 machine code and then BASIC. I have a long...(read more)

    Read the article

  • EPM and Business Analytics Talking-head Videos from Oracle OpenWorld 2013

    - by Mike.Hallett(at)Oracle-BI&EPM
    Here is a selection of 2 to 3 minute video interviews at this year’s Oracle OpenWorld: 1. George Somogyi, Solutions Architect, New Edge Group, talks about the importance of having their integrated Oracle Hyperion Platform consisting of Oracle Hyperion Financial Management, Oracle Hyperion Financial Data Quality Management, Oracle E-Business Suite R12 and Oracle Business Intelligence Extended Edition plus their use of Oracle Managed Cloud Services. Speaker: George Somogyi @ http://youtu.be/kWn0dQxCUy8 2. Gregg Thompson, Director of Financial Systems for ADT, talks about using Oracle Data Relationship Management prior to implementing an Enterprise Performance Management solution. Gregg confirmed that there are big benefits to bringing the full Oracle Hyperion Financial Close suite online with Oracle DRM as the metadata source. Reduced maintenance time and use of external consultants translates into significant time and cost savings and faster implementation times. Speaker: Gregg Thompson @ http://youtu.be/XnFrR9Uk4xk 3. Jeff Spangler, Director Financial Planning and Analysis for Speedy Cash Holdings Corp, talked to us about the benefits achieved through implementing Oracle Hyperion Planning and financial reporting solutions. He also describes how the use of Data Relationship Management will keep the process running smoothly now and in the future. Speaker: Jeff Spangler @ http://youtu.be/kkkuMkgJ22U 4. Marc Seewald, Senior Director of Product Management for Oracle Hyperion Tax Provision at Oracle, talks about Oracle Hyperion Tax Provision, how it is an integral part of the financial close process and that it provides better internal controls and automation of this task. Marc talks about Oracle Partners and customers alike who are seeing great value. Speaker: Marc Seewald @ http://youtu.be/lM_nfvACGuA 5. Matt Bradley, SVP of Product Development for Enterprise Performance Management (EPM) Applications at Oracle, talked to us about different deployment options for Oracle EPM. Cloud services (SaaS), managed services, on-premise, off-premise all have their merits, and organizations need flexibility to easily move between them as their companies evolve. Speaker: Matt Bradley @ http://youtu.be/ATO7Z9dbE-o 6. Neil Sellers, Partner, Qubix International talks about their experience with previewing Oracle’s new Planning and Budgeting Cloud Service. He describes the benefits of the step-by-step task lists, the speed of getting the application up and running, and the huge benefits of not having to manage the software and hardware side of the planning process. Speaker: Neil Sellers @ http://youtu.be/xmosO28e4_I 7. Praveen Pasupuleti, Senior Business Intelligence Development Manager of Citrix Systems Inc., talks about their Oracle Hyperion Planning upgrade and the huge performance improvement now experienced in forecasting. He also talked about the benefits of Oracle Hyperion Workforce Planning achieved by Citrix. Speaker: Praveen Pasupuleti @ http://youtu.be/d1e_4hLqw8c 8. CheckPoint Consulting, talked to us about how Enterprise Performance Management should be viewed as an entire solution, rather than as a bunch of applications in silos, to provide significant benefits; and how Data Relationship Management can tie it all together effectively. Speaker: Ron Dimon @ http://youtu.be/sRwbdbbXvUE 9.
Sonal Kulkarni, Enterprise Performance Management Leader, Cummins Inc., talks about their use of Oracle Hyperion Financial Close Management (Account Reconciliation Manager), Oracle Hyperion Financial Management and Oracle Hyperion Financial Data Quality Management and how this is providing efficiency, visibility and compliance benefits. Speaker: Sonal Kulkarni @ http://youtu.be/OEgup5dKyVc 10. Todd Renard, Manager Financial Planning and Business Analytics for B/E Aerospace Inc., talks about the huge benefits that B/E Aerospace is experiencing from Oracle Financial Close Suite. He was extremely excited about Oracle Hyperion Financial Data Quality Management and how this helps them integrate a new business in as little as three weeks. Speaker: Todd Renard @ http://youtu.be/nIfqK46uVI8 11. Peter Smolianski, Chief Technology Officer for the District of Columbia Courts, talked to us about how D.C. Courts is using Oracle Scorecard and Strategy Management to push their 5 year plan forward, to report results to their constituents, and take accountability for process changes to become more efficient. Speaker: Peter Smolianski @ http://www.youtube.com/watch?v=T-DtB5pl-uk 12. Rich Wilkie, Senior Director of Product Management for Financial Close Suite at Oracle, talked to us about Oracle Financial Management Analytics. He told us how the prebuilt dashboards on top of Oracle Hyperion Financial Close Suite make it easy for everyone to see the numbers and understand where they are in the close process, and if there is an issue, they can see where it is. Executives are excited to get this information on mobile devices too. Speaker: Rich Wilkie @ http://www.youtube.com/watch?v=4UHuHgx74Yg 13. Dinesh Balebail, Senior Director of Software Development for Oracle Hyperion Profitability and Cost Management, talked to us about the power and speed of Oracle Hyperion Profitability and Cost Management and how it is being used to do deep costing for Telecoms, Hospitals, Banks and other high transaction volume organizations effectively. Speaker: Dinesh Balebail @ http://youtu.be/ivx5AZCXAfs

    Read the article

  • ING: Scaling Role Management and Access Certification to Thousands of Applications

    - by Tanu Sood
    Organizations deal with employee and user access certifications in different ways. There’s collation of multiple spreadsheets, an intense two-week exercise by managers, or use of access certification tools to do so across a handful of applications. But for most organizations, compliance is about certifying user access for thousands of employees across hundreds of systems. Managing and auditing millions of entitlement combinations on a periodic basis poses a huge scale challenge. ING solved the compliance scale challenge using an Identity Platform approach. Join the live webcast featuring ING’s enterprise architect, Mark Robison, as he discusses how a platform approach offers value that is greater than the sum of its parts and enables ING to successfully meet their security and compliance goals. Mark will also share his implementation experiences and discuss the key requirements to manage the complexity and scale of access certification efforts at ING. Mark will be joined by Neil Gandhi, Principal Product Manager for Oracle Identity Analytics. Live Webcast: ING: Scaling Role Management and Access Certification to Thousands of Applications. Wednesday, April 11th at 10 am Pacific / 1 pm Eastern. Register Today.

    Read the article

  • Crypted_password is null when using Authlogic to save a user

    - by kareem
    i'm getting a strange error on my production install when i try and create a new user using AL: ActiveRecord::StatementInvalid: Mysql::Error: Column 'crypted_password' cannot be null: INSERT INTO users especially strange b/c it works as expected on my local box. RUnning Rails 2.3.2 and ruby 1.8.7 on both boxes. user.rb: class User < ActiveRecord::Base before_create :set_username acts_as_authentic do |c| c.require_password_confirmation = false c.login_field = "email" c.validates_length_of_password_field_options = {:minimum => 4} c.validate_login_field = false #don't validate email field with additional validations end end Here's output from my production console: >> u = User.new => #<User id: nil, username: nil, email: nil, crypted_password: nil, password_salt: nil, persistence_token: nil, single_access_token: nil, perishable_token: nil, login_count: 0, failed_login_count: 0, last_request_at: nil, current_login_at: nil, last_login_at: nil, current_login_ip: nil, last_login_ip: nil, created_at: nil, updated_at: nil, is_admin: 0, first_name: nil, last_name: nil> >> u.full_name = 'john smith' => "john smith" >> u.password = 'test' => "test" >> u.email = '[email protected]' => "[email protected]" >> u.valid? => true >> u.save ActiveRecord::StatementInvalid: Mysql::Error: Column 'crypted_password' cannot be null: INSERT INTO `users` (`single_access_token`, `last_request_at`, `created_at`, `crypted_password`, `perishable_token`, `updated_at`, `username`, `failed_login_count`, `current_login_ip`, `password_salt`, `current_login_at`, `is_admin`, `persistence_token`, `login_count`, `last_name`, `last_login_ip`, `last_login_at`, `email`, `first_name`) VALUES('B-XSXwhO7hkbtISIOyEq', NULL, '2009-07-31 01:10:44', NULL, 'FK3mYS2Tp5Tzeq5IXE1z', '2009-07-31 01:10:44', 'john', 0, NULL, NULL, NULL, 0, '2c76b645f761eb3509353290e93874cecdb68a63caa165812ab1b126d63660757090ecf69995caef9e78f93d070b524e2542b3fec4ee050726088c2a9fdb0c9f', 0, 'smith', NULL, NULL, '[email protected]', 'john') from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/connection_adapters/abstract_adapter.rb:212:in `log' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/connection_adapters/mysql_adapter.rb:320:in `execute' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/connection_adapters/abstract/database_statements.rb: 259:in `insert_sql' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/connection_adapters/mysql_adapter.rb:330:in `insert_sql' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/connection_adapters/abstract/database_statements.rb: 44:in `insert_without_query_dirty' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/connection_adapters/abstract/query_cache.rb:18:in `insert' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/base.rb:2902:in `create_without_timestamps' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/timestamp.rb:29:in `create_without_callbacks' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/callbacks.rb:266:in `create' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/base.rb:2868:in `create_or_update_without_callbacks' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/callbacks.rb:250:in `create_or_update' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/base.rb:2539:in `save_without_validation' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ 
active_record/validations.rb:1009:in `save_without_dirty' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/dirty.rb:79:in `save_without_transactions' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/transactions.rb:229:in `send' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/transactions.rb:229:in `with_transaction_returning_status' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/connection_adapters/abstract/database_statements.rb: 136:in `transaction' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/transactions.rb:182:in `transaction' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/transactions.rb:228:in `with_transaction_returning_status' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/transactions.rb:196:in `save' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/transactions.rb:208:in `rollback_active_record_state!' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/ active_record/transactions.rb:196:in `save' No idea why this is happening, and especially why this saves a new user on dev but not on production. Any help is much appreciated, thanks! edit: using Apache & Passenger 2.2.4

    Read the article

  • MySQL Query GROUP_CONCAT Over Multiple Rows

    - by PeteGO
    I'm getting name and address data out of generic question / answer data to create some kind of normalised reporting database. The query I've got uses group_concat and works for individual sets of questions but not for multiple sets. I've tried to simplify what I'm doing by using just forename and surname and just 3 records, 2 for 1 person and 1 for another. In reality though there are more than 300,000 records. Example of results with qs.Id = 1. QuestionSetId Forename Surname ------------------------------------------------------- 1 Bob Jones Example of results with qs.Id IN (1, 2, 3). QuestionSetId Forename Surname ------------------------------------------------------- 3 Bob,Bob,Frank Jones,Jones,Smith What I would like to see for qs.Id IN (1, 2, 3). QuestionSetId Forename Surname ------------------------------------------------------- 1 Bob Jones 2 Bob Jones 3 Frank Smith So how can I make the 2nd example return a separate row for each set of name and address information? I realise the current way the data is stored is "questionable" but I cannot change the way the data is stored. I can get sets of individual answers but not sure how to combine the others. My simplified Schema that I cannot change: CREATE TABLE StaticQuestion ( Id INT NOT NULL, StaticText VARCHAR(500) NOT NULL); CREATE TABLE Question ( Id INT NOT NULL, Text VARCHAR(500) NOT NULL); CREATE TABLE StaticQuestionQuestionLink ( Id INT NOT NULL, StaticQuestionId INT NOT NULL, QuestionId INT NOT NULL, DateEffective DATETIME NOT NULL); CREATE TABLE Answer ( Id INT NOT NULL, Text VARCHAR(500) NOT NULL); CREATE TABLE QuestionSet ( Id INT NOT NULL, DateEffective DATETIME NOT NULL); CREATE TABLE QuestionAnswerLink ( Id INT NOT NULL, QuestionSetId INT NOT NULL, QuestionId INT NOT NULL, AnswerId INT NOT NULL, StaticQuestionId INT NOT NULL); Some example data for only forename and surname. INSERT INTO StaticQuestion (Id, StaticText) VALUES (1, 'FirstName'), (2, 'LastName'); INSERT INTO Question (Id, Text) VALUES (1, 'What is your first name?'), (2, 'What is your forename?'), (3, 'What is your Surname?'); INSERT INTO StaticQuestionQuestionLink (Id, StaticQuestionId, QuestionId, DateEffective) VALUES (1, 1, 1, '2001-01-01'), (2, 1, 2, '2008-08-08'), (3, 2, 3, '2001-01-01'); INSERT INTO Answer (Id, Text) VALUES (1, 'Bob'), (2, 'Jones'), (3, 'Bob'), (4, 'Jones'), (5, 'Frank'), (6, 'Smith'); INSERT INTO QuestionSet (Id, DateEffective) VALUES (1, '2002-03-25'), (2, '2009-05-05'), (3, '2009-08-06'); INSERT INTO QuestionAnswerLink (Id, QuestionSetId, QuestionId, AnswerId, StaticQuestionId) VALUES (1, 1, 1, 1, 1), (2, 1, 3, 2, 2), (3, 2, 2, 3, 1), (4, 2, 3, 4, 2), (5, 3, 2, 5, 1), (6, 3, 3, 6, 2); Just in case SQLFiddle is down here are the 3 queries from the examples I've linked to: 1: - working query but only on 1 set of data. 
SELECT MAX(QuestionSetId) AS QuestionSetId, GROUP_CONCAT(Forename) AS Forename, GROUP_CONCAT(Surname) AS Surname FROM (SELECT x.QuestionSetId, CASE x.StaticQuestionId WHEN 1 THEN Text END AS Forename, CASE x.StaticQuestionId WHEN 2 THEN Text END AS Surname FROM (SELECT (SELECT link.StaticQuestionId FROM StaticQuestionQuestionLink link WHERE link.Id = qa.QuestionId AND link.DateEffective <= qs.DateEffective AND link.StaticQuestionId IN (1, 2) ORDER BY link.DateEffective DESC LIMIT 1) AS StaticQuestionId, a.Text, qa.QuestionSetId FROM QuestionSet qs INNER JOIN QuestionAnswerLink qa ON qs.Id = qa.QuestionSetId INNER JOIN Answer a ON qa.AnswerId = a.Id WHERE qs.Id IN (1)) x) y 2: - working query but undesired results on multiple sets of data. SELECT MAX(QuestionSetId) AS QuestionSetId, GROUP_CONCAT(Forename) AS Forename, GROUP_CONCAT(Surname) AS Surname FROM (SELECT x.QuestionSetId, CASE x.StaticQuestionId WHEN 1 THEN Text END AS Forename, CASE x.StaticQuestionId WHEN 2 THEN Text END AS Surname FROM (SELECT (SELECT link.StaticQuestionId FROM StaticQuestionQuestionLink link WHERE link.Id = qa.QuestionId AND link.DateEffective <= qs.DateEffective AND link.StaticQuestionId IN (1, 2) ORDER BY link.DateEffective DESC LIMIT 1) AS StaticQuestionId, a.Text, qa.QuestionSetId FROM QuestionSet qs INNER JOIN QuestionAnswerLink qa ON qs.Id = qa.QuestionSetId INNER JOIN Answer a ON qa.AnswerId = a.Id WHERE qs.Id IN (1, 2, 3)) x) y 3: - working query on multiple sets of data only on 1 field (answer) though. SELECT qs.Id AS QuestionSet, a.Text AS Answer FROM QuestionSet qs INNER JOIN QuestionAnswerLink qalink ON qs.Id = qalink.QuestionSetId INNER JOIN StaticQuestionQuestionLink sqqlink ON qalink.QuestionId = sqqlink.QuestionId INNER JOIN Answer a ON qalink.AnswerId = a.Id WHERE sqqlink.StaticQuestionId = 1 /* FirstName */ AND sqqlink.DateEffective = (SELECT DateEffective FROM StaticQuestionQuestionLink WHERE StaticQuestionId = 1 AND DateEffective <= qs.DateEffective ORDER BY DateEffective DESC LIMIT 1)
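
    Not a verified answer, but one direction suggested by the shape of query 2: its outer SELECT aggregates the whole derived table without a GROUP BY, so all three question sets collapse into a single row. Grouping the outer query by QuestionSetId (and dropping the MAX) should give one concatenated row per set. The sketch below is query 2 with only that change; the derived tables are unchanged.

        -- Hypothetical tweak to query 2 (untested): group the outer aggregation per set.
        SELECT QuestionSetId,
               GROUP_CONCAT(Forename) AS Forename,
               GROUP_CONCAT(Surname)  AS Surname
        FROM (SELECT x.QuestionSetId,
                     CASE x.StaticQuestionId WHEN 1 THEN Text END AS Forename,
                     CASE x.StaticQuestionId WHEN 2 THEN Text END AS Surname
              FROM (SELECT (SELECT link.StaticQuestionId
                            FROM StaticQuestionQuestionLink link
                            WHERE link.Id = qa.QuestionId
                              AND link.DateEffective <= qs.DateEffective
                              AND link.StaticQuestionId IN (1, 2)
                            ORDER BY link.DateEffective DESC
                            LIMIT 1) AS StaticQuestionId,
                           a.Text,
                           qa.QuestionSetId
                    FROM QuestionSet qs
                    INNER JOIN QuestionAnswerLink qa ON qs.Id = qa.QuestionSetId
                    INNER JOIN Answer a ON qa.AnswerId = a.Id
                    WHERE qs.Id IN (1, 2, 3)) x) y
        GROUP BY QuestionSetId;

    GROUP_CONCAT ignores NULLs, so each output row keeps only the forename and surname values that belong to that question set.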

    Read the article

  • How to overcome shortcomings in reporting from EAV database?

    - by David Archer
    The major shortcomings with Entity-Attribute-Value database designs in SQL all seem to be related to being able to query and report on the data efficiently and quickly. Most of the information I read on the subject warns against implementing EAV due to these problems and the commonality of querying/reporting for almost all applications. I am currently designing a system where almost all the fields necessary for data storage are not known at design/compile time and are defined by the end-user of the system. EAV seems like a good fit for this requirement but, due to the problems I've read about, I am hesitant to implement it as there are also some pretty heavy reporting requirements for this system as well. I think I've come up with a way around this but would like to pose the question to the SO community. Given that a typical normalized database (OLTP) still isn't always the best option for running reports, a good practice seems to be having a "reporting" database (OLAP) where the data from the normalized database is copied to, indexed extensively, and possibly denormalized for easier querying. Could the same idea be used to work around the shortcomings of an EAV design? The main downside I see is the increased complexity of transferring the data from the EAV database to reporting, as you may end up having to alter the tables in the reporting database as new fields are defined in the EAV database. But that is hardly impossible and seems to be an acceptable tradeoff for the increased flexibility given by the EAV design. This downside also exists if I use a non-SQL data store (e.g. CouchDB or similar) for the main data storage, since all the standard reporting tools are expecting a SQL backend to query against. Do the issues with EAV systems mostly go away if you have a separate reporting database for querying? EDIT: Thanks for the comments so far. One of the important things about the system I'm working on is that I'm really only talking about using EAV for one of the entities, not everything in the system. The whole gist of the system is to be able to pull data from multiple disparate sources that are not known ahead of time and crunch the data to come up with some "best known" data about a particular entity. So every "field" I'm dealing with is multi-valued and I'm also required to track history for each. The normalized design for this ends up being 1 table per field, which makes querying it kind of painful anyway.
Here are the table schemas and sample data I'm looking at (obviously changed from what I'm working on but I think it illustrates the point well): EAV Tables Person ------------------- - Id - Name - ------------------- - 123 - Joe Smith - ------------------- Person_Value ------------------------------------------------------------------- - PersonId - Source - Field - Value - EffectiveDate - ------------------------------------------------------------------- - 123 - CIA - HomeAddress - 123 Cherry Ln - 2010-03-26 - - 123 - DMV - HomeAddress - 561 Stoney Rd - 2010-02-15 - - 123 - FBI - HomeAddress - 676 Lancas Dr - 2010-03-01 - ------------------------------------------------------------------- Reporting Table Person_Denormalized ---------------------------------------------------------------------------------------- - Id - Name - HomeAddress - HomeAddress_Confidence - HomeAddress_EffectiveDate - ---------------------------------------------------------------------------------------- - 123 - Joe Smith - 123 Cherry Ln - 0.713 - 2010-03-26 - ---------------------------------------------------------------------------------------- Normalized Design Person ------------------- - Id - Name - ------------------- - 123 - Joe Smith - ------------------- Person_HomeAddress ------------------------------------------------------ - PersonId - Source - Value - Effective Date - ------------------------------------------------------ - 123 - CIA - 123 Cherry Ln - 2010-03-26 - - 123 - DMV - 561 Stoney Rd - 2010-02-15 - - 123 - FBI - 676 Lancas Dr - 2010-03-01 - ------------------------------------------------------ The "Confidence" field here is generated using logic that cannot be expressed easily (if at all) using SQL so my most common operation besides inserting new values will be pulling ALL data about a person for all fields so I can generate the record for the reporting table. This is actually easier in the EAV model as I can do a single query. In the normalized design, I end up having to do 1 query per field to avoid a massive cartesian product from joining them all together.
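
    As a rough sketch of the transfer step the question proposes (not part of the original post), the reporting row can be rebuilt from the EAV tables above by picking one value per person per field and writing it into the denormalized table. The "latest effective date wins" rule below is an assumption made only for the example, and the confidence score is left NULL because in the question it is produced by logic outside SQL.

        -- Hypothetical refresh of the reporting table from the EAV tables above.
        INSERT INTO Person_Denormalized
            (Id, Name, HomeAddress, HomeAddress_Confidence, HomeAddress_EffectiveDate)
        SELECT p.Id,
               p.Name,
               pv.Value,
               NULL,              -- filled in later by the external confidence-scoring logic
               pv.EffectiveDate
        FROM   Person p
        JOIN   Person_Value pv
               ON  pv.PersonId = p.Id
               AND pv.Field = 'HomeAddress'
               AND pv.EffectiveDate = (SELECT MAX(pv2.EffectiveDate)
                                       FROM   Person_Value pv2
                                       WHERE  pv2.PersonId = p.Id
                                       AND    pv2.Field = 'HomeAddress');

    Each new field defined in the EAV store would need a corresponding column (or a new step like this) in the reporting table, which is exactly the maintenance trade-off the question describes.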

    Read the article

  • Flex chart not displaying right values along x axis.

    - by Shah Al
    I don't know of any better way to ask this question. If the below code is run (i know the cData sections are not visible in the preview, something causes it to be ignored). The result does not represent the data correctly. 1. Flex ignores missing date 24 aug for DECKER. 2. It wrongly associates 42.77 to 23-Aug instead of 24-AUG. Is there a way in flex, where the x-axis is a union of all available points ? The below code is entirely from : Adobe website link I have only commented 2 data points. //{date:"23-Aug-05", close:45.74}, and //{date:"24-Aug-05", close:150.71}, <?xml version="1.0"?> [Bindable] public var SMITH:ArrayCollection = new ArrayCollection([ {date:"22-Aug-05", close:41.87}, //{date:"23-Aug-05", close:45.74}, {date:"24-Aug-05", close:42.77}, {date:"25-Aug-05", close:48.06}, ]); [Bindable] public var DECKER:ArrayCollection = new ArrayCollection([ {date:"22-Aug-05", close:157.59}, {date:"23-Aug-05", close:160.3}, //{date:"24-Aug-05", close:150.71}, {date:"25-Aug-05", close:156.88}, ]); [Bindable] public var deckerColor:Number = 0x224488; [Bindable] public var smithColor:Number = 0x884422; ]] <mx:horizontalAxisRenderers> <mx:AxisRenderer placement="bottom" axis="{h1}"/> </mx:horizontalAxisRenderers> <mx:verticalAxisRenderers> <mx:AxisRenderer placement="left" axis="{v1}"> <mx:axisStroke>{h1Stroke}</mx:axisStroke> </mx:AxisRenderer> <mx:AxisRenderer placement="left" axis="{v2}"> <mx:axisStroke>{h2Stroke}</mx:axisStroke> </mx:AxisRenderer> </mx:verticalAxisRenderers> <mx:series> <mx:ColumnSeries id="cs1" horizontalAxis="{h1}" dataProvider="{SMITH}" yField="close" displayName="SMITH" > <mx:fill> <mx:SolidColor color="{smithColor}"/> </mx:fill> <mx:verticalAxis> <mx:LinearAxis id="v1" minimum="40" maximum="50"/> </mx:verticalAxis> </mx:ColumnSeries> <mx:LineSeries id="cs2" horizontalAxis="{h1}" dataProvider="{DECKER}" yField="close" displayName="DECKER" > <mx:verticalAxis> <mx:LinearAxis id="v2" minimum="150" maximum="170"/> </mx:verticalAxis> <mx:lineStroke> <mx:Stroke color="{deckerColor}" weight="4" alpha="1" /> </mx:lineStroke> </mx:LineSeries> </mx:series> </mx:ColumnChart> <mx:Legend dataProvider="{myChart}"/>

    Read the article

  • multi_index composite_key replace with iterator

    - by Rohit
    Is there anyway to loop through an index in a boost::multi_index and perform a replace? #include <iostream> #include <string> #include <boost/multi_index_container.hpp> #include <boost/multi_index/composite_key.hpp> #include <boost/multi_index/member.hpp> #include <boost/multi_index/ordered_index.hpp> using namespace boost::multi_index; using namespace std; struct name_record { public: name_record(string given_name_,string family_name_,string other_name_) { given_name=given_name_; family_name=family_name_; other_name=other_name_; } string given_name; string family_name; string other_name; string get_name() const { return given_name + " " + family_name + " " + other_name; } void setnew(string chg) { given_name = given_name + chg; family_name = family_name + chg; } }; struct NameIndex{}; typedef multi_index_container< name_record, indexed_by< ordered_non_unique< tag<NameIndex>, composite_key < name_record, BOOST_MULTI_INDEX_MEMBER(name_record,string, name_record::given_name), BOOST_MULTI_INDEX_MEMBER(name_record,string, name_record::family_name) > > > > name_record_set; typedef boost::multi_index::index<name_record_set,NameIndex>::type::iterator IteratorType; typedef boost::multi_index::index<name_record_set,NameIndex>::type NameIndexType; void printContainer(name_record_set & ns) { cout << endl << "PrintContainer" << endl << "-------------" << endl; IteratorType it1 = ns.begin(); IteratorType it2 = ns.end (); while (it1 != it2) { cout<<it1->get_name()<<endl; it1++; } cout << "--------------" << endl << endl; } void modifyContainer(name_record_set & ns) { cout << endl << "ModifyContainer" << endl << "-------------" << endl; IteratorType it3; IteratorType it4; NameIndexType & idx1 = ns.get<NameIndex>(); IteratorType it1 = idx1.begin(); IteratorType it2 = idx1.end(); while (it1 != it2) { cout<<it1->get_name()<<endl; name_record nr = *it1; nr.setnew("_CHG"); bool res = idx1.replace(it1,nr); cout << "result is: " << res << endl; it1++; } cout << "--------------" << endl << endl; } int main() { name_record_set ns; ns.insert( name_record("Joe","Smith","ENTRY1") ); ns.insert( name_record("Robert","Brown","ENTRY2") ); ns.insert( name_record("Robert","Nightingale","ENTRY3") ); ns.insert( name_record("Marc","Tuxedo","ENTRY4") ); printContainer (ns); modifyContainer (ns); printContainer (ns); return 0; } PrintContainer ------------- Joe Smith ENTRY1 Marc Tuxedo ENTRY4 Robert Brown ENTRY2 Robert Nightingale ENTRY3 -------------- ModifyContainer ------------- Joe Smith ENTRY1 result is: 1 Marc Tuxedo ENTRY4 result is: 1 Robert Brown ENTRY2 result is: 1 -------------- PrintContainer ------------- Joe_CHG Smith_CHG ENTRY1 Marc_CHG Tuxedo_CHG ENTRY4 Robert Nightingale ENTRY3 Robert_CHG Brown_CHG ENTRY2 --------------

    Read the article

  • Pulling record from mySQL database only working for userid and not email

    - by user2908467
    This function works because I search by userid: private void showList_Click(object sender, EventArgs e) { int id = 0; for (int i = 0; i <= sqlClient.Count("UserList"); i++) { Dictionary<string, string> dik = sqlClient.Select("UserList", "userid = " + id); var lines = dik.Select(kv => kv.Key + ": " + kv.Value.ToString()); userList.AppendText(string.Join(Environment.NewLine, lines)); userList.AppendText(Environment.NewLine); userList.AppendText("--------------------------------------"); id++; } } This function does not work because I search by email: private void login_Click(object sender, EventArgs e) { string email = lemail.Text; Dictionary<string, string> dik = sqlClient.Select("UserList", "firstname = " + email); var lines = dik.Select(kv => kv.Key + ": " + kv.Value.ToString()); logged.AppendText(string.Join(Environment.NewLine, lines)); } This is the error message I receive when I click on the login button: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '@aol.com' at line 1 The email I searched for in the database was "[email protected]" without quotes. I'm lead to believe by the error message the @ sign is causing conflict as I know it is a special character but I am having a hard time figuring out what phrase to search for to help me. Also, here is the function that is being called: public Dictionary<string, string> Select(string table, string WHERE) { //This methods selects from the database, it retrieves data from it. //You must make a dictionary to use this since it both saves the column //and the value. i.e. "age" and "33" so you can easily search for values. //Example: SELECT * FROM names WHERE name='John Smith' // This example would retrieve all data about the entry with the name "John Smith" //Code = Dictionary<string, string> myDictionary = Select("names", "name='John Smith'"); //This code creates a dictionary and fills it with info from the database. string query = "SELECT * FROM " + table + " WHERE " + WHERE + ""; Dictionary<string, string> selectResult = new Dictionary<string, string>(); if (this.Open()) { MySqlCommand cmd = new MySqlCommand(query, conn); MySqlDataReader dataReader = cmd.ExecuteReader(); try { while (dataReader.Read()) { for (int i = 0; i < dataReader.FieldCount; i++) { selectResult.Add(dataReader.GetName(i).ToString(), dataReader.GetValue(i).ToString()); } } dataReader.Close(); } catch { } this.Close(); return selectResult; } else { return selectResult; } } My database table is called "UserList" The fields in order are as follows: userid, email, password, lastname, firstname Any help would be greatly appreciated. This site is amazing!
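
    One observation, offered only as a hedged reading of the error rather than a confirmed fix: the WHERE clause is built by string concatenation, so the email value reaches MySQL unquoted and parsing stops at the bare @. (The login code also compares against firstname rather than the email column.) The snippet below uses a made-up address purely to illustrate the difference; in the C# code a parameterised query would be the safer route.

        -- What the concatenated clause effectively sends (invalid - the value is unquoted):
        --   SELECT * FROM UserList WHERE firstname = someone@aol.com
        -- What a string comparison needs to look like (illustrative only):
        SELECT * FROM UserList WHERE email = 'someone@aol.com';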

    Read the article

  • SQL SERVER – Update Statistics are Sampled By Default

    - by pinaldave
    After reading my earlier post SQL SERVER – Create Primary Key with Specific Name when Creating Table on Statistics, I have received another question by a blog reader. The question is as follows: Question: Are the statistics sampled by default? Answer: Yes. The sampling rate can be specified by the user and it can be anywhere between a very low value to 100%. Let us do a small experiment to verify if the auto update on statistics is left on. Also, let’s examine a very large table that is created and statistics by default- whether the statistics are sampled or not. USE [AdventureWorks] GO -- Create Table CREATE TABLE [dbo].[StatsTest]( [ID] [int] IDENTITY(1,1) NOT NULL, [FirstName] [varchar](100) NULL, [LastName] [varchar](100) NULL, [City] [varchar](100) NULL, CONSTRAINT [PK_StatsTest] PRIMARY KEY CLUSTERED ([ID] ASC) ) ON [PRIMARY] GO -- Insert 1 Million Rows INSERT INTO [dbo].[StatsTest] (FirstName,LastName,City) SELECT TOP 1000000 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Update the statistics UPDATE STATISTICS [dbo].[StatsTest] GO -- Shows the statistics DBCC SHOW_STATISTICS ("StatsTest"PK_StatsTest) GO -- Clean up DROP TABLE [dbo].[StatsTest] GO Now let us observe the result of the DBCC SHOW_STATISTICS. The result shows that Resultset is for sure sampling for a large dataset. The percentage of sampling is based on data distribution as well as the kind of data in the table. Before dropping the table, let us check first the size of the table. The size of the table is 35 MB. Now, let us run the above code with lesser number of the rows. USE [AdventureWorks] GO -- Create Table CREATE TABLE [dbo].[StatsTest]( [ID] [int] IDENTITY(1,1) NOT NULL, [FirstName] [varchar](100) NULL, [LastName] [varchar](100) NULL, [City] [varchar](100) NULL, CONSTRAINT [PK_StatsTest] PRIMARY KEY CLUSTERED ([ID] ASC) ) ON [PRIMARY] GO -- Insert 1 Hundred Thousand Rows INSERT INTO [dbo].[StatsTest] (FirstName,LastName,City) SELECT TOP 100000 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Update the statistics UPDATE STATISTICS [dbo].[StatsTest] GO -- Shows the statistics DBCC SHOW_STATISTICS ("StatsTest"PK_StatsTest) GO -- Clean up DROP TABLE [dbo].[StatsTest] GO You can see that Rows Sampled is just the same as Rows of the table. In this case, the sample rate is 100%. Before dropping the table, let us also check the size of the table. The size of the table is less than 4 MB. Let us compare the Result set just for a valid reference. Test 1: Total Rows: 1000000, Rows Sampled: 255420, Size of the Table: 35.516 MB Test 2: Total Rows: 100000, Rows Sampled: 100000, Size of the Table: 3.555 MB The reason behind the sample in the Test1 is that the data space is larger than 8 MB, and therefore it uses more than 1024 data pages. If the data space is smaller than 8 MB and uses less than 1024 data pages, then the sampling does not happen. 
Sampling aids in reducing excessive data scans; however, sometimes it reduces the accuracy of the data as well. Please note that this is just a sample test and there is no way it can be claimed as a benchmark test. The result can be dissimilar on different machines. There is a lot of other information that can be included when talking about this subject. I will write a detailed post covering the whole subject very soon. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics

    Read the article

  • SQL SERVER – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
    After having excellent response to my quiz – Why SELECT * throws an error but SELECT COUNT(*) does not?I have decided to ask another puzzling question to all of you. I am running this test on SQL Server 2008 R2. Here is the quick scenario about my setup. Create Table Insert 1000 Records Check the Statistics Now insert 10 times more 10,000 indexes Check the Statistics – it will be NOT updated Note: Auto Update Statistics and Auto Create Statistics for database is TRUE Expected Result – Statistics should be updated – SQL SERVER – When are Statistics Updated – What triggers Statistics to Update Now the question is why the statistics are not updated? The common answer is – we can update the statistics ourselves using UPDATE STATISTICS TableName WITH FULLSCAN, ALL However, the solution I am looking is where statistics should be updated automatically based on algorithm mentioned here. Now the solution is to ____________________. Vinod Kumar is not allowed to take participate over here as he is the one who has helped me to build this puzzle. I will publish the solution on next week. Please leave a comment and if your comment consist valid answer, I will publish with due credit. Here is the script to reproduce the scenario which I mentioned. -- Execution Plans Difference -- Create Sample Database CREATE DATABASE SampleDB GO USE SampleDB GO -- Create Table CREATE TABLE ExecTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100)) GO -- Insert One Thousand Records -- INSERT 1 INSERT INTO ExecTable (ID,FirstName,LastName,City) SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Display statistics of the table - none listed sp_helpstats N'ExecTable', 'ALL' GO -- Select Statement SELECT FirstName, LastName, City FROM ExecTable WHERE City  = 'New York' GO -- Display statistics of the table sp_helpstats N'ExecTable', 'ALL' GO -- Replace your Statistics over here -- NOTE: Replace your _WA_Sys with stats from above query DBCC SHOW_STATISTICS('ExecTable', _WA_Sys_00000004_7D78A4E7); GO -------------------------------------------------------------- -- Round 2 -- Insert Ten Thousand Records -- INSERT 2 INSERT INTO ExecTable (ID,FirstName,LastName,City) SELECT TOP 10000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Select Statement SELECT FirstName, LastName, City FROM ExecTable WHERE City  = 'New York' GO -- Display statistics of the table sp_helpstats 
N'ExecTable', 'ALL' GO -- Replace your Statistics over here -- NOTE: Replace your _WA_Sys with stats from above query DBCC SHOW_STATISTICS('ExecTable', _WA_Sys_00000004_7D78A4E7); GO -- You will notice that Statistics are still updated with 1000 rows -- Clean up Database DROP TABLE ExecTable GO USE MASTER GO ALTER DATABASE SampleDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE; GO DROP DATABASE SampleDB GO Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics, Statistics

    Read the article

  • Silverlight Cream for March 09, 2011 -- #1057

    - by Dave Campbell
    In this Issue: Dennis Doomen, Peter Kuhn, Michael Crump, Joe McBride, Martin Krüger, Jeremy Likness, Manas Patnaik, Jesse Liberty(-2-), WindowsPhoneGeek(-2-). Above the Fold: Silverlight: "A highlighting AutoCompleteBox in Silverlight" Peter Kuhn WP7: "WP7 WatermarkedTextBox custom control" WindowsPhoneGeek Training: "" Shoutouts: Karl Shifflett announced that he and Josh Smith have heard the developers and released a demo: Mole 2010 Demo Released This is a somewhat older post, but the material is good and I was reminded of it while talking to Josh Smith at the MVP summit last week: Advanced MVVM ... money well-spent From SilverlightCream.com: Introducing the Silverlight Cookbook Dennis Doomen unveils a Codeplex site "containing a Silverlight 4 app that includes most of the complexities you might run into" ... I'm tagging this in my WynApse outlookbar... great stuff, Dennis! A highlighting AutoCompleteBox in Silverlight Peter Kuhn took on a task in response to a forum query and created a highlighting AutoCompleteBox, and is giving it to us... this really looks cool, Peter, and great explanation. Taking a look at the Mindscape Phone Elements for WP7. Michael Crump takes a good look at the Mindscape Phone Elements for WP7... and if you read closely you might still be able to get a free license! Windows Phone – “Can’t connect to your phone. Disconnect it, Restart it, then try connecting again.” Joe McBride explains a way out of an issue that many should be seeing as we repave or replace machines... how to get our device recognized on the updated machine... without giving cryptic messages. How to: only with the full visibility of an application in the browser window start an action Martin Krüger continues his journey in starting storyboards and tackles the condition that the application is completely in the browser window prior to the storyboard starting. A Numeric Input Control for Windows Phone 7 Jeremy Likness came up with a great idea for numeric input for WP7 ... you'll smile when you see it, but what a great idea... and a NumericTextBox to go along with it. Performing CRUD on Relational Data (Multiple table) using RIA in SL4 Manas Patnaik has a post up that breaks the normal blog post or demo mold by having two tables with a relational constraint and doing CRUD operations on them. Plenty of diagrams and good information. Select Many: Reactive Extensions’ Mother Of All Operators [Chaining] Jesse Liberty has part 9 in his Rx series up, and is looking at SelectMany this time, and chaining calls. He's using WPF for the sample, but the goodness is all there for us Silverlight guys too. The Full Stack 8–Adding Search to the Phone Client Jesse Liberty and Jon Galloway have part 8 of their Full Stack series up ... this is the MVC3, ASP.NET, Silverlight, and WP7 app development series... this time out they're putting Search in the Phone client. All about ResourceDictionary in WP7 WindowsPhoneGeek is discussing ResourceDictionaries in this post... beginning with What is a ResourceDictionary and continuing out through creating and using one, plus a good comment on merging. WP7 WatermarkedTextBox custom control In his next post, WindowsPhoneGeek walks us through the creation of a WatermarkedTextBox for WP7 right from the derivation from TextBox... very nice tutorial and lots of code/examples. Stay in the 'Light! 
Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Entity Framework - adding new items via a navigation property

    - by Robert
    I have come across what appears to be very peculiar behaviour using entity framework 4.0. I have a User entity, which has a Company (A Company has many Users). If I do this, everything works as expected and I get a nice shiny new user in the database: var company = _model.Companies.First(); company.Users.Add(new User(1, "John", "Smith")); _model.SaveChanges(); However, if I do this, then I get nothing new in the database, and no exceptions thrown: var existingUser = _model.Users.First(); var company = existingUser.Company; company.Users.Add(new User(1, "John", "Smith")); _model.SaveChanges(); So it appears that if I add a User to the Company that is pulled directly from the model, then everything works fine. However if the User is added to a Company that is pulled as a navigation property of another object, then it doesn't work. Can someone tell me if this is expected behaviour, or if there is something I can do to make it so that it is? Thanks!

    Read the article

  • Binding TabControl ItemsSource to an ObservableCollection of ViewModels causes content to refresh on focus

    - by Brent
    I'm creating a WPF application using the MVVM framework, and I've adopted several features from Josh Smith's article on MVVM here... Most importantly, I'm binding a TabControl to an ObservableCollection of ViewModels. This means that I am using a tabbed MDI interface that displays a UserControl as the content of a TabItem. The issue I'm seeing in my application is that when I have several tabs and I flip back and forth between tabs, the content is being refreshed each time I change tabs. If you download Josh Smith's source code, you'll see that his app has the same problem. For example, click on the "View All Customers" button and scroll down to the bottom of the ListView. Next click on the "Create New Customer" button. When you switch back to the All Customers view you'll notice that the ListView scrolls back to the top. If you switch back to the New Customer tab and place your cursor in one of the TextBoxes, then switch to the All Customers tab and back, you'll notice that the cursor is now gone. I imagine that this is because I'm using an ObservableCollection, but I can't be sure. Is there any way to prevent the tab's content from refreshing when it receives the focus? EDIT: I found my problem when I ran the profiler on my application. I'm defining a DataTemplate for my ViewModels so it knows how to render the ViewModel when it is displayed in the tab... like so: <DataTemplate DataType="{x:Type vm:CustomerViewModel}"> <vw:CustomerView/> </DataTemplate> So whenever I switch to a different tab, it has to re-create the ViewModel again. I fixed it temporarily by changing my ObservableCollection of ViewModels to an ObservableCollection of UserControls. However, I would really still like to use DataTemplates if possible. Is there a way to make a DataTemplate work?

    Read the article

  • OS X, Mercurial and MediaTemple problem

    - by bschaeffer
    I've installed Mercurial per MT's knowledge base file here. Working with it server side using ssh from my Mac works fine. I can initialize repositories and the like, but pulling from the server or pushing from my Mac produces an error I don't understand. Here's what I get when call hg push from my local installation (hash marks represent my server number): remote: Traceback (most recent call last): remote: File "/home/#####/users/.home/data/mercurial-1.5/hg", line 27, in ? remote: mercurial.dispatch.run() remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/dispatch.py", line 16, in run remote: sys.exit(dispatch(sys.argv[1:])) remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/dispatch.py", line 21, in dispatch remote: u = _ui.ui() remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/ui.py", line 38, in __init__ remote: for f in util.rcpath(): remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/util.py", line 1200, in rcpath remote: _rcpath = os_rcpath() remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/util.py", line 1174, in os_rcpath remote: path = system_rcpath() remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/posix.py", line 41, in system_rcpath remote: path.extend(rcfiles(os.path.dirname(sys.argv[0]) + remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/posix.py", line 30, in rcfiles remote: rcs.extend([os.path.join(rcdir, f) remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/demandimport.py", line 75, in __getattribute__ remote: self._load() remote: File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/demandimport.py", line 47, in _load remote: mod = _origimport(head, globals, locals) remote: ImportError: No module named osutil abort: no suitable response from remote hg! Mercurial on my Mac is configured as follows [ui] username = John Smith editor = te -w remotecmd = ~/data/mercurial-1.5/hg My local single repo is configured as follows (hash marks represent my server number): [paths] default = ssh://mysite.com@s#####.gridserver.com/domains/mysite.com/html Mercurial on the server is configured with a just a username: [ui] username = John Smith The server .bash_profile is configured as follows (per the installation guide): # Aliases alias ls-a='ls -a -l' # Added this as suggested by the MediaTemple guide export PYTHONPATH=${HOME}/lib/python:$PYTHONPATH export PATH=${HOME}/bin:$PATH I understand this probably isn't a MediaTemple problem, but more likely an installation problem. I would really appreciate any assitance on this problem. Thanks in advance!

    Read the article

  • Binding TabControl ItemsSource to an ObservableCollection causes content to refresh on focus

    - by Brent
    I'm creating a WPF application using the MVVM pattern, and I've adopted several features from Josh Smith's article on MVVM here... Most importantly, I'm binding a TabControl to an ObservableCollection of ViewModels. This means I'm using a tabbed MDI interface that displays a UserControl as the content of each TabItem. The issue I'm seeing in my application is that when I have several tabs and flip back and forth between them, the content is refreshed each time I change tabs. If you download Josh Smith's source code, you'll see that his app has the same problem. For example, click the "View All Customers" button and scroll to the bottom of the ListView. Next click the "Create New Customer" button. When you switch back to the All Customers view you'll notice that the ListView has scrolled back to the top. If you switch to the New Customer tab and place your cursor in one of the TextBoxes, then switch to the All Customers tab and back, you'll notice that the cursor is now gone. I imagine this is because I'm using an ObservableCollection, but I can't be sure. Is there any way to prevent a tab's content from refreshing when it receives focus?

    Read the article

  • WebBrowser control: Get element value and store it in a variable

    - by Khou
    WinForms WebBrowser control. The web browser displays the following content inside an HTML table: [Element] [Value] Name John Smith Email [email protected] For the example above, the HTML might look something like this: <table> <tbody> <tr> <td><label class="label">Name</label></td> <td class="normaltext">John Smith</td> </tr> <tr> <td><label class="label">Email</label></td> <td><span class="normaltext">[email protected]</span></td> </tr> </tbody> </table> I want to get the element value (the value to the right of each label) and store it in a variable. What is the best way to do this?
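    One way to do this (a sketch, with assumed names) is to walk the rows of the document loaded in the WebBrowser control and return the text of the cell that follows the matching label. The helper class and the webBrowser1 field are illustrative, and the lookup should only run after the control's DocumentCompleted event has fired.

    using System.Windows.Forms;

    static class BrowserScraper
    {
        // Returns the text of the cell to the right of the cell whose text equals `label`,
        // or null when no such row exists.
        public static string GetValueForLabel(HtmlDocument document, string label)
        {
            if (document == null)
                return null;

            foreach (HtmlElement row in document.GetElementsByTagName("tr"))
            {
                HtmlElementCollection cells = row.GetElementsByTagName("td");
                if (cells.Count >= 2 && cells[0].InnerText != null &&
                    cells[0].InnerText.Trim() == label)
                {
                    return cells[1].InnerText;
                }
            }
            return null;
        }
    }

    // Usage sketch, e.g. inside a DocumentCompleted handler:
    // string name  = BrowserScraper.GetValueForLabel(webBrowser1.Document, "Name");
    // string email = BrowserScraper.GetValueForLabel(webBrowser1.Document, "Email");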

    Read the article

  • XML Attributes or Element Nodes?

    - by Camsoft
    Example XML using element nodes: <?xml version="1.0" encoding="utf-8"?> <users> <user> <name>David Smith</name> <phone>0441 234443</phone> <email>[email protected]</email> <addresses> <address> <street>1 Some Street</street> <town>Toy Town</town> <country>UK</country> </address> <address> <street>5 New Street</street> <town>Lego City</town> <country>US</country> </address> </addresses> </user> </users> Example XML using attributes: <?xml version="1.0" encoding="utf-8"?> <users> <user name="David Smith" phone="0441 234443" email="[email protected]"> <addresses> <address street="1 Some Street" town="Toy Town" country="UK" /> <address street="5 New Street" town="Lego City" country="US" /> </addresses> </user> </users> I need to build an XML file based on data from a relational database and can't work out whether I should use attributes or elements. What is best practice when building XML files?
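    As an illustration only (the class and variable names below are made up, and this is not a ruling on best practice), both shapes are easy to produce from relational data with LINQ to XML; values that may repeat or need child structure, such as the addresses, have to be elements either way.

    using System;
    using System.Xml.Linq;

    class XmlShapes
    {
        static void Main()
        {
            // Element style: every value is its own node.
            XElement elementStyle = new XElement("user",
                new XElement("name", "David Smith"),
                new XElement("phone", "0441 234443"),
                new XElement("addresses",
                    new XElement("address",
                        new XElement("street", "1 Some Street"),
                        new XElement("town", "Toy Town"),
                        new XElement("country", "UK"))));

            // Attribute style: scalar values become attributes on the owning node.
            XElement attributeStyle = new XElement("user",
                new XAttribute("name", "David Smith"),
                new XAttribute("phone", "0441 234443"),
                new XElement("addresses",
                    new XElement("address",
                        new XAttribute("street", "1 Some Street"),
                        new XAttribute("town", "Toy Town"),
                        new XAttribute("country", "UK"))));

            Console.WriteLine(elementStyle);
            Console.WriteLine(attributeStyle);
        }
    }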

    Read the article

  • How to convert this code-based WPF tooltip to Silverlight?

    - by Edward Tanguay
    The following ToolTip code works in WPF. I'm trying to get it to work in Silverlight. But it gives me these errors: TextBlock does not contain a definition for ToolTip. Cursors does not contain a definition for Help. ToolTipService does not contain a definition for SetInitialShowDelay. How can I get this to work in Silverlight? using System.Windows; using System.Windows.Controls; using System.Windows.Input; using System.Windows.Media; namespace TestHover29282 { public partial class Window1 : Window { public Window1() { InitializeComponent(); AddCustomer("Jim Smith"); AddCustomer("Joe Jones"); AddCustomer("Angie Jones"); AddCustomer("Josh Smith"); } void AddCustomer(string name) { TextBlock tb = new TextBlock(); tb.Text = name; ToolTip tt = new ToolTip(); tt.Content = "This is some info on " + name + "."; tb.ToolTip = tt; tt.Cursor = Cursors.Help; ToolTipService.SetInitialShowDelay(tb, 0); MainStackPanel.Children.Add(tb); } } }
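    In Silverlight the tooltip is attached with ToolTipService.SetToolTip rather than through a ToolTip property, and, as the compiler errors indicate, Cursors.Help and ToolTipService.SetInitialShowDelay are not available. A hedged sketch of the same method rewritten for Silverlight (assuming a MainPage UserControl that still contains a MainStackPanel) could look like this:

    using System.Windows.Controls;
    using System.Windows.Input;

    namespace TestHover29282
    {
        public partial class MainPage : UserControl
        {
            public MainPage()
            {
                InitializeComponent();
                AddCustomer("Jim Smith");
                AddCustomer("Joe Jones");
                AddCustomer("Angie Jones");
                AddCustomer("Josh Smith");
            }

            void AddCustomer(string name)
            {
                TextBlock tb = new TextBlock();
                tb.Text = name;

                // Silverlight's TextBlock has no ToolTip property, so the tooltip
                // is attached through ToolTipService instead.
                ToolTipService.SetToolTip(tb, "This is some info on " + name + ".");

                // Cursors.Help and SetInitialShowDelay do not exist in Silverlight;
                // Cursors.Hand is only a stand-in, and the show delay is fixed.
                tb.Cursor = Cursors.Hand;

                MainStackPanel.Children.Add(tb);
            }
        }
    }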

    Read the article

  • Problem merging similar XML files with XSL

    - by LOlliffe
    I have two documents that I need to merge, and the merge works in a way that I haven't been able to find covered in other examples. Namely, it needs to match not only on a node's attribute at one level, but also on the value of an attribute one node level below that, in order to get that node's value. I'm trying to take this sample: <?xml version="1.0" encoding="UTF-8" ?> <marc:collection xmlns:marc="http://www.loc.gov/MARC21/slim" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <marc:record> <marc:datafield tag="035" ind1=" " ind2=" "> <marc:subfield code="a">12345</marc:subfield> </marc:datafield> <marc:datafield tag="041" ind1=" " ind2=" "> <marc:subfield code="a">eng</marc:subfield> </marc:datafield> <marc:datafield tag="650" ind1=" " ind2="4"> <marc:subfield code="a">Art</marc:subfield> </marc:datafield> <marc:datafield tag="949" ind1=" " ind2=" "> <marc:subfield code="i">Review of conference proceedings</marc:subfield> </marc:datafield> </marc:record> <marc:record> <marc:datafield tag="035" ind1=" " ind2=" "> <marc:subfield code="a">54321</marc:subfield> </marc:datafield> <marc:datafield tag="041" ind1=" " ind2=" "> <marc:subfield code="a">eng</marc:subfield> </marc:datafield> <marc:datafield tag="650" ind1=" " ind2="4"> <marc:subfield code="a">Byzantine</marc:subfield> </marc:datafield> </marc:record> </marc:collection> and, when the value of datafield 035, subfield a matches (e.g. "12345"), pull in the matching record's datafields from this second document: <marc:collection xmlns:marc="http://www.loc.gov/MARC21/slim" xmlns:fn="http://www.w3.org/2005/xpath-functions" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:fo="http://www.w3.org/1999/XSL/Format"> <marc:record> <marc:datafield ind2=" " ind1=" " tag="035"> <marc:subfield code="a">12345</marc:subfield> </marc:datafield> <marc:datafield ind2="4" ind1=" " tag="650"> <marc:subfield code="a">General works</marc:subfield> <marc:subfield code="x">Historians and critics</marc:subfield> <marc:subfield code="x">Smith, John, 1834-1917</marc:subfield> </marc:datafield> <marc:datafield ind2="4" ind1=" " tag="650"> <marc:subfield code="a">Généralités</marc:subfield> <marc:subfield code="x">Historiens et critiques d'art</marc:subfield> <marc:subfield code="x">Dietrichson, Lorentz, 1834-1917</marc:subfield> </marc:datafield> <marc:datafield ind2=" " ind1=" " tag="654"> <marc:subfield code="a">General works</marc:subfield> </marc:datafield> <marc:datafield ind2=" " ind1=" " tag="654"> <marc:subfield code="a">Généralités</marc:subfield> <marc:subfield code="b">Historiens et critiques d'art</marc:subfield> <marc:subfield code="b">Smith, John, 1834-1917</marc:subfield> </marc:datafield> </marc:record> <marc:record> <marc:datafield ind2=" " ind1=" " tag="035"> <marc:subfield code="a">54321</marc:subfield> </marc:datafield> <marc:datafield ind2="4" ind1=" " tag="650"> <marc:subfield code="a">General works</marc:subfield> <marc:subfield code="x">Historians and critics</marc:subfield> <marc:subfield code="x">Lange, Julius Henrik, 1838-1896</marc:subfield> </marc:datafield> </marc:record> </marc:collection> The result should be: <?xml version="1.0" encoding="UTF-8" ?> <marc:collection xmlns:marc="http://www.loc.gov/MARC21/slim" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <marc:record> <marc:datafield tag="035" ind1=" " ind2=" "> <marc:subfield code="a">12345</marc:subfield> </marc:datafield> <marc:datafield tag="041" ind1=" " ind2=" "> <marc:subfield code="a">eng</marc:subfield> </marc:datafield> <marc:datafield tag="650" ind1=" " ind2="4"> <marc:subfield code="a">Art</marc:subfield> </marc:datafield> <marc:datafield ind2="4" ind1=" " tag="650"> <marc:subfield code="a">General works</marc:subfield> <marc:subfield code="x">Historians and critics</marc:subfield> <marc:subfield code="x">Smith, John, 1834-1917</marc:subfield> </marc:datafield> <marc:datafield ind2="4" ind1=" " tag="650"> <marc:subfield code="a">Généralités</marc:subfield> <marc:subfield code="x">Historiens et critiques d'art</marc:subfield> <marc:subfield code="x">Dietrichson, Lorentz, 1834-1917</marc:subfield> </marc:datafield> <marc:datafield ind2=" " ind1=" " tag="654"> <marc:subfield code="a">General works</marc:subfield> </marc:datafield> <marc:datafield ind2=" " ind1=" " tag="654"> <marc:subfield code="a">Généralités</marc:subfield> <marc:subfield code="b">Historiens et critiques d'art</marc:subfield> <marc:subfield code="b">Smith, John, 1834-1917</marc:subfield> </marc:datafield> <marc:datafield tag="949" ind1=" " ind2=" "> <marc:subfield code="i">Review of conference proceedings</marc:subfield> </marc:datafield> </marc:record> <marc:record> <marc:datafield tag="035" ind1=" " ind2=" "> <marc:subfield code="a">54321</marc:subfield> </marc:datafield> <marc:datafield tag="041" ind1=" " ind2=" "> <marc:subfield code="a">eng</marc:subfield> </marc:datafield> <marc:datafield tag="650" ind1=" " ind2="4"> <marc:subfield code="a">Byzantine</marc:subfield> </marc:datafield> <marc:datafield ind2="4" ind1=" " tag="650"> <marc:subfield code="a">General works</marc:subfield> <marc:subfield code="x">Historians and critics</marc:subfield> <marc:subfield code="x">Lange, Julius Henrik, 1838-1896</marc:subfield> </marc:datafield> </marc:record> </marc:collection> I've tried examples that I found that did lookups, but none of them seemed to work. I didn't include any of my XSL, because all of my results were disastrous. I keep looking at it thinking it must be simple, but I'm just not getting any decent results. Any help or pointers would be greatly appreciated. Thanks!
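    This is not the XSL the question asks for, but as a hedged sketch of the matching logic (key both documents on datafield 035, subfield a, then copy the other datafields across), a LINQ to XML version could look like the following. The file names are placeholders, it assumes every record carries an 035 subfield a value, and the copied fields are simply appended at the end of each record rather than in the exact order shown above.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Xml.Linq;

    class MarcMerge
    {
        static void Main()
        {
            XNamespace marc = "http://www.loc.gov/MARC21/slim";

            // Placeholder file names: base.xml is the first sample, extra.xml the second.
            XDocument baseDoc  = XDocument.Load("base.xml");
            XDocument extraDoc = XDocument.Load("extra.xml");

            // Key = value of datafield 035, subfield a.
            Func<XElement, string> key = record =>
                (string)record.Elements(marc + "datafield")
                              .Where(d => (string)d.Attribute("tag") == "035")
                              .Elements(marc + "subfield")
                              .FirstOrDefault(s => (string)s.Attribute("code") == "a");

            Dictionary<string, XElement> extraByKey =
                extraDoc.Root.Elements(marc + "record").ToDictionary(key);

            foreach (XElement record in baseDoc.Root.Elements(marc + "record"))
            {
                XElement match;
                if (extraByKey.TryGetValue(key(record), out match))
                {
                    // Append every non-035 datafield from the matching record.
                    record.Add(match.Elements(marc + "datafield")
                                    .Where(d => (string)d.Attribute("tag") != "035"));
                }
            }

            baseDoc.Save("merged.xml");
        }
    }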

    Read the article

< Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >