Search Results

Search found 14378 results on 576 pages for 'record count'.


  • Function Folding in #PowerQuery

    - by Darren Gosbell
    Originally posted on: http://geekswithblogs.net/darrengosbell/archive/2014/05/16/function-folding-in-powerquery.aspx
    Looking at a typical Power Query query you will notice that it's made up of a number of small steps. As an example, take a look at the query I did in my previous post about joining a fact table to a slowly changing dimension. It was roughly built up of the following steps:
      1. Get all records from the fact table
      2. Get all records from the dimension table
      3. Do an outer join between these two tables on the business key (resulting in an increase in the row count, as there are multiple records in the dimension table for each business key)
      4. Filter out the excess rows introduced in step 3
      5. Remove extra columns that are not required in the final result set
    If Power Query were to execute a query like this literally, following the same steps in the same order, it would not be overly efficient, particularly if your two source tables were quite large. However, Power Query has a feature called function folding where it can take a number of these small steps and push them down to the data source. The degree of function folding that can be performed depends on the data source. As you might expect, relational data sources like SQL Server, Oracle and Teradata support folding, but so do some of the other sources like OData, Exchange and Active Directory.
    To explore how this works I took the data from my previous post and loaded it into a SQL database, then converted my Power Query expression to source its data from that database. Below is the resulting Power Query expression, which I edited by hand so that the whole thing can be shown in a single expression:

        let
            SqlSource = Sql.Database("localhost", "PowerQueryTest"),
            BU = SqlSource{[Schema="dbo",Item="BU"]}[Data],
            Fact = SqlSource{[Schema="dbo",Item="fact"]}[Data],
            Source = Table.NestedJoin(Fact,{"BU_Code"},BU,{"BU_Code"},"NewColumn"),
            LeftJoin = Table.ExpandTableColumn(Source, "NewColumn"
                                          , {"BU_Key", "StartDate", "EndDate"}
                                          , {"BU_Key", "StartDate", "EndDate"}),
            BetweenFilter = Table.SelectRows(LeftJoin, each (([Date] >= [StartDate]) and ([Date] <= [EndDate])) ),
            RemovedColumns = Table.RemoveColumns(BetweenFilter,{"StartDate", "EndDate"})
        in
            RemovedColumns

    If the above query were run step by step in a literal fashion, you would expect it to run two queries against the SQL database, doing "SELECT * ..." from both tables. However, a profiler trace shows just the following single SQL query:

        select [_].[BU_Code],
            [_].[Date],
            [_].[Amount],
            [_].[BU_Key]
        from
        (
            select [$Outer].[BU_Code],
                [$Outer].[Date],
                [$Outer].[Amount],
                [$Inner].[BU_Key],
                [$Inner].[StartDate],
                [$Inner].[EndDate]
            from [dbo].[fact] as [$Outer]
            left outer join
            (
                select [_].[BU_Key] as [BU_Key],
                    [_].[BU_Code] as [BU_Code2],
                    [_].[BU_Name] as [BU_Name],
                    [_].[StartDate] as [StartDate],
                    [_].[EndDate] as [EndDate]
                from [dbo].[BU] as [_]
            ) as [$Inner] on ([$Outer].[BU_Code] = [$Inner].[BU_Code2] or [$Outer].[BU_Code] is null and [$Inner].[BU_Code2] is null)
        ) as [_]
        where [_].[Date] >= [_].[StartDate] and [_].[Date] <= [_].[EndDate]

    The resulting query is a little strange - you can probably tell that it was generated programmatically - but if you look closely you'll notice that every single part of the Power Query formula has been pushed down to SQL Server.
    Power Query itself ends up just constructing the query and passing the results back to Excel; it does not do any of the data transformation steps itself. So now you can feel a bit more comfortable showing Power Query to your less technical colleagues, knowing that the tool will do its best to fold all the small steps in Power Query down to the most efficient query that it can run against the source systems.

    Read the article

  • Disqus integration in website.. what is wrong??

    - by Thieme Hennis
    Hi, I'm trying to embed a Disqus forum in a website I created. I used the exact code and instructions they give in their installation guide, but I just can't get it to work, and there's not much on Google either. Is something wrong in the code? Should I change anything?

        <!DOCTYPE HTML>
        <html>
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
        <link rel="shortcut icon" href="../favicon.ico">
        <title>Little Louie | Hennis &amp; Blaisse Lovers Productions</title>
        <META NAME="keywords" CONTENT="some,tags">
        <link href="../style2.css" rel="stylesheet" type="text/css">
        </head>
        <body>
        <p><b>Some text</p>
        <p>
        <div id="disqus_thread"></div>
        <script type="text/javascript">
        (function() {
            var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
            dsq.src = 'http://littelouie.disqus.com/embed.js';
            (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
        })();
        </script>
        <noscript>Please enable JavaScript to view the <a href="http://disqus.com/?ref_noscript=littelouie">comments powered by Disqus.</a></noscript>
        <a href="http://disqus.com" class="dsq-brlink">blog comments powered by <span class="logo-disqus">Disqus</span></a>
        </p>
        <p>
        <script type="text/javascript">
        var disqus_shortname = 'littelouie';
        (function () {
            var s = document.createElement('script'); s.async = true;
            s.src = 'http://disqus.com/forums/littelouie/count.js';
            (document.getElementsByTagName('HEAD')[0] || document.getElementsByTagName('BODY')[0]).appendChild(s);
        }());
        </script>
        </p>
        </body>
        </html>

    Read the article

  • Particle System in XNA - cannot draw particle

    - by Dave Voyles
    I'm trying to implement a simple particle system in my XNA project. I'm going by RB Whitaker's tutorial, and it seems simple enough. I'm trying to draw particles within my menu screen. Below I've included the code which I think is applicable. I'm coming up with one error in my build, and it is stating that I need to create a new instance of the EmitterLocation from the particleEngine. When I hover over particleEngine.EmitterLocation = new Vector2(Mouse.GetState().X, Mouse.GetState().Y); it states that particleEngine is returning a null value. What could be causing this?

        /// <summary>
        /// Base class for screens that contain a menu of options. The user can
        /// move up and down to select an entry, or cancel to back out of the screen.
        /// </summary>
        abstract class MenuScreen : GameScreen

        ParticleEngine particleEngine;

        public void LoadContent(ContentManager content)
        {
            if (content == null)
            {
                content = new ContentManager(ScreenManager.Game.Services, "Content");
            }

            base.LoadContent();

            List<Texture2D> textures = new List<Texture2D>();
            textures.Add(content.Load<Texture2D>(@"gfx/circle"));
            textures.Add(content.Load<Texture2D>(@"gfx/star"));
            textures.Add(content.Load<Texture2D>(@"gfx/diamond"));
            particleEngine = new ParticleEngine(textures, new Vector2(400, 240));
        }

        public override void Update(GameTime gameTime, bool otherScreenHasFocus, bool coveredByOtherScreen)
        {
            base.Update(gameTime, otherScreenHasFocus, coveredByOtherScreen);

            // Update each nested MenuEntry object.
            for (int i = 0; i < menuEntries.Count; i++)
            {
                bool isSelected = IsActive && (i == selectedEntry);
                menuEntries[i].Update(this, isSelected, gameTime);
            }

            particleEngine.EmitterLocation = new Vector2(Mouse.GetState().X, Mouse.GetState().Y);
            particleEngine.Update();
        }

        public override void Draw(GameTime gameTime)
        {
            // make sure our entries are in the right place before we draw them
            UpdateMenuEntryLocations();

            GraphicsDevice graphics = ScreenManager.GraphicsDevice;
            SpriteBatch spriteBatch = ScreenManager.SpriteBatch;
            SpriteFont font = ScreenManager.Font;

            spriteBatch.Begin();
            // Draw stuff logic
            spriteBatch.End();

            particleEngine.Draw(spriteBatch);
        }

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Schema Generation

    - by Stuart Brierley
    Having previously detailed the installation of the Community ODBC Adapter for BizTalk 2009, the next thing I will be looking at is the generation of schemas using this ODBC adapter. Within your BizTalk 2009 project, right click the project and select Add Generated Items. In the resultant window choose Add Adapter Metadata and click Add to open the Add Adapter Wizard. Check that the BizTalk Server and Database names are correct, select the ODBC adapter and click Next. You must now set the connection string. To start with choose Set, then New DSN (data source name). You now need to define the Data Source you will be connecting to. On the User DSN tab select Add and then the driver you want to use. In this case I am going to use the MySQL ODBC Driver. A User DSN will only be visible on the current machine with you as a user. *Although I initially set up a User DSN and this was fine for creating schemas with, I later realised that you actually need a System DSN, as the BizTalk host service needs this to be able to connect to the database on a receive or send port. You will then be asked to set up the MySQL ODBC Data Source. In my case this is a local database making use of named pipes, so I had to make sure that I ticked the "Force use of named pipes" check box and removed the "# The Pipe the MySQL Server will use socket=mysql" line from mysql.ini; with that line in place the connection would fail, as there is no apparent way to specify the pipe name in the ODBC driver configuration. This will then update the User DSN tab with the new Data Source. Make sure that you select it and press OK. Select it again in the Choose Data Source window and press OK. On the ODBC Transport window select Next. You will now be presented with the Schema Information window, where you must supply the namespace, type and root element names for your schema. Next choose the type of statement that you will be using to create your schema - in this case I am using a stored procedure. *I later discovered that this option is fine for MySQL stored procedures without input parameters, but failed for MySQL stored procedures with input parameters. (I will be posting on the way to handle input parameters soon.) Next you will need to specify the name of the stored procedure. In this case I have a simple stored procedure to return all the data held by my TestTable in MySQL:

        Select * from TestTable;

    The table itself has three columns: Name, Sex and Married. Selecting Finish should now hopefully create your schemas based on the input and output from your stored procedure. In my case I have an empty schema for the request (after all, I have no parameters for the stored procedure) and a response schema comprised of a Table Record with Name, Sex and Married children. Next I will be looking at the use of the ODBC adapter with:
      Receive ports
      Send ports

    Read the article

  • Taking HRMS to the Cloud to Simplify Human Resources Management

    - by HCM-Oracle
    By Anke Mogannam With human capital management (HCM) a top-of-mind issue for executives in every industry, human resources (HR) organizations are poised to have their day in the sun—proving not just their administrative worth but their strategic value as well.  To make good on that promise, however, HR must modernize. Indeed, if HR is to act as an agent of change—providing the swift reallocation of employees  and the rapid absorption of employee data required for enterprises to shift course on a dime—it must first deal with the disruptive change at its own front door. And increasingly, that means choosing the right technology and human resources management system (HRMS) for managing the entire employee lifecycle. Unfortunately, for most organizations, this task has proved easier said than done. This is because while much has been written about advances in HRMS technology, until recently, most of those advances took the form of disparate on-premises solutions designed to serve very specific purposes. Although this may have resulted in key competencies in certain areas, it also meant that processes for core HR functions like payroll and benefits were being carried out in separate systems from those used for talent management, workforce optimization, training, and so on. With no integration—and no single system of record—processes were disconnected, ease of use was impeded, user experience was diminished, and vital data was left untapped.  Today, however, that scenario has begun to change, and end-to-end cloud-based HCM solutions have moved from wished-for innovations to real-life solutions. Why, then, have HR organizations been so slow in adopting them? The answer—it would seem—is, “It’s complicated.” So complicated, in fact, that 45 percent of the respondents to PwC’s “Annual HR Technology Survey” (for 2013) reported having no formal HR software roadmap, and 40 percent stated that they “did not know” whether their organizations would be increasing their use of cloud or software as a service (SaaS) for HR.  Clearly, HR organizations need help sorting through the morass of HR software options confronting them. But just as clearly, there’s an enormous opportunity awaiting those that do. The trick will come in charting a course that allows HR to leverage existing technology while investing in the cloud-based solutions that will deliver the end-to-end processes, easy-to-understand analytics, and superior adaptability required to simplify—and add value to—every aspect of employee management. The Opportunity therefore is to cut costs, drive Innovation, and increase engagement by moving to cloud-based HCM.  Then you will benefit from one Interface, leverage many access points, and  gain at-a-glance insight across your entire workforce. With many legacy on-premises HR systems not being efficient anymore and cloud-based, integrated systems that span the range of HR functions finally reaching maturity, the time is ripe for moving core HR to the cloud. Indeed, for the first time ever there are more HRMS replacement initiatives than HRMS upgrade initiatives under way, and the majority of them involve moving to the cloud per Cedar Crestone’s 2013-2014 HRMS survey. To learn how you can launch your own cloud HCM initiative and begin using HR to power the enterprise, visit Oracle HRMS in the Cloud and Oracle’s new customer 2 cloud program. 
Anke Mogannam brings more than 16 years of marketing and human capital management experience in the technology industries to her role at Oracle where she is part of the Human Capital Management applications marketing team. In that role, Anke drives content marketing, messaging, go-to-market activities, integrated marketing campaigns, and field enablement. Prior to joining Oracle, Anke held several roles in communications, marketing, HCM product strategy and product management at PeopleSoft, SAP, Workday and Saba. Follow her on Twitter @amogannam

    Read the article

  • How can I gather client's data on Google App Engine without using Datastore/Backend Instances too much?

    - by ruslan
    One of the projects I'm working on is an online survey engine. It's my first big commercial project on Google App Engine. I need your advice on how to collect stats and efficiently record them in the Datastore without bankrupting me. The initial requirements are: after a user finishes a survey, the client sends a list of pairs [ID (int) + PercentHit (double)]. This list shows how closely the answers of this user match the predefined answers of reference answerers (which are identified by IDs). I call them "target IDs". The creator of the survey wants to see the aggregated % for given IDs for the last hour, a particular timeframe, or from the beginning of the survey. Some surveys may have thousands of target/reference answerers. So I created this entity:

        public class HitsStatsDO implements Serializable {
            @Id transient private Long id;
            transient private Long version = (long) 0;
            transient private Long startDate;
            @Parent transient private Key parent; // fake parent which contains target id
            @Transient int targetId;
            private double avgPercent;
            private long hitCount;
        }

    But writing a HitsStatsDO for each target from each user would produce a lot of data. For instance, I had a survey with 3000 targets which was answered by ~4 million people within one week, with 300K people taking the survey on the first day. Even if we assume they were answering evenly over 24 hours, that would give us ~1040 writes/second. Obviously that hits the concurrent-write limit of the Datastore. So I decided I'd collect data for one hour and then save it; that's why there are avgPercent and hitCount in HitsStatsDO. GAE instances are stateless, so I had to use a dynamic backend instance. There I have something like this:

        // Contains stats for one hour
        private class Shard {
            ReadWriteLock lock = new ReentrantReadWriteLock();
            Map<Integer, HitsStatsDO> map = new HashMap<Integer, HitsStatsDO>(); // Key is target ID
            public void saveToDatastore();
            public void updateStats(Long startDate, Map<Integer, Double> hits);
        }

    and a map holding the shard for the current hour and the previous hour (which doesn't stay there for long):

        private HashMap<Long, Shard> shards = new HashMap<Long, Shard>(); // Key is HitsStatsDO.startDate

    So once per hour I dump the Shard for the previous hour to the Datastore. Plus I have a class LifetimeStats which keeps a Map<Integer, HitsStatsDO> in memcache, where the map key is the target ID. Also, in my backend shutdown hook method I dump the stats for the unfinished hour to the Datastore. There is only one major issue here - I have only ONE backend instance :) It raises the following questions, on which I'd like to hear your opinion:
      1. Can I do this without using a backend instance?
      2. What if one instance is not enough? How can I split data between multiple dynamic backend instances? It's hard because I don't know how many I have, since Google creates new ones as load increases. I know I can launch an exact number of resident backend instances. But how many? 2, 5, 10? What if I have no load at all for a week? Constantly running 10 backend instances is too expensive.
      3. What do I do with data from clients while a backend instance is dead/restarting?
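
    To help picture the hourly aggregation being described, here is a rough, language-neutral sketch in Python (the question's actual code is Java on App Engine; the class name HourlyStats, the persist_fn callback and the timing details are assumptions for illustration only):

        import time
        from collections import defaultdict
        from threading import Lock

        class HourlyStats:
            """Accumulate (target_id -> running percent total, hit count) per hour in memory,
            then persist one record per (hour, target) instead of one write per answer -
            the same idea as the Shard / HitsStatsDO classes above."""

            def __init__(self, persist_fn):
                self._lock = Lock()
                self._buckets = defaultdict(dict)   # hour_start -> {target_id: (sum_percent, hit_count)}
                self._persist = persist_fn          # called as persist_fn(hour_start, target_id, avg_percent, hit_count)

            def update(self, hits, now=None):
                """hits: {target_id: percent_hit} reported by one client after finishing a survey."""
                hour = int((now or time.time()) // 3600) * 3600
                with self._lock:
                    bucket = self._buckets[hour]
                    for target_id, percent in hits.items():
                        total, count = bucket.get(target_id, (0.0, 0))
                        bucket[target_id] = (total + percent, count + 1)

            def flush_before(self, hour_start):
                """Persist and drop every bucket older than hour_start (call once per hour
                and from the shutdown hook, as described in the question)."""
                with self._lock:
                    old = {h: b for h, b in self._buckets.items() if h < hour_start}
                    for h in old:
                        del self._buckets[h]
                for h, bucket in old.items():
                    for target_id, (total, count) in bucket.items():
                        self._persist(h, target_id, total / count, count)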

    Read the article

  • When a problem is resolved

    - by Rob Farley
    This month’s T-SQL Tuesday is hosted by Jen McCown, and she’s picked the topic of Resolutions. It’s a new year, and she’s thinking about what people have resolved to do this year. Unfortunately, I’ve never really done resolutions like that. I see too many people resolve to quit smoking, or lose weight, or whatever, and fail miserably. I’m not saying I don’t set goals, but it’s not a thing for New Year. The obvious joke is “1920x1080” as a resolution, but I’m not going there. I think Resolving is a strange word. It makes it sound like I’m having to solve a problem a second time, when actually, it’s more along the lines of solving a problem well enough for it to count as finished. If something has been resolved, a solution has been provided. There is a resolution, through the provision of a solution. It’s a strangeness of English. When I look up the word resolution at dictionary.com, it has 12 options, including “settling of a problem”. There’s a finality about resolution. If you resolve to do something, you’re saying “Yes. This is a done thing. I’m resolving to do it, which means that it may as well be complete already.” I like to think I resolve problems, rather than just solving them. I want my solving to be final and complete. If I tune a query, I don’t want to find that I’m back in there, re-tuning it at some point. Strangely, if I re-solve a problem, that implies that I didn’t resolve it in the first place. I only solved it. Temporarily. We “data-folk” live in a world where the most common answer is “It depends.” Frustratingly, the thing an answer depends on may still be changing in the system in question. That probably means that any solution that is put in place may need reinvestigating at some point later. So do I resolve things? Yes. Am I Chuck Norris, and solve things so well the world would break first? No. Do these two claims happily sit beside each other? No, unfortunately not. But I happily take responsibility for things, and let my clients depend on me to see it through. As far as they are concerned, it is resolved. And so I resolve to keep resolving, right through 2011.

    Read the article

  • How to suggest using an ORM instead of stored procedures?

    - by Wayne M
    I work at a company that only uses stored procedures for all data access, which makes it very annoying to keep our local databases in sync, as every commit we have to run new procs. I have used some basic ORMs in the past and I find the experience much better and cleaner. I'd like to suggest to the development manager and the rest of the team that we look into using an ORM of some kind for future development (the rest of the team are only familiar with stored procedures and have never used anything else). The current architecture is .NET 3.5 written like .NET 1.1, with "god classes" that use a strange implementation of ActiveRecord and return untyped DataSets which are looped over in code-behind files - the classes work something like this:

        class Foo
        {
            public bool LoadFoo()
            {
                bool blnResult = false;

                if (this.FooID == 0)
                {
                    throw new Exception("FooID must be set before calling this method.");
                }

                DataSet ds = // ... call to Sproc

                if (ds.Tables[0].Rows.Count > 0)
                {
                    foo.FooName = ds.Tables[0].Rows[0]["FooName"].ToString();
                    // other properties set
                    blnResult = true;
                }

                return blnResult;
            }
        }

        // Consumer
        Foo foo = new Foo();
        foo.FooID = 1234;
        foo.LoadFoo();
        // do stuff with foo...

    There is pretty much no application of any design patterns. There are no tests whatsoever (nobody else knows how to write unit tests, and testing is done by manually loading up the website and poking around). Looking through our database we have 199 tables, 13 views, a whopping 926 stored procedures and 93 functions. About 30 or so tables are used for batch jobs or external things; the remainder are used in our core application. Is it even worth pursuing a different approach in this scenario? I'm talking about moving forward only, since we aren't allowed to refactor the existing code because "it works", so we cannot change the existing classes to use an ORM. But I don't know how often we add brand new modules instead of adding to or fixing current modules, so I'm not sure if an ORM is the right approach (too much invested in stored procedures and DataSets). If it is the right choice, how should I present the case for using one? Off the top of my head, the only benefits I can think of are cleaner code (although it might not be, since the current architecture isn't built with ORMs in mind, so we would basically be jury-rigging ORMs onto future modules while the old ones would still be using DataSets) and less hassle having to remember which procedure scripts have been run and which still need to be run, but that's it, and I don't know how compelling an argument that would be. Maintainability is another concern, but one that nobody except me seems to be concerned about.

    Read the article

  • The Three-Legged Milk Stool - Why Oracle Fusion Incentive Compensation makes the difference!

    - by Richard Lefebvre
    During the London Olympics, we were exposed to dozens of athletes who worked with sports psychologists to maximize their performance. Executives often hire business psychologists to coach their teams to excellence. In the same vein, Fusion Incentive Compensation can be used to get people to change their sales behavior so we can make our numbers. But what about using incentive compensation solutions in a non-sales scenario to drive change? Recently, I was working an opportunity where a company was having a low user adoption rate for Salesforce.com, which was causing problems for them. I suggested they use Fusion Incentive Comp to change the reps' behavior. We tossed around the idea of tracking user adoption by creating a variable bonus for reps based on how well they forecasted revenues in the new system. Another thought was to reward the reps for how often they logged into the system or for the percentage of leads that became opportunities and turned into revenue. A new twist on a great product. Fusion CRM's Sweet Spot I'm excited about the sales performance management (SPM) tools in Fusion CRM. This trio of Incentive Compensation, Territory Management, and Quota Management sets us apart from the competition because Oracle is the only vendor that provides all three of these capabilities on a single tech stack, in a single application, and with a single look and feel. The niche vendors offer standalone territory or incentive compensation solutions, but then the customer has to custom build the other tools and can end up with a Frankenstein-type environment. On average, companies overpay sales commissions by three to eight percent. You calculate that number for a company the size of Oracle for one quarter and it makes a pretty air-tight financial case for using SPM tools to figure accurate commissions. Plus when sales reps get the right compensation, they can be out selling rather than spending precious time figuring out what they didn't get paid or looking for another job. And one more thing ... Oracle knows incentive comp. We have been a Gartner Market Scope leader in this space for the last five years. Our solution gets high marks because of its scalability and because of its interoperability with other technologies. And now that we're leading with Fusion, our incentive compensation offering includes the innovations that the Fusion team built, plus enhancements from the E-Business Suite Incentive Comp team. It's a case of making a good thing even better. (See product video.) The "Wedge" Apps In a number of accounts that I'm working on, there is a non-Oracle CRM system of record. That gives me the perfect opportunity to introduce the benefits of our SPM tools and to get the customer using Fusion. Then the door is wide open for the company to uptake more of Fusion CRM, especially since all the integrations they need are out of the box. I really believe that implementing this wedge of SPM tools is the ticket to taking market share away from other vendors. It allows us to insert ourselves in an environment where no other CRM solution in the market has the extending capabilities of Fusion. Not Just Your Usual Suspects Usually the stakeholders that I talk to for Territory Management are tightly aligned with the sales management team. When I sell the quota planning tool, I'm talking to finance people on the ERP side of the house who are measuring quotas and forecasting revenue. 
And then Incentive Comp is of most interest to the sales operations people, and generally these people roll up to either HR or the payroll department. I think of our Fusion SPM tools as a three-legged stool straddling an organization's Sales, Finance, and HR departments. So when you're prospecting for opportunities -- yes, people with a CRM perspective will be very interested -- but don't limit yourselves to that constituency. You might find stakeholders in accounting, revenue planning, or HR compensation teams. You just might discover, as I did at United Airlines, that the HR organization is spearheading the CRM project because incentive compensation is what they need ... and they're the ones with the budget. Jason Loh Global Solutions Manager, Fusion CRM Sales Planning Oracle Corporation

    Read the article

  • 101 Ways to Participate...and make the future Java

    - by heathervc
     In case you missed it earlier today, and as promised in BOF6283, here are the 101 Ways to Improve (and Make the Future) Java...thanks to Bruno Souza of SouJava and Martijn Verburg of the London Java Community for their contributions! Join or create a JUG Come to the meetings Help promoting your JUG: twitter, facebook, etc Find someone that can give a talk Get your company to sponsor (a meeting, an event) Organize an activity (meetings, hackathons, dojos, etc) Answer questions on a mailing list (or simply join!) Volunteer for a small, one time tasks (creating a web page, helping with an activity) Come early to an event, and help to carry the piano Moderate a list or add things to the wiki Participate in the organization meetings or mailing lists Take pictures of an event or meeting and publish them online Write a blog about an event or meeting, to help promote the group Help record and post a session online Present your JavaOne experience when you get back Repeat the best talk you saw at JavaOne at a JUG meeting Send this list of ideas to other Java developers in your area so they can help out too! Present a step-by-step tutorial Present GreenFoot and Alice to school students Present BlueJ and Alice to university students Teach those tools to teachers and professors Write a step-by-step tutorial on your blog or to a magazine Create a page that lists resources Give a talk about your favorite Java feature or technology Learn a new Java API and present to your co-workers Then, present in a JUG meeting, and then, present it in an event in your area, and submit it to JavaOne! Create a study group to get certified or to learn some new Java technology Teach a non-Java developer how to download the basic tools and where to find more information Download and use an open source project Improve the documentation Write an article or a blog post about the project Write an FAQ Join and participate on the mailing list Describe a bug in detail and submit a bug report Fix a bug and submit it to the project Give a talk about it at a JUG meeting Teach your co-workers how to use the project Sign up to Adopt a JSR Test regular builds of the Reference Implementation (RI) Report bugs in the RI Submit Feature Requests to the spec Triage issues on the issue tracker Run a hack day to discuss the API Moderate mailing lists and forums Create an FAQ or Wiki Evangelize a specification on Twitter, G+, Hacker News, etc Give a lightning talk Help build the RI Help build the Technical Compatibility Kit (TCK) Create a Podcast Learn Latin - e.g. 
legal language, translate to English Sign up to Adopt OpenJDK Run a Bugathon Fix javac compiler warnings Build virtual images Add tests to Java Submit Javadoc patches Give a webbing Teach someone to build OpenJDK Hold a brown bag session at work Fix the oldest known bug Overhaul Javadoc to use HTML Load the OpenJDK into different IDEs Run a build farm node Test your code on a nightly build Learn how to read Java byte code Visit JCP.org Follow jcp_org on Twitter Friend JCP on Facebook Read JCP Blog Register for JCP.org site Create a JSR Watch List Review JSRs in progress Comment on JSRs in progress, write and track bug reports, use cases, etc Review JSRs in Maintenance Comment on JSRs in Maintenance Implement Final JSRs Review the Transparency of JSRs in progress and provide feedback to the PMO and Spec Lead/community Become a JCP Member or associate with a current JCP member Nominate to serve on an Expert Group (EG) Serve on an EG Submit a JSR proposal and become Spec Lead Take a Spec Lead role in an Inactive or Dormant JSR Nominate for an Executive Committee (EC) seat Vote in the EC elections Vote in EC Special Elections Review EC Meeting Summaries Attend Spec Lead calls Write blogs, articles on your experiences Join the EC project on java.net Join JCP.Next on java.net/JSR 358 Participate on the JCP forums and join JSR projects on java.net Suggest agenda items for open EC meetings Attend public EC teleconference (2x per year) Attend open EC meetings at JavaOne Nominate for JCP Annual Awards Attend annual JavaOne and JCP Annual Awards Ceremony Attend JCP related BOF sessions and give your feedback to Program Office Invite JCP program office members to your JUG  or meetup Invite JSR Spec Leads to your JUG or meetup And always - hold a party!

    Read the article

  • Caching factory design

    - by max
    I have a factory class XFactory that creates objects of class X. Instances of X are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of class X are immutable, so the following code seems reasonable:

        # module xfactory.py
        import x

        class XFactory:
            _registry = {}

            def get_x(self, arg1, arg2, use_cache=True):
                if use_cache:
                    hash_id = hash((arg1, arg2))
                    if hash_id in self._registry:
                        return self._registry[hash_id]
                obj = x.X(arg1, arg2)
                if use_cache:
                    self._registry[hash_id] = obj
                return obj

        # module x.py
        class X:
            # ...

    Is it a good pattern? (I know it's not the actual Factory Pattern.) Is there anything I should change? Now, I find that sometimes I want to cache X objects to disk. I'll use pickle for that purpose, and store as values in the _registry the filenames of the pickled objects instead of references to the objects. Of course, _registry itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by giving pickle files the filenames that contain hash_id). Except now the validity of the cached object depends not only on the parameters passed to get_x(), but also on the version of the code that created these objects. Strictly speaking, even a memory-cached object could become invalid if someone modifies x.py or any of its dependencies, and reloads it while the program is running. So far I ignored this danger since it seems unlikely for my application. But I certainly cannot ignore it when my objects are cached to persistent storage. What can I do? I suppose I could make the hash_id more robust by calculating the hash of a tuple that contains arguments arg1 and arg2, as well as the filename and last modified date for x.py and every module and data file that it (recursively) depends on. To help delete cache files that won't ever be useful again, I'd add to the _registry the unhashed representation of the modified dates for each record. But even this solution isn't 100% safe, since theoretically someone might load a module dynamically, and I wouldn't know about it from statically analyzing the source code. If I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc. In addition, the frequency of changes in x.py and its dependencies is quite high, leading to heavy cache invalidation. Thus, I figured I might as well give up some safety, and invalidate the cache only when there is an obvious mismatch. This means that class X would have a class-level cache validation identifier that should be changed whenever the developer believes a change happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with arg1 and arg2 and becomes part of the hash keys stored in _registry. Since developers may forget to update the validation identifier or not realize that they invalidated existing cache, it would seem better to add another validation mechanism: class X can have a method that returns all the known "traits" of X. For instance, if X is a table, I might add the names of all the columns. The hash calculation will include the traits as well. I can write this code, but I am afraid that I'm missing something important; and I'm also wondering if perhaps there's a framework or package that can do all of this stuff already. Ideally, I'd like to combine in-memory and disk-based caching.
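
    As a rough illustration of the versioned-key idea discussed above, here is a minimal sketch of a disk cache whose key combines the constructor arguments, a manual version tag and the class "traits" (the names CACHE_VERSION, CACHE_DIR and the traits argument are assumptions for the example, not part of the original question):

        import hashlib
        import os
        import pickle

        CACHE_DIR = "cache"
        CACHE_VERSION = "v3"   # bump manually when X's logic changes enough to invalidate old pickles

        def _cache_key(arg1, arg2, traits):
            # Combine the arguments, the manual version tag and the class traits
            # (e.g. column names) into one stable key, as proposed in the post.
            payload = repr((arg1, arg2, CACHE_VERSION, sorted(traits)))
            return hashlib.sha256(payload.encode("utf-8")).hexdigest()

        def get_x_cached(arg1, arg2, factory, traits):
            os.makedirs(CACHE_DIR, exist_ok=True)
            path = os.path.join(CACHE_DIR, _cache_key(arg1, arg2, traits) + ".pkl")
            if os.path.exists(path):
                with open(path, "rb") as f:
                    return pickle.load(f)          # cache hit: reuse the pickled X
            obj = factory(arg1, arg2)              # cache miss: build and persist
            with open(path, "wb") as f:
                pickle.dump(obj, f)
            return obj

    An in-memory dictionary keyed by the same _cache_key value could sit in front of this to combine memory and disk caching, which is the combination the question asks about.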

    Read the article

  • Isometric layer moving inside map

    - by gronzzz
    i'm created isometric map and now trying to limit layer moving. Main idea, that i have left bottom, right bottom, left top, right top points, that camera can not move outside, so player will not see map out of bounds. But i can not understand algorithm of how to do that. It's my layer scale/moving code. - (void)touchBegan:(UITouch *)touch withEvent:(UIEvent *)event { _isTouchBegin = YES; } - (void)touchMoved:(UITouch *)touch withEvent:(UIEvent *)event { NSArray *allTouches = [[event allTouches] allObjects]; UITouch *touchOne = [allTouches objectAtIndex:0]; CGPoint touchLocationOne = [touchOne locationInView: [touchOne view]]; CGPoint previousLocationOne = [touchOne previousLocationInView: [touchOne view]]; // Scaling if ([allTouches count] == 2) { _isDragging = NO; UITouch *touchTwo = [allTouches objectAtIndex:1]; CGPoint touchLocationTwo = [touchTwo locationInView: [touchTwo view]]; CGPoint previousLocationTwo = [touchTwo previousLocationInView: [touchTwo view]]; CGFloat currentDistance = sqrt( pow(touchLocationOne.x - touchLocationTwo.x, 2.0f) + pow(touchLocationOne.y - touchLocationTwo.y, 2.0f)); CGFloat previousDistance = sqrt( pow(previousLocationOne.x - previousLocationTwo.x, 2.0f) + pow(previousLocationOne.y - previousLocationTwo.y, 2.0f)); CGFloat distanceDelta = currentDistance - previousDistance; CGPoint pinchCenter = ccpMidpoint(touchLocationOne, touchLocationTwo); pinchCenter = [self convertToNodeSpace:pinchCenter]; CGFloat predictionScale = self.scale + (distanceDelta * PINCH_ZOOM_MULTIPLIER); if([self predictionScaleInBounds:predictionScale]) { [self scale:predictionScale scaleCenter:pinchCenter]; } } else { // Dragging _isDragging = YES; CGPoint previous = [[CCDirector sharedDirector] convertToGL:previousLocationOne]; CGPoint current = [[CCDirector sharedDirector] convertToGL:touchLocationOne]; CGPoint delta = ccpSub(current, previous); self.position = ccpAdd(self.position, delta); } } - (void)touchEnded:(UITouch *)touch withEvent:(UIEvent *)event { _isDragging = NO; _isTouchBegin = NO; // Check if i need to bounce _touchLoc = [touch locationInNode:self]; } #pragma mark - Update - (void)update:(CCTime)delta { CGPoint position = self.position; float scale = self.scale; static float friction = 0.92f; //0.96f; if(_isDragging && !_isScaleBounce) { _velocity = ccp((position.x - _lastPos.x)/2, (position.y - _lastPos.y)/2); _lastPos = position; } else { _velocity = ccp(_velocity.x * friction, _velocity.y *friction); position = ccpAdd(position, _velocity); self.position = position; } if (_isScaleBounce && !_isTouchBegin) { float min = fabsf(self.scale - MIN_SCALE); float max = fabsf(self.scale - MAX_SCALE); int dif = max > min ? 1 : -1; if ((scale > MAX_SCALE - SCALE_BOUNCE_AREA) || (scale < MIN_SCALE + SCALE_BOUNCE_AREA)) { CGFloat newSscale = scale + dif * (delta * friction); [self scale:newSscale scaleCenter:_touchLoc]; } else { _isScaleBounce = NO; } } }

    Read the article

  • SQL Azure Security: DoS Part II

    - by Herve Roggero
    Ah! When you shoot yourself in the foot... a few times... it hurts! That's what I did on Sunday, to learn more about the behavior of the SQL Azure Denial of Service prevention feature. This article is a short follow-up to my last post on this feature. In this post, I will outline some of the lessons learned that were the result of testing the behavior of SQL Azure from two machines. From the standpoint of SQL Azure, they look like one machine since they are behind a NAT.

    All logins affected
    The first thing to note is that all the logins are affected. If you lock yourself out of a specific database, none of the logins will work on that database. In fact the database size becomes "--" in the SQL Azure Portal.

    Less than 100 sessions
    I was able to see 50+ sessions being made in SQL Azure (by looking at sys.dm_exec_sessions) before being locked out. So the DoS feature appears to be triggered in part by the number of open sessions. I could not determine whether the lockout is also triggered by the speed at which connection requests are made, however.

    Other databases unaffected
    This was interesting... the DoS feature works at the database level. Other databases were available for me to use.

    Just wait
    Initially I thought that going through SQL Azure and connecting from there would reset the database and allow me to connect again. Unfortunately this doesn't seem to be the case. You will have to wait. And the more you lock yourself out, the more you will have to wait... The first time the database became available again within 30 seconds or so; the second time within 2-3 minutes; and the third time... within 2-3 hours...

    Successful logins
    The DoS feature appears to engage only for valid logins. If you have a login failure, it doesn't seem to count. I ran a test with over 100 login failures without being locked out.

    Read the article

  • How to show pending messages using WLST?

    - by lmestre
    Here are the steps:

        1. . ./setDomainEnv.sh
        2. java weblogic.WLST
        3. connect('weblogic','welcome1','t3://localhost:7001')
        4. domainRuntime()
        5. cd('ServerRuntimes/MS1/JMSRuntime/MS1.jms/JMSServers/JMSServer1/Destinations/JMSModule1!Queue1')
        6. cursor1=cmo.getMessages('true',9999999,10)   ** the arguments are String(selector), Integer(timeout), Integer(state)
        7. msgs = cmo.getNext(cursor1, 10)              ** this step gets 10 messages; call cmo.getNext(cursor1, 10) again to get the next 10
        8. print(msgs)

    My assumption is that you had created:
        a. A Managed Server MS1.
        b. A JMS Server JMSServer1.
        c. A module called JMSModule1.
        d. Inside JMSModule1, a Queue called Queue1.

    If you read my previous post, "How to get Messages Pending Count from a Queue using WLST?" (https://blogs.oracle.com/LuzMestre/entry/how_to_get_messages_pending), you can see that both are very similar. Sometimes it is difficult to find a WLST script sample, but you can use the ls() function to discover other functionality you don't have sample code for. Up to step 5 there is nothing new compared to my previous post:

        5. cd('ServerRuntimes/MS1/JMSRuntime/MS1.jms/JMSServers/JMSServer1/Destinations/JMSModule1!Queue1')
        6. ls()

    You will see MessagesPendingCount and getMessages, along with a lot of other functionality available on this Queue. For example, you can see:

        -r-x   getMessages   String : String(selector),Integer(timeout),Integer(state)

    Here you can check the complete MBean Reference: http://docs.oracle.com/cd/E23943_01/apirefs.1111/e13951/core/index.html (see JMSDestinationRuntimeMBean). Enjoy!
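
    Put together as a single WLST script, the same calls look like this (the server URL, credentials and JMS path are the example values used in the steps above; the final disconnect() is added here for completeness):

        # pending_messages.py - run with: java weblogic.WLST pending_messages.py
        # Connects to the admin server and prints the first batch of messages on Queue1.
        connect('weblogic', 'welcome1', 't3://localhost:7001')
        domainRuntime()
        cd('ServerRuntimes/MS1/JMSRuntime/MS1.jms/JMSServers/JMSServer1/Destinations/JMSModule1!Queue1')

        # getMessages(selector, timeout, state); the 'true' selector matches every message
        cursor1 = cmo.getMessages('true', 9999999, 10)
        msgs = cmo.getNext(cursor1, 10)   # fetch 10 messages; call again for the next 10
        print(msgs)
        disconnect()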

    Read the article

  • How are Reads Distributed in a Workload

    - by Bill Graziano
    People have uploaded nearly one million rows of trace data to TraceTune. That's enough data to start to look at the results in aggregate. The first thing I want to look at is logical reads. This is the easiest metric to identify and fix.

    When you upload a trace, I rank each statement based on the total number of logical reads. I also calculate each statement's percentage of the total logical reads. I do the same thing for CPU, duration and logical writes. When you view a statement you can see all those details; for example, one single statement consumed 61.4% of the total logical reads on the system while we were tracing it.

    I also wanted to see the distribution of reads across statements. On average, the highest ranked statement consumed just under 50% of the reads on the system. When I tune a system, I'm usually starting in one of two modes: this "piece" is slow, or the whole system is slow. If a given piece (screen, report, query, etc.) is slow you can usually find the specific statements behind it and tune them. You can make that individual piece faster, but you may not affect the whole system.

    When you're trying to speed up an entire server you need to identify those queries that are using the most disk resources in aggregate. Fixing those will make them faster and it will leave more disk throughput for the rest of the queries.

    Here are some of the things I've learned querying this data:
      The highest ranked query averages just under 50% of the total reads on the system.
      The top 3 ranked queries average 73% of the total reads on the system.
      The top 10 ranked queries average 91% of the total reads on the system.
    Remember these are averages across all the traces that have been uploaded. And I'm guessing that people mainly upload traces where there are performance problems, so your mileage may vary.

    I also learned that slow queries aren't the problem. Before I wrote ClearTrace I used to identify queries by filtering on high logical reads using Profiler. That picked out individual queries, but those rarely ran often enough to put a large load on the system. If you look at the execution count by rank, you'd see that the highest ranked queries also have the highest execution counts; that distribution looks very similar to the reads distribution, only flatter. These queries don't look that bad individually but run so often that they hog the disk capacity.

    The take-away from all this is that you really should be tuning the top 10 queries if you want to make your system faster. Tuning individually slow queries will help those specific queries but won't have much impact on the system as a whole.
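
    To make the ranking concrete, here is a small sketch of the kind of aggregation described above (the input format and field names are invented for the example; it is not TraceTune's actual implementation):

        from collections import Counter

        def rank_by_reads(trace_rows):
            """trace_rows: iterable of (statement_text, logical_reads) pairs.

            Returns statements ranked by total logical reads, with each one's
            percentage of the overall total - the two numbers discussed above."""
            totals = Counter()
            for statement, reads in trace_rows:
                totals[statement] += reads
            grand_total = sum(totals.values()) or 1
            return [(stmt, reads, 100.0 * reads / grand_total)
                    for stmt, reads in totals.most_common()]

        # Example:
        rows = [("SELECT A", 5000), ("SELECT B", 3000), ("SELECT A", 2000)]
        for stmt, reads, pct in rank_by_reads(rows):
            print(f"{stmt}: {reads} reads ({pct:.1f}% of total)")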

    Read the article

  • How to refactor my design, if it seems to require multiple inheritance?

    - by Omega
    Recently I made a question about Java classes implementing methods from two sources (kinda like multiple inheritance). However, it was pointed out that this sort of need may be a sign of a design flaw. Hence, it is probably better to address my current design rather than trying to simulate multiple inheritance. Before tackling the actual problem, some background info about a particular mechanic in this framework: It is a simple game development framework. Several components allocate some memory (like pixel data), and it is necessary to get rid of it as soon as you don't need it. Sprites are an example of this. Anyway, I decided to implement something ala Manual-Reference-Counting from Objective-C. Certain classes, like Sprites, contain an internal counter, which is increased when you call retain(), and decreased on release(). Thus the Resource abstract class was created. Any subclass of this will obtain the retain() and release() implementations for free. When its count hits 0 (nobody is using this class), it will call the destroy() method. The subclass needs only to implement destroy(). This is because I don't want to rely on the Garbage Collector to get rid of unused pixel data. Game objects are all subclasses of the Node class - which is the main construction block, as it provides info such as position, size, rotation, etc. See, two classes are used often in my game. Sprites and Labels. Ah... but wait. Sprites contain pixel data, remember? And as such, they need to extend Resource. But this, of course, can't be done. Sprites ARE nodes, hence they must subclass Node. But heck, they are resources too. Why not making Resource an interface? Because I'd have to re-implement retain() and release(). I am avoiding this in virtue of not writing the same code over and over (remember that there are multiple classes that need this memory-management system). Why not composition? Because I'd still have to implement methods in Sprite (and similar classes) that essentially call the methods of Resource. I'd still be writing the same code over and over! What is your advice in this situation, then?

    Read the article

  • Does it make the game more fun when the user is forced to progress thru the levels sequentially rather than letting them pick and play?

    - by BeachRunnerJoe
    Hello. For the first time in my game, I'm stuck with a real design dilemma. I guess that's a good thing ;) I'm building a word puzzle game that has five levels, each with 30 puzzles. Currently, the user has to solve one puzzle at a time before moving to the next. However, I'm finding the user occasionally gets stuck on a puzzle, at which point they can no longer play until they solve it. This is obviously bad because many people will just quit playing the game and delete the app since they get frustrated and can't play any other puzzles until the current puzzle is solved. The only elegant solution I can find to helping the player get unstuck is changing the design of the game to allow the users to pick any puzzle to play at any time. This way, if they get stuck, they can come back to it later and at least they have other puzzles to play in the meantime. It's my opinion, however, that this new flow design doesn't make the game as fun as the original flow design where the player has to complete a puzzle before moving to the next. To me, it's like anything else, when you only have one of something, it's more enjoyable, but when you have 30 of something, it's far less enjoyable. In fact, when I present the user with 30 puzzles to choose from that they need to solve before unlocking the next level, it almost seems as tho I'm making them feel like it's work they have to do. I even had a tester voluntarily tell me that being forced to complete a puzzle before moving to the next is more motivating. My questions are... Do you agree/disagree? Do you have any suggestions for how I can help the player get unstuck? Thanks so much in advance for your thoughts! EDIT: I should mention that I've already considered a few other solutions to helping the user get unstuck, but none of them seem like good ideas. They are... Add more hints: Currently, the user gets two hints per puzzle. If I increase the hint count, it only makes the game more easy and still leaves the possibility of the user getting stuck. Add a "Show Solution" button: This seems like a bad idea because it's my opinion this takes the fun out of the game for many people who would probably otherwise solve the puzzle if they didn't have the quick option to see the solution.

    Read the article

  • Profiling Startup Of VS2012 &ndash; Ants Profiler

    - by Alois Kraus
    I just downloaded ANTS Profiler 7.4 to check how fast it is and how deep I can analyze the startup of Visual Studio 2012. The Pro version which is useful does cost 445€ which is ok. To measure a complex system I decided to simply profile VS2012 (Update 1) on my older Intel 6600 2,4GHz with 3 GB RAM and a 32 bit Windows 7. Ants Profiler is really easy to use. So lets try it out. The Ants Profiler does want to start the profiled application by its own which seems to be rather common. I did choose Method Level timing of all managed methods. In the configuration menu I did want to get all call stacks to get full details. Once this is configured you are ready to go.   After that you can select the Method Grid to view Wall Clock Time in ms. I hate percentages which are on by default because I do want to look where absolute time is spent and not something else.   From the Method Grid I can drill down to see where time is spent in a nice and I can look at the decompiled methods where the time is spent. This does really look nice. But did you see the size of the scroll bar in the method grid? Although I wanted all call stacks I do get only about 4 pages of methods to drill down. From the scroll bar count I would guess that the profiler does show me about 150 methods for the complete VS startup. This is nonsense. I will never find a bottleneck in VS when I am presented only a fraction of the methods that were actually executed. I have also tried in the configuration window to also profile the extremely trivial functions but there was no noticeable difference. It seems that the Ants Profiler does filter away way too many details to be useful for bigger systems. If you want to optimize a CPU bound operation inside NUnit then Ants Profiler is with its line level timings a very nice tool to work with. But for bigger stuff it is certainly not usable. I also do not like that I must start the profiled application from the profiler UI. This makes it hard to profile processes which are started by some other process. Next: JetBrains dotTrace

    Read the article

  • The Internet Key Wave MW833UP is not recognized in Ubuntu

    - by gio900
    I can't use my Onda MW833UP... :( Any advice? Here is something that someone else may understand: ~$: lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 11.10 Release: 11.10 Codename: oneiric ~$: lsusb Bus 001 Device 005: ID 1ee8:0012 ~$: dmesg [ 22.709475] cdc_acm 1-1:1.0: ttyACM0: USB ACM device [ 22.714856] usbcore: registered new interface driver cdc_acm [ 22.714866] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters [ 23.520490] ieee80211 phy0: wl_ops_bss_info_changed: arp filtering: enabled true, count 1 (implement) [ 24.244530] usbcore: registered new interface driver usbserial [ 24.244575] USB Serial support registered for generic [ 24.244673] usbcore: registered new interface driver usbserial_generic [ 24.244681] usbserial: USB Serial Driver core [ 24.265879] USB Serial support registered for GSM modem (1-port) [ 24.285680] usbcore: registered new interface driver option [ 24.285691] option: v0.7.2:USB Driver for GSM modems [ 24.425878] EXT4-fs (sda9): re-mounted. Opts: errors=remount-ro,commit=600 [ 24.736540] EXT4-fs (sda8): re-mounted. Opts: commit=600 [ 35.705796] Easy slow down manager: checking for SABI support. [ 35.706002] Easy slow down manager: SABI is supported (f5189) [ 36.060099] usbcore: deregistering interface driver uvcvideo [ 139.508061] CE: hpet increased min_delta_ns to 20113 nsec [ 6798.378917] usb 1-1: USB disconnect, device number 5 [ 6809.108232] usb 1-1: new high speed USB device number 6 using ehci_hcd [ 6809.242692] scsi5 : usb-storage 1-1:1.0 [ 6810.241257] scsi 5:0:0:0: CD-ROM Onda Datacard CD-ROM 0001 PQ: 0 ANSI: 0 [ 6810.241841] scsi 5:0:0:1: Direct-Access Onda Storage 0001 PQ: 0 ANSI: 0 [ 6810.271410] sr0: scsi3-mmc drive: 0x/0x caddy [ 6810.272099] sr 5:0:0:0: Attached scsi CD-ROM sr0 [ 6810.272852] sr 5:0:0:0: Attached scsi generic sg1 type 5 [ 6810.279954] sd 5:0:0:1: [sdb] Attached SCSI removable disk [ 6810.281210] sd 5:0:0:1: Attached scsi generic sg2 type 0 [ 6810.380591] sr0: CDROM (ioctl) error, command: Xpwrite, Read disk info 51 00 00 00 00 00 00 00 02 00 [ 6810.380617] sr: Sense Key : Hardware Error [current] [ 6810.380625] sr: Add. Sense: No additional sense information [ 6810.613937] sr0: CDROM (ioctl) error, command: Xpwrite, Read disk info 51 00 00 00 00 00 00 00 02 00 [ 6810.613972] sr: Sense Key : Hardware Error [current] [ 6810.613984] sr: Add. Sense: No additional sense information [ 6810.673716] usb 1-1: USB disconnect, device number 6 [ 6815.572142] usb 1-1: new high speed USB device number 7 using ehci_hcd [ 6815.706828] cdc_acm 1-1:1.0: ttyACM0: USB ACM device The last 3 lines are where I inserted the Internet key, then reconnected it. usb-device T: Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 7 Spd=480 MxCh= 0 D: Ver= 2.00 Cls=ef(misc ) Sub=02 Prot=01 MxPS=64 #Cfgs= 1 P: Vendor=1ee8 ProdID=0012 Rev=00.01 S: Manufacturer=Onda S: Product=MW833UP S: SerialNumber=9230B35D870F9CB7AE684EACC5C12BE5EC33B26E C: #Ifs= 2 Cfg#= 1 Atr=a0 MxPwr=500mA I: If#= 0 Alt= 0 #EPs= 1 Cls=02(commc) Sub=02 Prot=01 Driver=cdc_acm I: If#= 1 Alt= 0 #EPs= 2 Cls=0a(data ) Sub=00 Prot=00 Driver=cdc_acm Then there is /dev/ttyACM0. When the key is connected to the USB port, everything that will meant...

    Read the article

  • BEHIND THE SCENES AT A FLASH-MOB...

    - by OliviaOC
    Today, we interviewed Aarti, who recently organised a flash-mob for Oracle Campus, which you can see on our facebook page Hi Aarti, perhaps you could give us a quick introduction of yourself, and what you do at Oracle? I’ve been with Campus Recruitment for just over a year. I’ve been with Oracle for three years. I was keen to get into the campus role after having watched other colleagues working in campus and when the opportunity arrived I jumped at it. The journey has been fantastic thus far. I’m responsible for the GBU hiring at Oracle. Why did you record the flash-mob video - what were your goals? Flash-mobs were one thing that took off really big in India after the first one in Mumbai. It’s the hot thing in the student community at the moment. A better way to reach out and connect with students. I think that it is also a good way to demonstrate our openness and culture at Oracle – demonstrate that we are very flexible and that we have a cool culture. I knew the video could be shared on our social media pages and reach out to a wider student community What was the preparation and rehearsal for the video like? When I decided to do the video, I had to decide who I would like to do the flash-mob. The new campus hires to Oracle would be ideal for this. We were 2 teams at 2 different locations and Each team took 2-3 songs and choreographed it themselves. Every day at 5pm, each team would meet up and every other weekend the whole group met. Practicing went on for about a month like this. How was the video received by participants and by students on the University campus? The event was well received. We did it during the lunch break at the University so that there was a large presence of students around while the flash mob took place. We set up about an hour beforehand to get everything ready. The break-bell sounded and the students came out, that’s when the flash-mob started. The students were pleasantly surprised that a company was doing this. They also recognised some of the participants involved as former graduates. Since the flash-mob and the video of it that you recorded, have you had much response due to it? We have, especially in the past two weeks. We went back to the college to make some hires. The flash-mob was still fresh in their minds and they knew well who Oracle was as a result. Would you like to repeat this kind of creative initiative again with the recruitment team? Yes, absolutely! I’m over the moon with the flash-mob. My mind is working overtime now with ideas about the next things to do!

    Read the article

  • How to get better at solving Dynamic programming problems

    - by newbie
    I recently came across this question: "You are given a boolean expression consisting of a string of the symbols 'true', 'false', 'and', 'or', and 'xor'. Count the number of ways to parenthesize the expression such that it will evaluate to true. For example, there is only 1 way to parenthesize 'true and false xor true' such that it evaluates to true."

    I knew it is a dynamic programming problem, so I tried to come up with a solution on my own, which is as follows. Suppose we have an expression A.B.C.....D where '.' represents any of the operations and, or, xor, and the capital letters represent true or false. Let's say the number of ways for this expression of size K to produce true is N. When a new boolean value E is added to this expression, there are two ways to parenthesize the new expression:

    1. ((A.B.C.....D).E), i.e. with all possible parenthesizations of A.B.C.....D we add E at the end.
    2. (A.B.C.(D.E)), i.e. evaluate D.E first and then find the number of ways this expression of size K can produce true.

    Suppose T[K] is the number of ways the expression of size K produces true. Then T[K] = val1 + val2 + val3, where val1, val2 and val3 are calculated as follows.

    1) When E is grouped with D:
       i) it does not change the value of D, or
       ii) it inverts the value of D.
       In the first case val1 = T[K] = N (as this reduces to the initial A.B.C.....D expression). In the second case, re-evaluate dp[K] with the value of D reversed, and that is val1.

    2) When E is grouped with the whole expression:
       // val2 counts the 'true' results E produces with expressions which gave 'true' among all parenthesized instances of A.B.C.......D
       i) if true.E = true then val2 = N
       ii) if true.E = false then val2 = 0
       // val3 counts the 'true' results E produces with expressions which gave 'false' among all parenthesized instances of A.B.C.......D
       iii) if false.E = true then val3 = (2^(K-2) - N) = M, i.e. the number of ways the expression of size K produces false [2^(K-2) is the number of ways to parenthesize an expression of size K]
       iv) if false.E = false then val3 = 0

    This is the basic idea I had in mind, but when I checked the solution at http://people.csail.mit.edu/bdean/6.046/dp/dp_9.swf the approach there was completely different. Can someone tell me what I am doing wrong, and how I can get better at solving DP so that I can come up with solutions like the one given above myself? Thanks in advance.
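    For comparison, here is a minimal sketch of the usual interval DP for this problem: for every sub-expression spanning operands i..j, count how many parenthesizations evaluate to true and how many to false, and combine the two halves at every operator. The class and array names below are illustrative, not taken from the linked solution.

    // Count parenthesizations of a boolean expression that evaluate to true.
    // symbols[] holds the operands ('T'/'F'), ops[] the operators ('&','|','^').
    public class BooleanParenthesization {
        static long countTrue(char[] symbols, char[] ops) {
            int n = symbols.length;
            long[][] T = new long[n][n]; // ways sub-expression i..j evaluates to true
            long[][] F = new long[n][n]; // ways sub-expression i..j evaluates to false
            for (int i = 0; i < n; i++) {
                T[i][i] = symbols[i] == 'T' ? 1 : 0;
                F[i][i] = symbols[i] == 'F' ? 1 : 0;
            }
            for (int len = 2; len <= n; len++) {
                for (int i = 0; i + len - 1 < n; i++) {
                    int j = i + len - 1;
                    for (int k = i; k < j; k++) { // split at the operator between operand k and k+1
                        long lt = T[i][k], lf = F[i][k];
                        long rt = T[k + 1][j], rf = F[k + 1][j];
                        long total = (lt + lf) * (rt + rf);
                        if (ops[k] == '&') {
                            T[i][j] += lt * rt;
                            F[i][j] += total - lt * rt;
                        } else if (ops[k] == '|') {
                            F[i][j] += lf * rf;
                            T[i][j] += total - lf * rf;
                        } else { // '^'
                            T[i][j] += lt * rf + lf * rt;
                            F[i][j] += lt * rt + lf * rf;
                        }
                    }
                }
            }
            return T[0][n - 1];
        }

        public static void main(String[] args) {
            // models "true and false xor true": operands T, F, T; operators &, ^
            System.out.println(countTrue(new char[] {'T', 'F', 'T'}, new char[] {'&', '^'}));
        }
    }

    The structural difference from the prefix idea above is that this recurrence splits inside every interval, so knowing only how many parenthesizations of a prefix evaluate to true is not enough state to extend the answer by one operand.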

    Read the article

  • SEO effects of intermix of WP blog, custom PHP site and FB app game

    - by melbournetechlover
    We're a Melbourne tech company in the process of building a custom site in PHP. We plan to launch a "pre-launch" page which is also custom coded (CSS3 on the Twitter Bootstrap framework, an HTML5 front end, and a PHP back end). On that site will be a link to a blog - the idea behind this is to build up ranking for a variety of relevant keywords prior to the full site going live (the majority of the site is a members-only community anyway, so the blog is really the main way we'll be able to execute on-site SEO). Ideally, we would like to install WordPress in a subdirectory on our servers and just customise the header to look the same as the landing page of the website. But some questions and concerns:

    Is there any detrimental effect on SEO efforts in having two separate systems (one custom PHP, the other an installation of WordPress) to manage the blog vs. the rest of the site?

    Are there any benefits or detriments to installing on a subdomain such as blog.sitename.com vs. sitename.com/blog? My preference would be sitename.com/blog as it feels neater - but I'm open to suggestions based on knowledge of Google's preferences.

    Separately, we are building a Facebook app which is under another site name. Because we are launching this app first, from an SEO perspective would it actually be better to run it from a subdomain on the main site - e.g. gamename.mainsitename.com instead of app.gamename.com? Currently we have it on app.gamename.com, but if there are SEO benefits to moving it to the other domain and server then we'll do it. Basically we don't want to have our SEO efforts divided - will Google's algorithms prefer two sites heavily referring traffic to each other, or is it better to focus our efforts on one? I guess that's the crux of the issue. The other one is: does Google care about traffic accessing a page built for the Facebook app iframe - does that count toward rankings?

    Sorry, I hope these questions aren't too complex - we're in the tech world every day and still can't seem to find a good answer to these ones... hence I'm taking to the forums! Free beer for whoever can give me a solid answer!
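    On the mechanics of the subdirectory option: serving a WordPress install from /blog next to custom PHP code is mostly a routing question. A minimal Nginx sketch, assuming WordPress lives in a blog/ folder under the same document root and PHP is handled over FastCGI (the socket path is a placeholder; an Apache setup would use a .htaccess rewrite instead):

    location /blog/ {
        # send pretty permalinks to WordPress's front controller
        try_files $uri $uri/ /blog/index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;  # placeholder socket path
    }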

    Read the article

  • Identifying which pattern fits better.

    - by Daniel Grillo
    I'm developing software to program a device. I have commands like Reset, Read_Version, Read_memory, Write_memory and Erase_memory. Reset and Read_Version are fixed; they don't need parameters. Read_memory and Erase_memory need the same parameters, Length and Address. Write_memory needs Length, Address and Data. For each command I have the same steps in sequence, something like sendCommand, waitForResponse, treatResponse. I'm having difficulty identifying which pattern I should use: Factory, Template Method, Strategy, or some other pattern.

    Edit: I'll try to explain better, taking into account the given comments and answers. I've already written this software and now I'm refactoring it. I'm trying to use patterns even where it is not strictly necessary, because I'm taking advantage of this little program to learn about some patterns. That said, I think that one (or more) pattern fits here and could improve my code.

    When I want to read the version of the software on my device, I don't have to assemble the command with parameters - it is fixed. So I send it, wait for the response, and if there is a response, treat (or parse) it and return.

    To read a portion of the memory (maximum of 256 bytes), I have to assemble the command using the parameters Length and Address. Then I send it, wait for the response, and if there is a response, treat (or parse) it and return.

    To write a portion of the memory (maximum of 256 bytes), I have to assemble the command using the parameters Length, Address and Data. Then I send it, wait for the response, and if there is a response, treat (or parse) it and return.

    I think I could use Template Method because I have almost the same algorithm for all of them. But the problem is that some commands are fixed while others have 2 or 3 parameters. I think the parameters should be passed in the constructor of the class, but then each class will have a constructor overriding the abstract class constructor. Is this a problem for Template Method? Should I use another pattern?
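    A minimal Java sketch of the Template Method variant described above; all class names, opcodes and the wire format are made up for illustration and are not taken from the real device protocol:

    // Shared collaborators, kept minimal so the sketch is self-contained.
    interface Device {
        void sendCommand(byte[] frame);
        byte[] waitForResponse();
    }

    class Response {
        final byte[] payload;
        Response(byte[] payload) { this.payload = payload; }
    }

    // The template: the invariant sendCommand / waitForResponse / treatResponse sequence.
    abstract class DeviceCommand {
        protected final Device device;
        protected DeviceCommand(Device device) { this.device = device; }

        public final Response execute() {
            device.sendCommand(assemble());      // varies: fixed frame or built from parameters
            byte[] raw = device.waitForResponse();
            return parse(raw);                   // varies: per-command response handling
        }

        protected abstract byte[] assemble();
        protected abstract Response parse(byte[] raw);
    }

    // A fixed command: no parameters, so only the Device goes into the constructor.
    class ReadVersionCommand extends DeviceCommand {
        ReadVersionCommand(Device device) { super(device); }
        @Override protected byte[] assemble() { return new byte[] { 0x01 }; } // illustrative opcode
        @Override protected Response parse(byte[] raw) { return new Response(raw); }
    }

    // A parameterised command: Length and Address arrive through its own constructor.
    class ReadMemoryCommand extends DeviceCommand {
        private final int address;
        private final int length;
        ReadMemoryCommand(Device device, int address, int length) {
            super(device);
            this.address = address;
            this.length = length;
        }
        @Override protected byte[] assemble() {
            // illustrative wire format: opcode, address high/low, length
            return new byte[] { 0x03, (byte) (address >> 8), (byte) address, (byte) length };
        }
        @Override protected Response parse(byte[] raw) { return new Response(raw); }
    }

    Each subclass takes exactly the parameters it needs in its own constructor; that is not a problem for Template Method, because the template only fixes the algorithm (execute), not how the command object is built. A simple factory can still sit in front of these constructors if the calling code should not know the concrete classes.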

    Read the article

  • PHP-FPM stops responding and dies [migrated]

    - by user12361
    I'm running Drupal 6 with Nginx 1.5.1 and PHP-FPM (PHP 5.3.26) on a 1GB single-core VPS with 3GB of swap space on SSD storage. I just switched from shared hosting to this unmanaged VPS because my site was getting too heavy, so I'm still learning the ropes. I have moderately high traffic; I don't monitor it closely, but Google AdSense usually records close to 30K page views/day. I usually have 50 to 80 authenticated users logged in and a few hundred more anonymous users hitting the Boost static HTML cache at any given moment.

    The problem I'm having is that PHP-FPM frequently stops responding, resulting in Nginx 502 or 504 errors. I swear I have read every page on the internet about this issue, which seems fairly common, and I've tried endless combinations of configurations, but I can't find a good solution. After restarting Nginx and PHP-FPM, the site runs really fast for a while, and then without warning it simply stops responding. I get a white screen while the browser waits on the server, and after about 30 seconds to a minute it throws an Nginx 502 or 504 error. Sometimes it runs well for 2 minutes, sometimes 5 minutes, sometimes 5 hours, but it always ends up hanging. When I find the server in this state, there is still plenty of free memory (500MB or more) and no major CPU usage, the control and worker PHP-FPM processes are still present, and the server is still pingable and usable via SSH. A reload of PHP-FPM via the init script revives it again. The hangups don't seem to correspond to the amount of traffic, because I observed this behavior consistently when I was testing this configuration on a development VPS with no traffic at all.

    I've been constantly tweaking the settings, but I can't definitively eliminate the problem. I set Nginx workers to just 1. In the PHP-FPM config I have tried all three of the process managers. "dynamic" is definitely the least reliable, consistently hanging up after only a few minutes. "static" has also been unreliable and unpredictable. The least buggy has been "ondemand", but even that is failing me, sometimes after as much as 12 to 24 hours, and I can't leave the server unattended because PHP-FPM dies and never comes back on its own. I tried adjusting the pm.max_children value from as low as 3 to as high as 50; it doesn't make a lot of difference, but I currently have it at 10. The same goes for the spare-servers values. I have also set pm.max_requests anywhere from 30 to unlimited, and it doesn't seem to make a difference.

    According to the logs, the PHP-FPM processes are not exiting with SIGSEGV or SIGBUS, but rather with SIGTERM. I get a lot of lines like:

    WARNING: [pool www] child 3739, script '/var/www/drupal6/index.php' (request: "GET /index.php") execution timed out (38.739494 sec), terminating

    and:

    WARNING: [pool www] child 3738 exited on signal 15 (SIGTERM) after 50.004380 seconds from start

    I actually found several articles that recommend doing a graceful reload of PHP-FPM via cron every few minutes or hours to circumvent this issue, so that's what I did: "/etc/init.d/php-fpm reload" every 5 minutes. So far it's keeping the lights on, but it feels like a dreadful hack. Is PHP-FPM really that unreliable? Is there anything else I can do? Thanks a lot!
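    For reference, one possible pool sketch along the lines of the "ondemand" setup described above; the numbers are illustrative starting points for a 1GB VPS rather than measured tuning, and the slowlog path is a placeholder:

    [www]
    pm = ondemand
    pm.max_children = 8
    pm.process_idle_timeout = 10s
    ; recycle workers periodically to contain slow leaks
    pm.max_requests = 500

    ; give long Drupal requests a chance to finish instead of being SIGTERM-ed,
    ; and log anything slower than 10s so the real culprit shows up
    request_terminate_timeout = 300s
    request_slowlog_timeout = 10s
    slowlog = /var/log/php-fpm/www-slow.log

    The "execution timed out ... terminating" warnings in the log come from request_terminate_timeout, so whatever value is chosen there should be kept in line with PHP's max_execution_time and Nginx's fastcgi_read_timeout; otherwise Nginx can still return 504s while a legitimate long request is running.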

    Read the article

  • Creating an object that is ready to be used & unset properties - with IoC

    - by GetFuzzy
    I have a question regarding the specifics of object creation and the usage of properties. A best practice is to put all the properties into a state such that the object is useful when it's created; object constructors help ensure that required dependencies are created. I've found myself following a pattern lately, and then questioning its appropriateness. The pattern looks like this:

    public class ThingProcesser
    {
        public List<Thing> CalculatedThings { get; set; }

        public ThingProcesser()
        {
            CalculatedThings = new List<Thing>();
        }

        public double FindCertainThing()
        {
            CheckForException();
            foreach (var thing in CalculatedThings)
            {
                // do some stuff with things...
            }
        }

        public double FindOtherThing()
        {
            CheckForException();
            foreach (var thing in CalculatedThings)
            {
                // do some stuff with things...
            }
        }

        private void CheckForException()
        {
            if (CalculatedThings.Count < 2)
                throw new InvalidOperationException("Calculated things must have more than 2 items");
        }
    }

    The list of items is not being changed, just looked through by the methods. There are several methods on the class, and to avoid having to pass the list of things to each function as a method parameter, I set it once on the class. While this works, does it violate the principle of least astonishment?

    Since starting to use IoC I find myself not sticking things into the constructor, to avoid having to use a factory pattern. For example, I can argue with myself and say that the ThingProcessor really needs a List<Thing> to work, so the object should be constructed like this:

    public class ThingProcesser
    {
        public List<Thing> CalculatedThings { get; set; }

        public ThingProcesser(List<Thing> calculatedThings)
        {
            CalculatedThings = calculatedThings;
        }
    }

    However, if I did this, it would complicate things for IoC, and this scenario hardly seems appropriate for something like the factory pattern. So, in summary: are there some good guidelines for when something should be part of the object state vs. passed as a method parameter? When using IoC, is the factory pattern the best way to deal with objects that need to be created with state? If something has to be passed to multiple methods in a class, does that make it a good candidate to be part of the object's state?
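    One common middle ground, sketched below in Java rather than the C# above (the names mirror the example and the implementation is purely illustrative), is to keep the required list as constructor state so the invariant is checked exactly once, and to have the IoC container inject a small hand-written factory rather than the processor itself; the caller then supplies the runtime data at the point of use.

    import java.util.List;

    class Thing {
        final double value;
        Thing(double value) { this.value = value; }
    }

    class ThingProcessor {
        private final List<Thing> calculatedThings;   // required state, supplied once at construction

        ThingProcessor(List<Thing> calculatedThings) {
            if (calculatedThings.size() < 2) {
                throw new IllegalStateException("calculatedThings must contain at least 2 items");
            }
            this.calculatedThings = calculatedThings;
        }

        double findCertainThing() {
            // stand-in for "do some stuff with things..."
            return calculatedThings.stream().mapToDouble(t -> t.value).max().orElse(0.0);
        }
    }

    // The container only needs to know about the factory; the runtime data
    // (the list of things) is supplied by the caller at the point of use.
    class ThingProcessorFactory {
        ThingProcessor create(List<Thing> calculatedThings) {
            return new ThingProcessor(calculatedThings);
        }
    }

    This keeps configuration-style dependencies in the container's hands and runtime data in the caller's hands, which is the usual rule of thumb for the state-versus-parameter question.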

    Read the article

< Previous Page | 437 438 439 440 441 442 443 444 445 446 447 448  | Next Page >