Search Results

Search found 10556 results on 423 pages for 'practical approach'.


  • Usability: Save changes using "Apply" button or after every single change?

    - by mr.b
    I am interested in hearing the opinions and experiences of fellow developers on the topic of designing user interfaces, usability- AND maintainability-wise. A common approach is to let users tweak options and, once the form gets "dirty", enable an "Apply" button, so the user can back out by pressing Cancel. This is the most common approach on the Windows platform (I believe the MS usability guidelines say to do so as well). Another way is to apply changes as soon as each single change is made. For example, the user checks some checkbox, and the change is applied; the user changes the value of some text box, and the change is applied after the box loses focus; etc. You get the point. This approach is most common on Mac OS X. Regardless of my personal opinion (which is that Apple is better at usability, but the software I usually write targets Windows users), what do you people think?

    Read the article

  • Razor support of generic extension methods

    - by Brian
    Hello. With regard to the Razor view engine, say I want to render Html.TextBoxFor<SomeModel>(i => i.Name). The inline syntax doesn't seem to work: @Html.TextBoxFor<SomeModel>(i => i.Name) fails because Razor interprets the generic argument as an HTML tag. I could use a code-block approach, but then what's the best way to output the content? The method returns an HTML string; do I Response.Write it, is there a syntax for it, or what's the right approach? Thanks.
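    For what it's worth, a sketch of the usual workaround, assuming an ASP.NET MVC view strongly typed to SomeModel with a string Name property: wrapping the call in @( ... ) makes it an explicit code expression, so the angle brackets are no longer parsed as markup, and the returned MvcHtmlString is emitted directly, with no Response.Write needed. (Note that TextBoxFor actually declares two type parameters, model and property, so a single explicit argument won't compile anyway; usually you let the compiler infer both.)

        @model SomeModel

        @* Explicit code expression: the parentheses keep Razor from reading
           the generic argument list as an HTML tag. Both type arguments are
           supplied here, assuming Name is a string. *@
        @(Html.TextBoxFor<SomeModel, string>(i => i.Name))

        @* In a strongly typed view, inference makes them unnecessary: *@
        @Html.TextBoxFor(i => i.Name)

        @* Code-block variant: capture the result, then emit it with @ *@
        @{
            var textBox = Html.TextBoxFor(i => i.Name);
        }
        @textBox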

    Read the article

  • Where is the chink in Google Chrome's armor?

    - by kudlur
    While browsing with Chrome, I noticed that it responds extremely fast (in comparison with IE and Firefox on my laptop) in terms of rendering pages, including JavaScript-heavy sites like Gmail. This is what the Google book on Chrome has to say:

    - Tabs are hosted in processes rather than threads.
    - JavaScript is compiled using the V8 engine, as opposed to interpreted.
    - A new virtual machine supports JavaScript-heavy apps, introducing "hidden class transitions" and applying dynamic optimization to speed things up.
    - The inefficient "conservative garbage collection" scheme is replaced with a more precise garbage collection scheme.
    - Chrome introduces its own task scheduler and memory manager to manage the browser environment.

    All this sounds so familiar; Microsoft has been doing such things for a long time: the Windows OS, the C++ and C# compilers, the CLR, and so on. So why isn't Microsoft, or any other browser vendor, taking Chrome's approach? Is there a flaw in Chrome's approach? If not, was the rest of the browser vendor community caught unaware by Google's approach?

    Read the article

  • Count the number of ways in which a number 'A' can be broken into a sum of 'B' numbers such that all numbers are co-prime to 'C'

    - by rajneesh2k10
    I came across the solution of a problem which involves a dynamic-programming approach, solved using a three-dimensional matrix. Link to the actual problem: http://community.topcoder.com/stat?c=problem_statement&pm=12189&rd=15177 The solution to this problem is here, under MuddyRoad2: http://apps.topcoder.com/wiki/display/tc/SRM+555 In the last paragraph of the explanation, the author describes a dynamic-programming approach to count the number of ways in which a number 'A' can be broken into a sum of 'B' numbers (not necessarily different), such that every number is co-prime to 3 and the order in which these numbers appear does matter. I am not able to grasp that approach. Can anyone help me understand how DP is acting here? I can't understand what a state is here and how it is derived from the previous state.
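    For intuition, a simplified sketch (my own, not the MuddyRoad2 solution, and ignoring the modulus the real problem needs): take C = 3, so the allowed summands are the positive integers not divisible by 3. The state is (how many summands used, sum reached so far), and each state is derived from the previous one by choosing the value of the newest summand:

        // dp[i][j] = number of ordered ways to write j as a sum of i positive
        // integers, none divisible by 3. O(B * A^2) - the real solution is
        // cleverer, but the state/transition idea is the same.
        public class CoprimeCompositions {
            static long countWays(int A, int B) {
                long[][] dp = new long[B + 1][A + 1];
                dp[0][0] = 1;                           // one way: zero numbers, sum zero
                for (int i = 1; i <= B; i++)
                    for (int j = 1; j <= A; j++)
                        for (int k = 1; k <= j; k++)    // k = value of the i-th summand
                            if (k % 3 != 0)
                                dp[i][j] += dp[i - 1][j - k];
                return dp[B][A];
            }

            public static void main(String[] args) {
                // 4 = 1+1+2 = 1+2+1 = 2+1+1, so 3 ordered ways with B = 3
                System.out.println(countWays(4, 3));    // prints 3
            }
        }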

    Read the article

  • Should I be using abstract methods in this Python scenario?

    - by sfjedi
    I'm not sure my approach is good design and I'm hoping I can get a tip. I'm thinking somewhere along the lines of an abstract method, but in this case I want the method to be optional. This is how I'm doing it now:

        from pymel.core import *

        class A(object):
            def __init__(self, *args, **kwargs):
                # getattr with a default keeps the lookup from raising
                # AttributeError in subclasses that define no createDrivers
                if callable(getattr(self, "createDrivers", None)):
                    self._drivers = self.createDrivers(*args, **kwargs)
                    select(self._drivers)

        class B(A):
            def createDrivers(self, *args, **kwargs):
                c1 = circle(sweep=270)[0]
                c2 = circle(sweep=180)[0]
                return c1, c2

        b = B()

    In the above example, I'm just creating two circle arcs in PyMEL for Maya, but I fully intend on creating more subclasses that may or may not have a createDrivers method at all! So I want it to be optional, and I'm wondering if my approach is good - well, if my approach could be improved?
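    A conventional alternative (a sketch, not necessarily better for the Maya case): give the base class a do-nothing hook, so every subclass has the method, the callable check disappears, and overriding stays optional:

        class A(object):
            def __init__(self, *args, **kwargs):
                drivers = self.createDrivers(*args, **kwargs)
                if drivers:
                    self._drivers = drivers
                    select(self._drivers)   # pymel.core.select, as above

            def createDrivers(self, *args, **kwargs):
                """Default no-op hook; subclasses override it to supply drivers."""
                return None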

    Read the article

  • Cache bandwidth per tick for modern CPUs

    - by osgx
    Hello. What is the speed of cache access for modern CPUs? How many bytes can be read or written from memory every processor clock tick by an Intel P4, Core2, Core i7, or AMD part? Please answer with both theoretical numbers (width of the load/store units with their throughput in uops/tick) and practical numbers (even memcpy speed tests, or the STREAM benchmark), if any. PS: This question relates to the maximal rate of load/store instructions in assembler. There is a theoretical rate of loading (if every instruction per tick were the widest load), but the processor can deliver only part of that, a practical limit of loading.
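    For the practical side, a rough sketch of the kind of memcpy test meant here (illustrative only: pin the thread and vary the buffer size to land inside or outside each cache level; a buffer larger than the last-level cache measures memory rather than cache):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <time.h>

        /* Crude bandwidth probe: bytes copied per second for one buffer size.
           memcpy both reads and writes, so bus traffic is roughly double the
           reported figure. */
        int main(void)
        {
            size_t size = 256 * 1024 * 1024;   /* larger than any cache level */
            int iters = 10;
            char *src = malloc(size), *dst = malloc(size);
            memset(src, 1, size);              /* touch the pages up front */
            memset(dst, 1, size);

            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (int i = 0; i < iters; i++)
                memcpy(dst, src, size);
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("%.2f GB/s copied\n", (double)size * iters / secs / 1e9);
            free(src);
            free(dst);
            return 0;
        }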

    Read the article

  • Is it a problem if I query SQL Server 2005 and 2000 again and again?

    - by learner
    The Windows app I am constructing is for very low-end machines (Celeron with at most 128 MB RAM). Of the following two approaches, which one is best? (I don't want the application to become a memory hog on low-end machines.)

    Approach One: Query the database with Select GUID from Table1 where DateTime <= @givendate, which returns more than 300 thousand records (but only one field, i.e. GUID - 300 thousand GUIDs). Then run a loop over the GUIDs to carry out the next step of the process.

    Approach Two: Query the database with Select Top 1 GUID from Table1 where DateTime <= @givendate again and again until all 300 thousand records are done. It returns only one GUID at a time, and I can do my next step of the operation after each.

    Which approach do you suggest will use fewer memory resources? (Speed/performance is not the issue here.)
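    A third option worth considering (a sketch with hypothetical names; works against both SQL Server 2000 and 2005): run the single query once but stream the rows with a SqlDataReader, which keeps only the current row in memory, so you get neither a 300-thousand-item list nor 300 thousand round trips:

        using System;
        using System.Data.SqlClient;

        class GuidStreamer
        {
            static void ProcessGuids(string connectionString, DateTime givenDate)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT GUID FROM Table1 WHERE DateTime <= @givendate", conn))
                {
                    cmd.Parameters.AddWithValue("@givendate", givenDate);
                    conn.Open();
                    // ExecuteReader streams the result set; only the current
                    // row is buffered, not all 300,000 GUIDs
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Guid guid = reader.GetGuid(0);
                            ProcessNextStep(guid);   // the software's next operation
                        }
                    }
                }
            }

            static void ProcessNextStep(Guid guid) { /* ... */ }
        }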

    Read the article

  • Concurrent doubly-linked list (1 writer, n readers)

    - by Arne
    Hi guys, I am back in the field of programming for my diploma thesis now and stumbled over the following issue: I need to implement a thread-safe doubly-linked list for one thread writing the list at any position (delete, insert, mutate node data) and one to many threads traversing and reading the list. I am well aware that mutexes can be used to serialize access to the list; still, I presume that a naive lock around every write operation will be less than optimal. I am wondering whether there are better variants. (I am well aware that 'optimal' has not much practical meaning as long as no exact measurements/profiling are available, but this is an academic thesis after all.) I am very grateful for code samples as well as references to academic papers, granted these have at least a tiny bit of practical relevance. Thanks a lot
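    As a baseline for comparison, a sketch (here in Java; the idea carries over directly to pthreads rwlocks) of the standard step up from a single mutex: a read-write lock lets any number of readers traverse concurrently and only blocks them while the one writer is active. More scalable schemes (RCU-style copy-on-update, hazard pointers) avoid blocking readers entirely, but are considerably harder to get right:

        import java.util.concurrent.locks.ReadWriteLock;
        import java.util.concurrent.locks.ReentrantReadWriteLock;

        // Readers don't serialize against each other, only against the writer.
        class ConcurrentDoublyLinkedList<T> {
            private static final class Node<T> {
                T value;
                Node<T> prev, next;
                Node(T value) { this.value = value; }
            }

            private Node<T> head;
            private final ReadWriteLock lock = new ReentrantReadWriteLock();

            public void insertFront(T value) {
                lock.writeLock().lock();        // exclusive: blocks all readers
                try {
                    Node<T> n = new Node<>(value);
                    n.next = head;
                    if (head != null) head.prev = n;
                    head = n;
                } finally {
                    lock.writeLock().unlock();
                }
            }

            public boolean contains(T value) {
                lock.readLock().lock();         // shared: readers run concurrently
                try {
                    for (Node<T> n = head; n != null; n = n.next)
                        if (n.value.equals(value)) return true;
                    return false;
                } finally {
                    lock.readLock().unlock();
                }
            }
        }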

    Read the article

  • Usage examples of binary search

    - by python dude
    I just realized that in my 4+ years of Java programming (mostly desktop apps) I have never used the binary search methods in the Arrays class for anything practical. Not even once. Some reasons I can think of:

    - 100% of the time you can get away with linear search, maps, or something else that isn't binary search.
    - The incoming data is almost never sorted, and making it sorted requires an extra sorting step.

    So I wonder if it's just me, or do a lot of people never use binary search? And what are some good, practical usage examples of binary search?
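    Two sketches of where Arrays.binarySearch earns its keep (toy data): repeated membership tests against data you sort once, and the insertion-point trick encoded in the negative return value:

        import java.util.Arrays;

        public class BinarySearchDemo {
            public static void main(String[] args) {
                int[] sortedIds = {3, 8, 15, 23, 42, 77};   // must be sorted first

                // 1. Repeated membership tests: sort once, then O(log n) per lookup
                System.out.println(Arrays.binarySearch(sortedIds, 23));   // 3 (the index)

                // 2. A miss returns -(insertionPoint) - 1, which tells you where
                //    the key would go - handy for nearest-neighbour lookups
                int r = Arrays.binarySearch(sortedIds, 20);               // -4
                System.out.println(-r - 1);                               // 3 (slot before 23)
            }
        }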

    Read the article

  • How does Array.ForEach() compare to standard for loop in C#?

    - by DaveN59
    I pine for the days when, as a C programmer, I could type: memset(byte_array, 0xFF, sizeof byte_array); and get a byte array filled with 0xFF bytes. So, I have been looking for a replacement for this: for (int i = 0; i < byteArray.Length; i++) { byteArray[i] = 0xFF; } Lately, I have been using some of the new C# features and have been using this approach instead: Array.ForEach<byte>(byteArray, b => b = 0xFF); Granted, the second approach seems cleaner and is easier on the eye, but how does its performance compare to the first approach? Am I introducing needless overhead by using lambdas and generics? Thanks, Dave
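    One caveat before benchmarking: Array.ForEach passes each element to the Action<T> by value, so b => b = 0xFF assigns to a local copy and leaves byteArray untouched. A small sketch contrasting the two:

        using System;

        class FillDemo
        {
            static void Main()
            {
                var byteArray = new byte[16];

                // The loop writes through the indexer, so it really mutates.
                for (int i = 0; i < byteArray.Length; i++)
                    byteArray[i] = 0xFF;

                // ForEach hands each element to the delegate by value;
                // assigning to 'b' changes a copy, not the array.
                Array.ForEach(byteArray, b => b = 0x00);

                Console.WriteLine(byteArray[0]);   // still 255
            }
        }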

    Read the article

  • Preview hidden options on web sites

    - by veili_13
    I want to have hidden option panels on my web site, previewed only when the user chooses. I want to animate them in from the left and right (to about 15% width). At the moment I have them present in my HTML but with width set to 0, and I animate it to 15%. These containers hold many options, which should disappear to the left/right (with a slide effect). But I am not so sure about keeping all of this in my HTML, so I am thinking of creating the panels with jQuery, with 0 width, animating each to its intended width and setting the other elements to 0 width. Is this approach good, or should I create them and hide the overflow so that I animate position instead? Or is there a better approach?
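    For what it's worth, a minimal sketch of the keep-it-in-the-HTML variant (hypothetical ids): a collapsed panel with hidden overflow costs nothing until it is animated, and you avoid rebuilding DOM in script:

        <!-- panel starts collapsed; overflow:hidden keeps contents from spilling -->
        <div id="left-panel" style="width: 0; overflow: hidden; float: left;">
            <!-- many options... -->
        </div>

        <script>
        $('#toggle-left').click(function () {
            var panel = $('#left-panel');
            // slide between 0 and 15% width; the contents move with the edge
            panel.animate({ width: panel.width() ? 0 : '15%' }, 300);
        });
        </script>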

    Read the article

  • Installing SharePoint 2010 and PowerPivot for SharePoint on Windows 7

    - by smisner
    Many people like me want (or need) to do their business intelligence development work on a laptop. As someone who frequently speaks at various events or teaches classes on all subjects related to the Microsoft business intelligence stack, I need a way to run multiple server products on my laptop with reasonable performance. Once upon a time, that requirement meant only that I had to load the current version of SQL Server and the client tools of choice. In today's post, I'll review my latest experience with trying to make the newly released Microsoft BI products work with a Windows 7 operating system.

    The entrance of Microsoft Office SharePoint Server 2007 into the BI stack complicated matters, and I started using Virtual Server to establish a "suitable" environment. As part of the team that delivered a lot of education as part of the Yukon pre-launch activities (that would be SQL Server 2005 for the uninitiated), I was working with four - yes, four - virtual servers. That was a pretty brutal workload for a 2GB laptop, which worked if I was very, very careful. It could also be a finicky and unreliable configuration, as I learned to my dismay at one TechEd session several years ago when I had to reboot a very carefully cached set of servers just minutes before my session started. Although it worked, it came back to life very, very slowly, much to the displeasure of the audience. They couldn't possibly have been less pleased than me.

    At that moment, I resolved to get the beefiest environment I could afford and consolidate to a single virtual server. Enter the 4GB 64-bit laptop to preserve my sanity and my livelihood. Likewise, for SQL Server 2008, I managed to keep everything within a single virtual server and I could function reasonably well with this approach.

    Now we have SQL Server 2008 R2 plus Office SharePoint Server 2010. That means a 64-bit operating system. Period. That means no more Virtual Server. That means I must use Hyper-V or another alternative. I've heard alternatives exist, but my few dabbles in this area did not yield positive results. It might have been just me having issues rather than any failure of those technologies to adequately support the requirements.

    My first run at working with the new BI stack configuration was to set up a 64-bit 4GB laptop with a dual-boot to run Windows Server 2008 R2 with Hyper-V. However, I was generally not happy with running Windows Server 2008 R2 on my laptop. For one, I couldn't put it into sleep mode, which is helpful if I want to prepare for a presentation beforehand and then walk to the podium without the need to hold my laptop in its open state along the way (my strategy at the TechEd session long, long ago). Secondly, it was finicky with projectors. I had issues from time to time, and while I always eventually got it to work, I didn't appreciate those nerve-wracking moments wondering whether this would be the time that it wouldn't.

    Somewhere along the way, I learned that it was possible to load SharePoint 2010 on Windows 7, which piqued my interest. I had just acquired a new laptop running Windows 7 64-bit, and thought surely running the BI stack natively on my laptop must be better than running Hyper-V. (I have not tried booting to a Hyper-V VHD yet, but that's on my list of things to try, so the jury of one is still out on this approach.) Recently, I had to build up a server with the RTM versions of SQL Server 2008 R2 and SharePoint Server 2010 and decided to follow suit on my Windows 7 Ultimate 64-bit laptop.
    The process is slightly different, but I'm happy to report that it IS possible, although I had some fits and starts along the way.

    DISCLAIMER: These products are NOT intended to be run in production mode on the Windows 7 operating system. The configuration described in this post is strictly for development or learning purposes and is not supported by Microsoft. If you have trouble, you will NOT get help from them. I might be able to help, but I provide no guarantees of my ability or availability to help.

    I won't provide the step-by-step instructions in this post, as there are other resources that provide those details, but I will provide an overview of my approach, point you to the relevant resources, describe some of the problems I encountered, and explain how I addressed those problems to achieve my desired goal.

    Because my goal was not simply to set up SharePoint Server 2010 on my laptop, but specifically PowerPivot for SharePoint, I started out by referring to the installation instructions at the PowerPivot-Info site, but mainly to confirm that I was performing steps in the proper sequence. I didn't perform the steps in Part 1, because those steps are applicable only to a server operating system, which I am not running on my laptop. The instructions in Part 2 won't work exactly as written for the same reason. Instead, I followed the instructions on MSDN, Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008. In general, I found the following differences in installation steps from the steps at PowerPivot-Info:

    - You must copy the SharePoint installation media to the local drive so that you can edit the config.xml to allow installation on a Windows client.
    - You also have to manually install the prerequisites. The instructions provide links to each item that you must manually install, along with a command-line instruction to execute which enables required Windows features.

    I will digress for a moment to save you some grief in the sequence of steps to perform. I discovered later that a missing step in the MSDN instructions is to install the November CTP Reporting Services add-in for SharePoint. When I went to test my SharePoint site (I believe I tested after I had a successful PowerPivot installation), I ran into the following error:

        Could not load file or assembly 'RSSharePointSoapProxy, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified.

    I was rather surprised that Reporting Services was required. Then I found an article by Alan le Marquand, Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010, that instructed readers to install the November add-in. My first reaction was, "Really?!?" But I confirmed it in another TechNet article on hardware and software requirements for SharePoint Server 2010. It doesn't refer explicitly to the November CTP, but following the link took me there. (Interestingly, I retested today and there's no longer any reference to the November CTP. Here's the link to download the latest and greatest Reporting Services Add-in for SharePoint Technologies 2010.)
    You don't need to download the add-in anymore if you're doing a regular server-based installation of SharePoint, because it installs as part of the prerequisites automatically.

    When it was time to start the installation of SharePoint, I deviated from the MSDN instructions and from the PowerPivot-Info instructions:

    - On the Choose the installation you want page of the installation wizard, I chose Server Farm.
    - On the Server Type page, I chose Complete.
    - At the end of the installation, I did not run the configuration wizard.

    Returning to the PowerPivot-Info instructions, I tried to follow the instructions in Part 3, which describe installing SQL Server 2008 R2 with the PowerPivot option. These instructions tell you to choose the New Server option on the Setup Role page where you add PowerPivot for SharePoint. However, I ran into problems with this approach and got installation errors at the end.

    It wasn't until much later, as I was investigating an error, that I encountered Dave Wickert's post explaining that installing PowerPivot for SharePoint on Windows 7 is unsupported. Uh oh. But he did want to hear about it if anyone succeeded, so I decided to take the plunge. Perseverance paid off, and I can happily inform Dave that it does work so far. I haven't tested absolutely everything with PowerPivot for SharePoint, but I have successfully deployed a workbook and viewed the PowerPivot Management Dashboard. I have not yet tested the data refresh feature, but I have it installed. Continue reading to see how I accomplished my objective.

    I uninstalled SQL Server 2008 R2 and started again. I had different problems which I don't recollect now. However, I uninstalled again, approached installation from a different angle, and my next attempt succeeded. The downside of this approach is that you must do all of the things yourself that are done automatically when you install PowerPivot as a new server. Here are the steps that I followed:

    - Install SQL Server 2008 R2 to get a database engine instance installed.
    - Run the SharePoint configuration wizard to set up the SharePoint databases.
    - In Central Administration, create a Web application using classic mode authentication, as per a TechNet article on PowerPivot Authentication and Authorization.
    - Then I followed the steps I found at How to: Install PowerPivot for SharePoint on an Existing SharePoint Server. Especially important to note: you must launch setup by using Run as administrator. I did not have to manually deploy the PowerPivot solution as the instructions specify, but it's good to know about this step because it tells you where to look in Central Administration to confirm a successful deployment.

    I did spot some incorrect steps in the instructions (at the time of this writing) in How To: Configure Stored Credentials for PowerPivot Data Refresh. Specifically, in the section entitled Step 1: Create a target application and set the credentials, both steps 10 and 12 are incorrect. They tell you to provide an actual Windows user name and password on the page where you are simply defining the prompts for your application in the Secure Store Service. To add the Windows user name and password that you want to associate with the application - after you have successfully created the target application - you select the target application and then click Set credentials in the ribbon.

    Lastly, I followed the instructions at How to: Install Office Data Connectivity Components on a PowerPivot server.
    However, I have yet to test this in my current environment.

    I did have several stops and starts throughout this process and edited those out to spare you from reading non-essential information. I believe the explanation I have provided here accurately reflects the steps I followed to produce a working configuration. If you follow these steps and get a different result, please let me know so that together we can work through the issue and correct these instructions. I'm sure there are many other folks in the Microsoft BI community that will appreciate the ability to set up the BI stack in a Windows 7 environment for development or learning purposes.

    Read the article

  • When should I use Areas in TFS instead of Team Projects

    - by Martin Hinshelwood
    Well, it depends….

    If you are a small company that creates a finite number of internal projects, then you will find it easier to create a single project for each of your products and have TFS do the heavy lifting with reporting, SharePoint sites and Version Control. But what if you are not…

    Update 9th March 2010: Michael Fourie gave me some feedback which I have integrated. Ed Blankenship via @edblankenship offered encouragement and a nice quote. Ewald Hofman gave me a couple of Cons, and maybe a few more soon. Ewald's company, Avanade, currently uses Areas, but it looks like the manual management is getting too much and the project is getting cluttered.

    What if you are likely to have hundreds of projects, possibly with a multitude of internal and external projects? You might have 1 project for a customer, or 10. This is the situation that most consultancies find themselves in, and thus they need a more sustainable and maintainable option. What I am advocating is that we should have one "Team Project" per customer, and use Areas to create "sub projects" within that single "Team Project".

    "What you describe is what we generally do internally and what we recommend. We make very heavy use of area path to categorize the work within a larger project." - Brian Harry, Microsoft Technical Fellow & Product Unit Manager for Team Foundation Server

    "We tend to use areas to segregate multiple projects in the same team project and it works well." - Tiago Pascoal, Visual Studio ALM MVP

    "In general, I believe this approach provides consistency [to multi-product engagements] and lowers the administration and maintenance costs. All good." - Michael Fourie, Visual Studio ALM MVP

    "@MrHinsh BTW, I'm very much a fan of very large, if not huge, team projects in TFS. Just FYI :) Use Areas & Iterations." - Ed Blankenship, Visual Studio ALM MVP

    This would mean that SSW would have a single Team Project called "SSW" that contains all of our internal projects, and consequently all of the Areas and Iterations move down one level in the hierarchy to accommodate this. Where we would have had "\SSW\Sprint 1" we now have "\SSW\SqlDeploy\Sprint1", with "SqlDeploy" being our internal project. At the moment SSW has over 70 internal projects and more than 170 total projects in TFS. This method has long-term benefits that help to simplify the support model for companies that often have limited internal support time and many projects. But there are implications, as TFS does not provide this model out-of-the-box. These implications stretch across Areas, Iterations, Queries, the Project Portal and Version Control.

    Michael made a good comment: "I agree with your approach, assuming that in a multi-product engagement with a client, they are happy to adopt the same process template across all products. If they are not, then it'll either be easy to convince them or there is a valid reason for having a different template." - Michael Fourie, Visual Studio ALM MVP

    At SSW we have a standard template that we use, and this is applied across the board, to all of our projects. We even apply any changes to the core process template to all of our existing projects as well. If you have multiple projects for the same clients on multiple templates and you want to keep it that way, then this approach will not work for you. However, if you want to standardise as we have at SSW, then this approach may benefit you as well.

    Implications around Areas

    Areas should be used for topological classification/isolation of work items.
    You can think of this as architecture areas, organisational areas or even the main features of your application. In our scenario there is an additional top-level item that represents the Project / Product that we want to chop our Team Project into.

    Figure: Creating a sub area to represent a product/project is easy.

        <teamproject>
        <teamproject>\<Functional Area/module whatever>

    Becomes:

        <teamproject>
        <teamproject>\<ProjectName>\
        <teamproject>\<ProjectName>\<Functional Area/module whatever>

    Implications around Iterations

    Iterations should be used for chronological classification/isolation of work items. This could include isolated time boxes, milestones or release timelines, and really depends on the logical flow of your project or projects. Due to the new level in Area we need to add the same level to Iteration. This is primarily because it is unlikely that the sprints in each of your projects/products will start and end at the same time. This is just a reality of managing multiple projects.

    Figure: Adding the same Area value to Iteration as the top level item adds flexibility to Iteration.

        <teamproject>\Sprint 1
    or
        <teamproject>\Release 1\Sprint 1

    Becomes:

        <teamproject>\<ProjectName>\Sprint 1
    or
        <teamproject>\<ProjectName>\Release 1\Sprint 1

    Implications around Queries

    Queries are used to filter your work items based on a specified level of granularity. There are a number of queries that are built into a project created using the MSF Agile 5.0 template, but we now have multiple projects, and it would be a pain to have to edit all of the work items every time we changed project; that would also only allow one team to work on one project at a time.

    Figure: The Queries that are created in a normal MSF Agile 5.0 project do not quite suit our new needs.

    In order for project contributors to be able to query based on their project, we need a couple of things. The first thing I did was to create an "_Area Template" folder that has a copy of the project layout with all the queries set up to filter based on the "_Area Template" Area and the "_Sprint template" you can see in the Area and Iteration views.

    Figure: The template is currently easy to drag and drop, but you then need to edit the queries to point at the right Area and Iteration. This needs a tool.

    I then created an "Areas" folder to hold all of the area-specific queries. So, when you go to create a new TFS sub-project you just drag "_Area Template" while holding "Ctrl" and drop it onto "Areas". There is a little setup here. That said, I managed it in around 10 minutes, which is not so bad, and I can imagine it being quite easy to build a tool to create these queries.

    Figure: These new queries can be configured in around 10 minutes, which includes setting up the Area and Iteration as well.

    Version Control

    What about your source code? Well, that is the easiest of the lot. Just create a sub folder for each of your projects/products.

    Figure: Creating sub folders in source control is as easy as "Right click | Create new folder".

        <teamproject>\DEV\Main\

    Becomes:

        <teamproject>\<ProjectName>\DEV\Main\

    Conclusion

    I think it is up to each company to make a call on how you want to configure your Team Projects, and it depends completely on how many projects/products you are going to have for each customer, including yourself. If we decide to utilise this route it will require some configuration to get our 170+ projects into this format, and I will probably be writing some tools to help.
    Pros

    - You only have one project to upgrade when a process template changes – After going through an upgrade of over 170 projects prior to the changes in the RC, I can tell you that that many projects is no fun.
    - Standardises your Process Template – You will always have the same process implementation across projects/products, without exception.
    - You get tighter control over the permissions – Yes, you can do this on a standard Team Project, but it gets a lot easier with practice.
    - You can "move" work items from one "product" to another – Have we not always wanted to do that?
    - You can rename your projects – Wahoo: everyone wants to do this, now you can.
    - One set of Reporting Services reports to manage – You set an area and iteration to run reports anyway, so you may as well set both.
    - Simplified check-in policies – There is only one set of check-in policies per client. This simplifies administration of policies.
    - Simplified alerts – As alerts are applied across multiple projects, this simplifies your alert rules per client.

    Cons

    All of these cons could be mitigated by a custom tool that helps automate creation of "sub-projects" within Team Projects. This custom tool could create areas, iterations, permissions, SharePoint sites and queries. It just does not exist yet :)

    - You need to configure the Areas and Iterations.
    - You need to configure the permissions.
    - You may need to configure sub sites for SharePoint (depends on your requirements) – If you have two projects/products in the same Team Project, then you will not see the burn down for each one out-of-the-box, but rather a cumulative for the Team Project. This is not really that much of a problem, as you would have to configure your burndown graphs for your current iteration anyway. Note: when you create a sub site of a TFS-linked portal it will inherit the settings of its parent site :) This is fantastic, as it means that you can easily create sub sites and then set the Area and Iteration path in each of the reports to be the correct one.
    - Every team wants their own customization (via Ewald Hofman) – Small teams of 2 persons and teams of 30 – or even outsourcing – need their own process; you cannot allow that, because everybody gets the same work item types. Note: luckily at SSW this is not a problem, as our template is standardised across all projects and customers.
    - Large list of builds (via Ewald Hofman) – As the build list in Team Explorer is just a flat list, it can get very cluttered. Note: I would mitigate this by removing any build that has not been run in over 30 days. The build template and workflow will still be available in version control, but it will clean the list.

    Feedback

    Now that I have explained this method, what do you think? What other pros and cons can you see? What do you think of this approach? Will you be using it? What tools would you like to support you?

    Read the article

  • Database model for keeping track of likes/shares/comments on blog posts over time

    - by gage
    My goal is to keep track of the popular posts on different blog sites based on social network activity at any given time. The goal is not to simply get the most popular now, but instead to find posts that are popular compared to other posts on the same blog.

    For example, I follow a tech blog, a sports blog, and a gossip blog. The tech blog gets waaay more readership than the other two blogs, so in raw numbers every post on the tech blog will always outnumber views on the other two. So let's say the average tech blog post gets 500 Facebook likes and the other two get an average of 50 likes per post. Then when there is a sports blog post with 200 fb likes and a gossip blog post with 300, while the tech blog posts today have 500 likes, I want to highlight the sports and gossip blog posts (more likes than average, versus the tech blog with a larger number of likes that is merely average for that blog).

    The approach I am thinking of taking is to make an entry in a database for each blog post. Every x minutes (say every 15 minutes) I will check how many likes/shares/comments an entry has received on all the social networks (Facebook, Twitter, Google+, LinkedIn). So over time there will be a history of likes for each blog post, e.g. for post 1234:

        after 15 min:  10 fb likes, 4 tweets, 6 g+
        after 30 min:  15 fb likes, 15 tweets, 10 g+
        ...
        after 48 hours: 200 fb likes, 25 tweets, 15 g+

    By keeping a history like this for each blog post, I can know the average number of likes/shares/tweets at any given time interval. So for example, if the average number of fb likes for all blog posts 48 hrs after posting is 50, and a particular post has 200, I can mark that as a popular post and feature/highlight it. A consideration in the design is to be able to easily query the values (likes/shares) for a specific time-frame, i.e. fb likes after 30 min or tweets after 24 hrs, in order to compute averages to compare against (or should averages be stored in their own table?). If this approach is flawed or could use improvement please let me know, but it is not my main question.

    My main question is: what should a database schema for storing this info look like? Assuming that the above approach is taken, I am trying to figure out what a database schema for storing the likes over time would look like. I am brand new to databases; in doing some basic reading I see that it is advisable to make a 3NF database. I have come up with the following possible schemas.

    Schema 1

        DB Popular Posts
        Table: Post
            post_id (primary key (pk))
            url
            title
        Table: Social Activity
            activity_id (pk)
            url (fk)
            type (i.e. facebook, twitter, g+)
            value
            timestamp

    This was my initial instinct (based on my very limited db knowledge). As far as I understand, this schema would be 3NF? I searched for designs of similar database models and found this question on Stack Overflow: http://stackoverflow.com/questions/11216080/data-structure-for-storing-height-and-weight-etc-over-time-for-multiple-users. The scenario in that question is similar (recording weight/height of users over time). Taking the accepted answer for that question and applying it to my model results in something like:

    Schema 2 (same as above, but break down the social activity into 2 tables)

        DB Popular Posts
        Table: Post
            post_id (pk)
            url
            title
        Table: Social Measurement
            measurement_id (pk)
            post_id (fk)
            timestamp
        Table: Social Stat
            stat_id (pk)
            measurement_id (fk)
            type (i.e. facebook, twitter, g+)
            value

    The advantage I see in Schema 2 is that I will likely want to access all the values for a given time, i.e. when making a measurement at 30 min after a post is published I will simultaneously check the number of fb likes, fb shares, fb comments, tweets, g+, LinkedIn. So with this schema it may be easier to get all stats for a measurement_id corresponding to a certain time, i.e. all social stats for post 1234 at time x.

    Another thought I had: since it doesn't make sense to compare the number of fb likes with the number of tweets or g+ shares, maybe it makes sense to separate each social measurement into its own table?

    Schema 3

        DB Popular Posts
        Table: Post
            post_id (pk)
            url
            title
        Table: fb_likes
            fb_like_id (pk)
            post_id (fk)
            timestamp
            value
        Table: fb_shares
            fb_shares_id (pk)
            post_id (fk)
            timestamp
            value
        Table: tweets
            tweets_id (pk)
            post_id (fk)
            timestamp
            value
        Table: google_plus
            google_plus_id (pk)
            post_id (fk)
            timestamp
            value

    As you can see, I am generally lost/unsure of what approach to take. I'm sure this is a typical type of database problem (storing measurements over time, e.g. temperature statistics) that must have a common solution. Is there a design pattern/model for this? Does it have a name? I tried searching for "database periodic data collection" or "database measurements over time" but didn't find anything specific. What would be an appropriate model to solve the needs of this problem?
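    For what it's worth, a sketch of Schema 2 in SQL (hypothetical names and types): one row per measurement event and one row per (measurement, network, metric) stat, so "all social stats for post 1234 at time x" is a single join, and the extra metric column folds likes/shares/comments in without Schema 3's table-per-network sprawl:

        -- Hypothetical DDL for Schema 2; names and types are illustrative
        CREATE TABLE post (
            post_id   INT PRIMARY KEY,
            url       VARCHAR(2048) NOT NULL,
            title     VARCHAR(512)  NOT NULL
        );

        CREATE TABLE social_measurement (
            measurement_id INT PRIMARY KEY,
            post_id        INT NOT NULL REFERENCES post (post_id),
            measured_at    TIMESTAMP NOT NULL
        );

        CREATE TABLE social_stat (
            stat_id        INT PRIMARY KEY,
            measurement_id INT NOT NULL REFERENCES social_measurement (measurement_id),
            network        VARCHAR(20) NOT NULL,   -- 'facebook', 'twitter', 'gplus'
            metric         VARCHAR(20) NOT NULL,   -- 'like', 'share', 'comment'
            value          INT NOT NULL
        );

        -- All social stats for post 1234 at one snapshot:
        SELECT s.network, s.metric, s.value
        FROM social_measurement m
        JOIN social_stat s ON s.measurement_id = m.measurement_id
        WHERE m.post_id = 1234
          AND m.measured_at = '2012-07-01 12:30:00';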

    Read the article

  • How to move the camera sideways in libgdx

    - by Bubblewrap
    I want to move the camera sideways (strafe/truck). Now, I had the following in mind, but it doesn't look like there are standard methods to achieve this in libgdx. If I want to move the camera sideways by x, I think I need to do the following:

    1. Create a Matrix4 mat.
    2. Determine the orthogonal vector v between camera.direction and camera.up.
    3. Translate mat by v*x.
    4. Multiply camera.position by mat.

    Will this approach do what I think it does, and is it a good way to do it? And how can I do this in libgdx? I get "stuck" at step 2, as in, I have not found any standard method in libgdx to calculate an orthogonal vector. EDIT: I think I can use camera.direction.crs(camera.up) to find v. Guess I can now try this approach tonight and see if it works.
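    A sketch of the vector-only route (assuming a recent libgdx with Vector3.scl): no Matrix4 is needed, because the cross product of direction and up is already the camera's right vector, and translating position along it strafes:

        import com.badlogic.gdx.graphics.Camera;
        import com.badlogic.gdx.math.Vector3;

        public final class CameraUtil {
            /** Move the camera sideways by amount world units (negative = left). */
            public static void strafe(Camera camera, float amount) {
                // right = direction x up; cpy() so camera.direction isn't overwritten
                Vector3 right = camera.direction.cpy().crs(camera.up).nor();
                camera.position.add(right.scl(amount));
                camera.update();   // recompute the combined view matrix
            }
        }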

    Read the article

  • June Oracle Technology Network NEW Member Benefits - books books and more books!!!

    - by Cassandra Clark
    As we mentioned a few posts ago, we are working to bring Oracle Technology Network members NEW benefits each month. Listed below are several discounts on technology books brought to you by Apress, Pearson, CRC Press and Packt Publishing. Happy reading!!!

    Apress Offers - Get 50% off the eBook below using promo code ORACLEJUNEJCCF.

    Pro ODP.NET for Oracle Database 11g, by Edmund T. Zehoo
    This book is a comprehensive and easy-to-understand guide for using the Oracle Data Provider (ODP) version 11g on the .NET Framework. It also outlines the core GoF (Gang of Four) design patterns and coding techniques employed to build and deploy high-impact mission-critical applications using advanced Oracle database features through the ODP.NET provider.

    Pearson Offers - Get 35% off all titles listed below using code OTNMEMBER.

    SOA Design Patterns | Thomas Erl | ISBN: 0136135161
    In cooperation with experts and practitioners throughout the SOA community, best-selling author Thomas Erl brings together the de facto catalog of design patterns for SOA and service-orientation.

    Oracle Performance Survival Guide | Guy Harrison | ISBN: 9780137011957
    The fast, complete, start-to-finish guide to optimizing Oracle performance.

    Core JavaServer Faces, Third Edition | David Geary and Cay S. Horstmann | ISBN: 9780137012893
    Provides everything you need to master the powerful and time-saving features of JSF 2.0.

    Solaris Security Essentials | ISBN: 9780137012336
    A superb guide to deploying and managing secure computer environments.

    Effective C#, Second Edition | Bill Wagner | ISBN: 9780321658708
    Respected .NET expert Bill Wagner identifies fifty ways you can leverage the full power of the C# 4.0 language to express your designs concisely and clearly.

    CRC Press Offers - Use code 813DA to get 20% off the title below.

    Secure and Resilient Software Development
    This book illustrates all phases of the secure software development life cycle. It details quality software development strategies that stress resilience requirements with precise, actionable, and ground-level inputs.

    Packt Publishing Offers - Use the promo code "Java35June" to save 35% off each eBook mentioned below.

    JSF 2.0 Cookbook | Anghel Leonard | ISBN: 978-1-847199-52-2
    Packed with fast, practical solutions and techniques for JavaServer Faces developers who want to push past the JSF basics.

    JavaFX 1.2 Application Development Cookbook | Vladimir Vivien | ISBN: 978-1-847198-94-5
    Fast, practical solutions and techniques for building powerful, responsive Rich Internet Applications in JavaFX.

    Read the article

  • Scrum for a single programmer?

    - by Rob Perkins
    I'm billed as the "Windows Expert" in my very small company, which consists of myself, a mechanical engineer working in a sales and training role, and the company's president, working in a design, development, and support role. My role is equally general, but primarily I design and implement whatever programming needs to get done on our product in order for it to run on whichever versions of Windows are current. I just finished watching a high-level overview of the Scrum paradigm, given in a webcast. My question is: Is it worth my time to learn more about this approach to product development, given that my development work items are usually given at a very high level, such as "internationalize and localize the product"? If it is, how would you suggest adapting Scrum for the use of just one programmer? What tools, cloud-based or otherwise, would be useful to that end? If it is not, what approach would you suggest for a single programmer to organize his efforts from day to day? (Perhaps the question reduces to that simple question.)

    Read the article

  • NetBeans Podcast 69

    - by TinuA
    Podcast Guests: Terrence Barr, Simon Ritter, Jaroslav Tulach (it's an all-Oracle lineup!)
    Download mp3: 47 minutes, 39.5 MB
    Subscribe on iTunes

    NetBeans Community News with Geertjan and Tinu
    If you missed the first two Java Virtual Developer Day events in early May, there's still one more LIVE training left, on May 28th. Sign up here to participate live in the APAC time zone or watch later ON DEMAND.
    Video: Get started with Vaadin development using NetBeans IDE
    NetBeans IDE was at JavaCro 2014 and at Hippo Get-together 2014
    Another great lineup is in the works for NetBeans Day at JavaOne 2014. More details coming soon!
    NetBeans' Facebook page is almost at 40,000 Likes! Help us crack that milestone in the next few weeks! Other great ways to stay updated about NetBeans? Twitter and Google+.

    09:28 / Terrence Barr - What to Know about Java Embedded
    Terrence Barr, a Senior Technologist and Principal Product Manager for Embedded and Mobile technologies at Oracle, discusses new features of the Java SE Embedded and Java ME Embedded platforms, and sheds some light on the differences between them and what they have to offer to developers.
    Learn more about Java SE Embedded
    Tutorial: Using Oracle Java SE Embedded Support in NetBeans IDE
    Learn more about Java ME Embedded
    Video: NetBeans IDE Support for Java ME 8
    Video: Installing and Using Java ME SDK 8.0 Plugins in NetBeans IDE
    Follow Terrence Barr to keep up with news in the Embedded space: Blog and Twitter

    26:02 / Simon Ritter - A Massive Serving of Raspberry Pi
    Oracle's Raspberry Pi virtual course is back by popular demand! Simon Ritter, the head of Oracle's Java Technology Evangelism team, chats about the second run of the free Java Embedded course (starting May 30th), what participants can expect to learn, NetBeans' support for Java ME development, and other Java trainings coming to a desktop, laptop or user group near you.
    Sign up for the Oracle MOOC: Develop Java Embedded Applications Using Raspberry Pi
    Find out when Simon Ritter and the Java Evangelism team are coming to a Java event or JUG in your area. Follow them on Twitter: Simon Ritter, Angela Caicedo, Steven Chin, Jim Weaver

    36:58 / Jaroslav Tulach - A Perfect Translation
    Jaroslav Tulach returns to the NetBeans podcast with tales about the Japanese translation of the Practical API Design book, which he contends surpasses all previous translations, including the English edition!
    Order "Practical API Design" (Japanese version)
    Find out why the Japanese translation is the best edition yet

    *Have ideas for NetBeans Podcast topics? Send them to nbpodcast at netbeans dot org.
    *Subscribe to the official NetBeans page on Facebook! Check us out as well on Twitter, YouTube, and Google+.

    Read the article

  • Google I/O 2010 - Google Charts Toolkit

    Google I/O 2010 - Google Charts Toolkit: Google's new unified approach for creating dynamic charts on the web (Google APIs 201). Presenters: Michael Fink, Amit Weinstein. Google Charts Toolkit is Google's unified approach for creating charts on the web. It provides a rich gallery spanning from pie charts to interactive heat maps and from organizational trees to motion charts. The toolkit lets developers choose between JavaScript-based client-side rendering and image-based server-side rendering. We will present the relative strengths of these two approaches, and unveil the future visual design of Google Charts. For all I/O 2010 sessions, please go to code.google.com. Time: 56:50.

    Read the article

  • How do you document your development process?

    - by David
    My current state is a mixture of spreadsheets, wikis, documents, and dated folders for my input/configuration and output files, plus bzr version control for code. I am relatively new to programming that requires this level of documentation, and I would like to find a better, more coherent approach.

    Update (for clarity): My inputs are data used to generate configuration files with parameter values, and my outputs are analyses of model predictions. I would really like to have an approach that allows me to associate particular configurations with particular outputs, so that I can ask questions of my documentation such as "what causes over/under-estimates?" or "what causes error 'X'?".
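    One lightweight convention (a sketch, assuming plain-text configuration files): name each run's output directory after a hash of the exact configuration that produced it, and append the pairing to a run log, so questions like "which configuration caused error X?" become a lookup:

        import hashlib
        import json
        import shutil
        import time
        from pathlib import Path

        def record_run(config_path: str, results_dir: str = "runs") -> Path:
            """Create a run directory keyed by content hash + date of the config."""
            config_bytes = Path(config_path).read_bytes()
            digest = hashlib.sha1(config_bytes).hexdigest()[:10]  # stable id for this exact config
            run_dir = Path(results_dir) / f"{time.strftime('%Y-%m-%d')}_{digest}"
            run_dir.mkdir(parents=True, exist_ok=True)
            shutil.copy(config_path, run_dir / "config.used")     # provenance travels with output
            # append to an index so config <-> output questions are one lookup
            with open(Path(results_dir) / "runs.jsonl", "a") as log:
                log.write(json.dumps({"run": run_dir.name, "config_sha1": digest}) + "\n")
            return run_dir   # write model outputs/analyses into this directory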

    Read the article

  • Got an idea for an application, but part of it is patented, any suggestions?

    - by tekiegreg
    Hi there, so I've been working on developing an idea for an application that I think has the potential to be successful. However, after some initial research I've discovered that at least part of my idea is covered by a patent, and the patent in particular is held by a really large company (I don't want to give away specifics for fear I'd draw their attention for sure). I'm debating a few options:

    1) Develop patents around my ideas that don't conflict, and maybe approach the company in question for a license exchange.
    2) Just approach them for a license outright.
    3) Just develop around it anyway and hope for the best :-p

    What have other people done in these situations? Are companies generally willing to grant patent licenses? Are they willing to grant them at reasonable prices? Thoughts?

    Read the article

  • What to send to the server in a real-time FPS game?

    - by syloc
    What is the right way to tell the server the position of our local player? Some documents say that it is better to send the inputs whenever they are produced. Other documents say the client should send its position at a fixed interval.

    With the send-the-inputs approach: what should I do if the player is holding down the direction keys? It means I need to send a packet to the server every frame. Isn't that too much? And there is also the rotation of the player from the mouse input. Here is an example: http://www.gabrielgambetta.com/fpm_live.html

    With the send-the-position-at-a-fixed-interval approach: it sends far fewer messages to the server, but it also reduces responsiveness. So which way is better?
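    A common middle ground (a sketch of my own, not taken from the linked demo; uses Java 16+ records for brevity): sample input every frame, but transmit at a fixed tick such as 20 Hz, packing all frames since the last send into one sequence-numbered packet, so held-down keys do not cost a packet per frame and the server can re-order or de-duplicate:

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical client-side sender: sample every frame, ship at a fixed rate.
        class InputSender {
            record InputSample(int sequence, boolean up, boolean down,
                               boolean left, boolean right, float yaw) {}

            private static final long SEND_INTERVAL_MS = 50;   // 20 packets/second
            private final List<InputSample> pending = new ArrayList<>();
            private long lastSendMs = 0;
            private int nextSequence = 0;

            // Called once per rendered frame with the current key/mouse state.
            void onFrame(boolean up, boolean down, boolean left, boolean right,
                         float yaw, long nowMs) {
                pending.add(new InputSample(nextSequence++, up, down, left, right, yaw));
                if (nowMs - lastSendMs >= SEND_INTERVAL_MS) {
                    sendToServer(pending);   // one packet carrying ~3 frames of input at 60 fps
                    pending.clear();
                    lastSendMs = nowMs;
                }
            }

            void sendToServer(List<InputSample> batch) { /* serialize + send (UDP) */ }
        }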

    Read the article
