Search Results

Search found 21336 results on 854 pages for 'db api'.


  • Am I missing something about these considerations about Leaderboard's database's schema?

    - by misiMe
    I just finished developing a mobile game, and now I want to implement an online leaderboard using MySQL. I'm wondering about the database schema, and I've thought about a few possibilities (I won't go into syntax because my question is just about the logic of it):
    Name: string; Score: integer. I thought of asking for the name just the first time; if you modify it in the future, that is just an update to the name associated with your ID.
    Leaderboard(ID, Name, Score), with ID an auto-increment integer and the primary key. With this kind of idea the DB may grow fast, because if you choose a different name for every score, it adds a new entry.
    Leaderboard(PhoneId, Name, Score), where PhoneId is the unique identifier of the phone and the primary key. A con of this choice is that if you want to play on your friend's phone, you can't enter a different name for your score.
    Leaderboard(Name, Score), where Name is the primary key. With this, if you enter a name that already exists, you will be prompted to choose another one.
    Do you agree with these considerations? What would you do? Am I missing something?
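
    A minimal JDBC sketch of the first idea (auto-increment ID as primary key, with the name updated in place so a rename never adds a row); the table layout, connection URL and values are assumptions for illustration, not taken from the question:

    ```java
    // Sketch only: assumes a MySQL JDBC driver on the classpath and a local "game" schema.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;

    public class LeaderboardSchemaSketch {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost/game", "user", "pass")) {
                try (Statement st = con.createStatement()) {
                    // one row per player, keyed by an auto-increment id
                    st.executeUpdate(
                        "CREATE TABLE IF NOT EXISTS Leaderboard ("
                      + " ID INT AUTO_INCREMENT PRIMARY KEY,"
                      + " Name VARCHAR(32) NOT NULL,"
                      + " Score INT NOT NULL DEFAULT 0)");
                }
                // renaming later is a single UPDATE against the stored id,
                // so changing the name does not create a new entry
                try (PreparedStatement ps = con.prepareStatement(
                        "UPDATE Leaderboard SET Name = ? WHERE ID = ?")) {
                    ps.setString(1, "NewNickname");
                    ps.setLong(2, 42L);
                    ps.executeUpdate();
                }
            }
        }
    }
    ```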

    Read the article

  • Project Jigsaw: On the next train

    - by Mark Reinhold
    I recently proposed to defer Project Jigsaw from Java 8 to Java 9. Feedback on the proposal was about evenly divided as to whether Java 8 should be delayed for Jigsaw, Jigsaw should be deferred to Java 9, or some other, usually less-realistic, option should be taken. The ultimate decision rested, of course, with the Java SE 8 (JSR 337) Expert Group. After due consideration, a strong majority of the EG agreed to my proposal. In light of this decision we can still make progress in Java 8 toward the convergence of the higher-end Java ME Platforms with Java SE. I previously suggested that we consider defining a small number of Profiles which would allow compact configurations of the SE Platform to be built and deployed. JEP 161 lays out a specific initial proposal for such Profiles. There is also much useful work to be done in Java 8 toward the fully-modular platform in Java 9. Alan Bateman has submitted JEP 162, which proposes some changes in Java 8 to smooth the eventual transition to modules, to provide new tools to help developers prepare for modularity, and to deprecate and then, in Java 9, actually remove certain API elements that are a significant impediment to modularization. Thanks to everyone who responded to the proposal with comments and questions. As I wrote initially, deferring Jigsaw to a Java 9 release in 2015 is by no means a pleasant decision. It does, however, still appear to be the best available option, and it is now the plan of record.

    Read the article

  • JSR Updates and EC Meeting Tuesday @ 15:00 PST

    - by Heather VanCura
    JSR 310, Date and Time API, has moved to JCP 2.9 (the first JCP 2.9 JSR!). JSR 236, Concurrency Utilities for Java EE, has published an Early Draft Review; this review ends 15 December 2012. Tomorrow, Tuesday 20 November, is the last public EC meeting of 2012 and the first EC meeting with the merged EC. The second hour of this meeting will be open to the public at 3:00 PM PST. The agenda includes JSR 355, the EC merge implementation report, the JSR 358 (JCP.next.3) status report, the JCP 2.8 status update and the community audit program. Details are below. We hope you will join us, but if you cannot attend, not to worry--the recording and materials will also be public on the JCP.org multimedia page.
    Meeting details
    Date & Time: Tuesday November 20, 2012, 3:00 - 4:00 pm PST
    Location: Teleconference
    Dial-in: +1 (866) 682-4770 (US); Conference code: 627-9803; Security code: 52732 ("JCPEC" on your phone handset). For global access numbers see http://www.intercall.com/oracle/access_numbers.htm or +1 (408) 774-4073
    WebEx: Browse for the meeting from https://jcp.webex.com No registration required (enter your name and email address). Password: JCPEC
    Agenda: JSR 355 (the EC merge) implementation report; JSR 358 (JCP.next.3) status report; 2.8 status update and community audit program; Discussion/Q&A
    Note: The call will be recorded and the recording published on jcp.org, so those who are unable to join in real-time will still be able to participate.

    Read the article

  • Google Analytics Social Tracking implementation. Is Google's example correct?

    - by s_a
    The current Google Analytics help page on Social tracking (developers.google.com/analytics/devguides/collection/gajs/gaTrackingSocial?hl=es-419) links to this page with an example of the implementation: http://analytics-api-samples.googlecode.com/svn/trunk/src/tracking/javascript/v5/social/facebook_js_async.html
    I've followed the example carefully, yet social interactions are not registered. This is the webpage with the non-working setup: http://bit.ly/1dA00dY (obscured domain as per Google's Webmaster Central recommendations for their product forums). This is the structure of the page:
    In the head: the ga async code copied from the Analytics page; a script tag linking to ga_social_tracking.js, stored in the same domain; and the Twitter JS loading tag.
    In the body: the fb-root div; the Facebook async loading JS, including the _ga.trackFacebook(); call; and the social buttons afterwards, i.e. the Like button (with the proper URL) and the Tweet button (with the proper handle).
    That's it. As far as I can tell, I have implemented it exactly like the example, but Likes and Tweets aren't registered. I have also altered ga_social_tracking.js to register the social interactions as events, adding the code below. It doesn't work either. What could be wrong? Thanks!
    Code added to ga_social_tracking.js:

    var url = document.URL;
    var category = 'Social Media';

    /* Facebook */
    FB.Event.subscribe('edge.create', function(href, widget) {
        _gaq.push(['_trackEvent', category, 'Facebook', url]);
    });

    /* Twitter */
    twttr.events.bind('tweet', function(event) {
        _gaq.push(['_trackEvent', category, 'Twitter', url]);
    });

    Read the article

  • Rich snippet for Google Custom Search - Schema.org

    - by Joesoc
    I am trying to extract the book URL from a link using microdata. The format is specified on schema.org. Here is my HTML:

    <div class="col-sm-4 col-md-3" itemscope itemtype="http://schema.org/Book">
      <div class="thumbnail">
        <img src="{{ book.thumbnailurl }}" itemprop="thumbnailUrl" style="width: 100px;height: 200px;">
        <div class="caption">
          <h4><span itemprop="name">{{ book.name }}</span> - <span itemprop="author">{{ book.author }}</span></h4>
          <p><span itemprop="about"> {{ book.about }}</span></p>
          <p>
            <a href="{{ book.url }}" itemprop="url" onclick="trackOutboundLink(‘{{ book.name }}’);">
              <button type="button" class="btn btn-default btn-md">
                <span class="glyphicon glyphicon-book"></span>Read
              </button>
            </a>
          </p>
        </div>
      </div>
    </div>

    When I use the Google snippet testing tool, the JSON API returns the book as an HTML link. However, when I make the call in JavaScript, the value of url is the text "Read". What am I missing?

    Read the article

  • StreamInsight V2.0 Released!

    - by Roman Schindlauer
    The StreamInsight Team is proud to announce the release of StreamInsight V2.0! This is the version that ships with SQL 2012, and as such it has already been available through Connect to SQL CTP customers since December. As part of the SQL 2012 launch activities, we are now making V2.0 available to everyone, following our tradition of providing a separate download page. StreamInsight V2.0 includes a number of stability and performance fixes over its predecessor V1.2. Moreover, it introduces a dependency on the .NET Framework 4.0, as well as on SQL 2012 license keys. For these reasons, we decided to bump the major version number, even though V2.0 does not add new features or API surface. It can be regarded as a stepping stone to the upcoming release 2.1, which will contain significant new APIs (that will depend on .NET 4.0). Head over here to download StreamInsight V2.0. The updated Books Online can be found here. Update: For instructions on how to make your existing application work against the new bits without recompilation, see here. Regards, The StreamInsight Team

    Read the article

  • How to indicate to a web server the language of a resource

    - by Nik M
    I'm writing an HTTP API to a publishing server, and I want resources with representations in multiple languages. A user whose client GETs a resource which has Korean, Japanese and Traditional Chinese representations, and sends Accept-Language: en, ja;q=0.7, should get the Japanese. One resource, identified by one URI, will therefore have a number of different language representations. This seems to me like a totally orthodox use of content negotiation and multiple resource representations. But when each translator comes to provide these alternate language representations to the server, what's the correct way to instruct the server which language to store the representation under? I'm having the translators PUT the representation in its entirety to the same URI, but I can't find out how to do this elegantly. Content-Language is a response header, and none of the request headers seem to fit the bill. It seems my options are:
    1. Invent a new request header.
    2. Supply additional metadata in a multipart/related document.
    3. Provide language as a parameter to the Content-Type of the request, like Content-Type: text/html;language=en.
    I don't want to get into the business of extending HTTP, and I don't feel great about bundling extra metadata into the representation. Neither approach seems friendly to HTTP caches either. So option 3 seems like the best way that I can think of, but even then it's decidedly non-standard to put my own specific parameters on a very well established content type. Is there any by-the-book way of achieving this?
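
    For what it's worth, a minimal sketch of option 3 as the question describes it, sent with Java's java.net.http client (Java 11+); the endpoint URL and payload are assumptions, and the language parameter is, as noted above, non-standard:

    ```java
    // Sketch only: a translator PUTs a full representation and tags its language
    // via a (non-standard) media type parameter, as in the question's option 3.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class PutTranslationSketch {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            String japaneseHtml = "<html><body>...</body></html>";   // the ja representation
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/articles/42"))
                    .header("Content-Type", "text/html;language=ja")
                    .PUT(HttpRequest.BodyPublishers.ofString(japaneseHtml))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
        }
    }
    ```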

    Read the article

  • Design in "mixed" languages: object oriented design or functional programming?

    - by dema80
    In the past few years, the languages I like to use have become more and more "functional". I now use languages that are a sort of "hybrid": C#, F#, Scala. I like to design my application using classes that correspond to the domain objects, and use functional features where this makes coding easier, more concise and safer (especially when operating on collections or when passing functions). However, the two worlds "clash" when it comes to design patterns. The specific example I faced recently is the Observer pattern. I want a producer to notify some other code (the "consumers/observers", say a DB storage, a logger, and so on) when an item is created or changed. I initially did it "functionally" like this:
    producer.foo(item => { updateItemInDb(item); insertLog(item) }) // calls the function passed as argument as an item is processed
    But I'm now wondering if I should use a more "OO" approach:
    interface IItemObserver { onNotify(Item) }
    class DBObserver : IItemObserver ...
    class LogObserver : IItemObserver ...
    producer.addObserver(new DBObserver)
    producer.addObserver(new LogObserver)
    producer.foo() // calls observers in a loop
    What are the pros and cons of the two approaches? I once heard an FP guru say that design patterns are there only because of the limitations of the language, and that's why there are so few in functional languages. Maybe this could be an example of it? EDIT: In my particular scenario I don't need it, but... how would you implement removal and addition of "observers" in the functional way? (I.e. how would you implement all the functionality in the pattern?) Just by passing a new function, for example?
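
    On the EDIT: one way to keep the functional flavour while still supporting add/remove is to treat each observer as a plain function and hold them in a list. A minimal Java sketch, with the Item type and wiring invented for illustration:

    ```java
    // Sketch only: observers are just Consumer<Item> functions; add/remove
    // operates on the list, and removal needs the same function reference.
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.function.Consumer;

    class Item {
        final String name;
        Item(String name) { this.name = name; }
    }

    class Producer {
        private final List<Consumer<Item>> observers = new CopyOnWriteArrayList<>();

        void addObserver(Consumer<Item> observer)    { observers.add(observer); }
        void removeObserver(Consumer<Item> observer) { observers.remove(observer); }

        void foo() {
            Item item = new Item("example");         // an item is created or changed
            observers.forEach(o -> o.accept(item));  // notify every registered function
        }
    }

    class FunctionalObserverDemo {
        public static void main(String[] args) {
            Producer producer = new Producer();
            Consumer<Item> logger = item -> System.out.println("log " + item.name);
            producer.addObserver(item -> System.out.println("store " + item.name));
            producer.addObserver(logger);
            producer.foo();                  // both observers fire
            producer.removeObserver(logger);
            producer.foo();                  // only the "store" observer fires
        }
    }
    ```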

    Read the article

  • Structuring cascading properties - parent only or parent + entire child graph?

    - by SB2055
    I have a Folder entity that can be Moderated by users. Folders can contain other folders, so I may have a structure like this:
    Folder 1
    Folder 2
    Folder 3
    Folder 4
    I have to decide how to implement Moderation for this entity. I've come up with two options:
    Option 1: When the user is given moderation privileges to Folder 1, define a moderator relationship between Folder 1 and User 1. No other relationships are added to the db. To determine if the user can moderate Folder 3, I check and see if User 1 is the moderator of any parent folders. This seems to alleviate some of the complexity of handling updates / moved entities / additions under Folder 1 after the relationship has been defined, and reverting the relationship means I only have to deal with one entity.
    Option 2: When the user is given moderation privileges to Folder 1, define a new relationship between User 1 and Folder 1, and all child entities down to the grandest of grandchildren when the relationship is created, and if it's ever removed, iterate back down the graph to remove the relationship. If I add something under Folder 2 after this relationship has been made, I just copy all Moderators into the new Entity. But when I need to show only the top-level Folders that a user is Moderating, I need to query all folders that have a parent folder that the user does not moderate, as opposed to option 1, where I just query any items that the user is moderating.
    I think it comes down to determining whether users will be querying for all parent items more than they'll be querying child items... if so, then option 1 seems better. But I'm not sure. Is either approach better than the other? Why? Or is there another approach that's better than both? I'm using Entity Framework in case it matters.
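
    A small sketch of how option 1's check might look in code: only explicit (user, folder) grants are stored, and the read path walks up the parent chain. The class shapes are assumptions for illustration, not the actual Entity Framework model:

    ```java
    // Sketch only: moderation is granted on one folder and inherited by walking parents.
    import java.util.HashSet;
    import java.util.Set;

    class Folder {
        final long id;
        final Folder parent;   // null for a root folder
        Folder(long id, Folder parent) { this.id = id; this.parent = parent; }
    }

    class ModerationService {
        private final Set<String> grants = new HashSet<>();   // explicit grants only

        void grant(long userId, Folder folder) {
            grants.add(userId + ":" + folder.id);              // e.g. User 1 on Folder 1
        }

        boolean canModerate(long userId, Folder folder) {
            for (Folder f = folder; f != null; f = f.parent) {
                if (grants.contains(userId + ":" + f.id)) {
                    return true;   // inherited from the nearest granted ancestor
                }
            }
            return false;
        }
    }
    ```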

    Read the article

  • Should I redo an abandoned project with Lightswitch?

    - by Elson
    I had a small project that I was doing on the side. It was basically a couple of forms linked to a DB. Access was out, because it was specifically meant to be a web application. Being a small project, I used ASP.NET Dynamic Data, but, for various reasons, the project ended before deployment. I met the client recently, and he said there was still a need for it. I'm considering restarting the project with Dynamic Data, but I've seen some LightSwitch demos and was suitably impressed with the beta. I will wait for the RTM if I use it, but is it a good idea to use LightSwitch to replace the Dynamic Data? The amount of work I put into the Dynamic Data site isn't really an issue. Additional information: it's a system that tracks production in a small factory, broken down by line, machine and section, and it will generate reports. I would guess that the data structure will remain fairly constant over time, but that the reporting requirements will grow. The other thing is that the factory is part of a larger group, and I'm hopeful that, if this system succeeds, similar work will be forthcoming for other factories.

    Read the article

  • "TDD is about design, not verification"; concretely, what does that mean?

    - by sigo
    I've been wondering about this. What exactly do we mean by design and verification? Should I just apply TDD to make sure my code is SOLID, and not check whether its external behaviour is correct? Should I use BDD for verifying that the behaviour is correct? Where I also get confused is regarding TDD code katas; to me they looked more about verification than design. Shouldn't they be called BDD katas instead of TDD katas? I reckon that, for example, the Uncle Bob bowling kata leads in the end to a simple and nice internal design, but I felt that most of the process was centred more around verification than design. Design seemed to be a side effect of testing the external behaviour incrementally. I didn't feel that we were focusing most of our efforts on design, but more on verification, while normally we are told the contrary: that in TDD, verification is a side effect and design is the main purpose. So my question is, what should I focus on exactly when I do TDD: SOLID, external API usability, or something else? And how can I do that without being focused on verification? What do you focus your energy on when you are practising TDD?

    Read the article

  • A Web Service to collect data from local servers every hour

    - by anilerduran
    I'm trying to find a way to collect data from different servers around the world. Here are the details: There is only a single PowerShell script on the servers, which encrypts data (a simple CSV file) and sends it using a preferred method (HTTP/HTTPS POST would work). I have no further control over those servers: I can't install any service, process, etc.; I can only configure the script to execute every hour. The script will also have an encrypted username/password/license key for every server, and it will compress the data and send it to me along with this information. So I need a service on the cloud (I'm not sure if a web service is the right solution) that will:
    - receive the data sent from the servers,
    - authenticate the request to recognize the sender using the license key/username/password, and most importantly,
    - redirect/send this filecab to my SQL Server on the cloud (Azure).
    It should also separate the data according to the customer information in the license key, so that every customer's data is stored in dedicated DBs/tables on my SQL Server. All the processes above should be completed automatically, with no manual steps. Question: is a web service (SOAP or RESTful) the right solution for this?
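
    As a rough illustration of the "receive, authenticate, hand off" piece (whatever the final SOAP/REST choice), here is a sketch using the JDK's built-in HTTP server; the header name, key check and storage call are placeholders standing in for the real Azure/SQL plumbing:

    ```java
    // Sketch only: accepts an hourly POST, rejects unknown license keys,
    // and hands the payload to per-customer storage.
    import com.sun.net.httpserver.HttpServer;
    import java.io.InputStream;
    import java.net.InetSocketAddress;

    public class CollectorSketch {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/upload", exchange -> {
                String licenseKey = exchange.getRequestHeaders().getFirst("X-License-Key");
                if (licenseKey == null || !isKnownCustomer(licenseKey)) {
                    exchange.sendResponseHeaders(401, -1);     // reject unknown senders
                    exchange.close();
                    return;
                }
                try (InputStream body = exchange.getRequestBody()) {
                    byte[] payload = body.readAllBytes();      // compressed, encrypted CSV
                    storeForCustomer(licenseKey, payload);     // route to that customer's tables
                }
                exchange.sendResponseHeaders(204, -1);
                exchange.close();
            });
            server.start();
        }

        static boolean isKnownCustomer(String licenseKey) { return true; }          // placeholder
        static void storeForCustomer(String key, byte[] data) { /* placeholder */ } // placeholder
    }
    ```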

    Read the article

  • What patterns book for iOS development contains this specific information? [closed]

    - by Brett Ryan
    I've read several books on iOS development and Objective-C; however, what a lot of them teach is how to work with interfaces, and they all keep the model inside the view controller, i.e. a UITableViewController-based view will simply have an NSArray as its model. I'm interested in what the best practices are for designing the structure of an application. Specifically, I'm interested in best practices for the following:
    How to separate a model from the view controller. I think I know how to do this by simply replacing the NSArray-style example with a specific model object; however, what I do not know is how to alert the view when the model changes. For example, in .NET I would solve this by conforming to INotifyPropertyChanged and data binding, and similarly in Java I would use PropertyChangeListener.
    How to create a service model for my domain objects. For example, I want to learn the best way to create a service for a hypothetical Widget object to manage an internal DB, and also services for communicating with remote endpoints. I need to learn the best ways to do this so that interface components can subscribe to events such as widgetUpdated. These services should be singleton classes and somehow dependency-injected into model/controller objects.
    Books I've read so far are: Programming in Objective-C (4th Edition); Beginning iOS 5 Development: Exploring the iOS SDK; The iOS 5 Developer's Cookbook: Expanded Electronic Edition: Essentials and Advanced Recipes for iOS Programmers; Learn Objective-C on the Mac: For OS X and iOS. I've also purchased the following updated books but not yet read them: The Core iOS 6 Developer's Cookbook (4th edition); Programming in Objective-C (5th Edition).
    I come from a Java and C# background with 15 years' experience, and I understand that many of the ways I would do things in these languages may not fit the Objective-C way of developing applications. Would someone be able to point me to a book containing this specific subject matter?
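
    For comparison, here is a minimal sketch of the Java mechanism the question mentions (PropertyChangeSupport/PropertyChangeListener); it only illustrates the Java side of the idea, with invented names, not an iOS recommendation:

    ```java
    // Sketch only: a model publishes property changes; the "view" registers a listener.
    import java.beans.PropertyChangeListener;
    import java.beans.PropertyChangeSupport;

    class WidgetModel {
        private final PropertyChangeSupport changes = new PropertyChangeSupport(this);
        private String name;

        void addListener(PropertyChangeListener l)    { changes.addPropertyChangeListener(l); }
        void removeListener(PropertyChangeListener l) { changes.removePropertyChangeListener(l); }

        void setName(String newName) {
            String old = this.name;
            this.name = newName;
            changes.firePropertyChange("name", old, newName);   // listeners are notified here
        }
    }

    class WidgetDemo {
        public static void main(String[] args) {
            WidgetModel model = new WidgetModel();
            model.addListener(evt -> System.out.println(
                    evt.getPropertyName() + ": " + evt.getOldValue() + " -> " + evt.getNewValue()));
            model.setName("updated");   // prints "name: null -> updated"
        }
    }
    ```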

    Read the article

  • HTML Tidy in NetBeans IDE (Part 2)

    - by Geertjan
    This is what I was aiming for in the previous blog entry: What you can see above (especially if you click to enlarge it) is that I have HTML Tidy integrated into the NetBeans analyzer functionality, which is pluggable from 7.2 onwards. Well, if you set an implementation dependency on "Static Analysis Core", since it's not an official API yet. Also, the scopes of the analyzer functionality are not pluggable. That means you can 'only' set the analyzer's scope to one or more projects, one or more packages, or one or more files. Not one or more folders, which means you can't have a bunch of HTML files in a folder that you access via the Favorites window and then run the analyzer on that folder (or on multiple folders). Thus, to try out my new code, I had to put some HTML files into a package inside a Java application. Then I chose that package as the scope of the analyzer. Then I ran all the analyzers (i.e., standard NetBeans Java hints, FindBugs, as well as my HTML Tidy extension) on that package. The screenshot above is the result. Here's all the code for the above, which is a port of the Action code from the previous blog entry into a new Analyzer implementation:

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.io.StringWriter;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import javax.swing.JComponent;
    import javax.swing.text.Document;
    import org.netbeans.api.fileinfo.NonRecursiveFolder;
    import org.netbeans.modules.analysis.spi.Analyzer;
    import org.netbeans.modules.analysis.spi.Analyzer.AnalyzerFactory;
    import org.netbeans.modules.analysis.spi.Analyzer.Context;
    import org.netbeans.modules.analysis.spi.Analyzer.CustomizerProvider;
    import org.netbeans.modules.analysis.spi.Analyzer.WarningDescription;
    import org.netbeans.spi.editor.hints.ErrorDescription;
    import org.netbeans.spi.editor.hints.ErrorDescriptionFactory;
    import org.netbeans.spi.editor.hints.Severity;
    import org.openide.cookies.EditorCookie;
    import org.openide.filesystems.FileObject;
    import org.openide.loaders.DataObject;
    import org.openide.util.Exceptions;
    import org.openide.util.lookup.ServiceProvider;
    import org.w3c.tidy.Tidy;

    public class TidyAnalyzer implements Analyzer {

        private final Context ctx;

        private TidyAnalyzer(Context cntxt) {
            this.ctx = cntxt;
        }

        @Override
        public Iterable<? extends ErrorDescription> analyze() {
            List<ErrorDescription> result = new ArrayList<ErrorDescription>();
            for (NonRecursiveFolder sr : ctx.getScope().getFolders()) {
                FileObject folder = sr.getFolder();
                for (FileObject fo : folder.getChildren()) {
                    for (ErrorDescription ed : doRunHTMLTidy(fo)) {
                        if (fo.getMIMEType().equals("text/html")) {
                            result.add(ed);
                        }
                    }
                }
            }
            return result;
        }

        private List<ErrorDescription> doRunHTMLTidy(FileObject sr) {
            final List<ErrorDescription> result = new ArrayList<ErrorDescription>();
            Tidy tidy = new Tidy();
            StringWriter stringWriter = new StringWriter();
            PrintWriter errorWriter = new PrintWriter(stringWriter);
            tidy.setErrout(errorWriter);
            try {
                Document doc = DataObject.find(sr).getLookup().lookup(EditorCookie.class).openDocument();
                tidy.parse(sr.getInputStream(), System.out);
                String[] split = stringWriter.toString().split("\n");
                for (String string : split) {
                    //Bit of ugly string parsing coming up:
                    if (string.startsWith("line")) {
                        final int end = string.indexOf(" c");
                        int lineNumber = Integer.parseInt(string.substring(0, end).replace("line ", ""));
                        string = string.substring(string.indexOf(": ")).replace(":", "");
                        result.add(ErrorDescriptionFactory.createErrorDescription(
                                Severity.WARNING,
                                string,
                                doc,
                                lineNumber));
                    }
                }
            } catch (IOException ex) {
                Exceptions.printStackTrace(ex);
            }
            return result;
        }

        @Override
        public boolean cancel() {
            return true;
        }

        @ServiceProvider(service = AnalyzerFactory.class)
        public static final class MyAnalyzerFactory extends AnalyzerFactory {

            public MyAnalyzerFactory() {
                super("htmltidy", "HTML Tidy", "org/jtidy/format_misc.gif");
            }

            public Iterable<? extends WarningDescription> getWarnings() {
                return Collections.EMPTY_LIST;
            }

            @Override
            public <D, C extends JComponent> CustomizerProvider<D, C> getCustomizerProvider() {
                return null;
            }

            @Override
            public Analyzer createAnalyzer(Context cntxt) {
                return new TidyAnalyzer(cntxt);
            }
        }
    }

    The above only works on packages, not on projects and not on individual files.

    Read the article

  • How to reduce the CPU load on a hosting with WordPress installed as a CMS? [on hold]

    - by Akky Awesøme
    I have been using HostGator's Hatchling plan for three months. I got an email from the host saying that my website is creating an overload on the CPU. They said that I am eating up their processor and, as a precaution, they have temporarily suspended my account. When I contacted their customer support, they said: "You have to optimize your database and use some sort of caching mechanism, where the script does not need to generate a new page with every request; that helps to lower the load that a script will cause." I am not a technical geek, and I am wondering how I will do this. I don't have the resources to hire a web developer to do this job. My website has been down for 48 hours. I was using WP Super Cache along with Cloudflare's free plan. Now I have installed the optimize-db plugin and optimized my database. Please give me some more tips on how to optimize my database to reduce CPU usage. Any help would be appreciated.

    Read the article

  • Dynamic Query Generation : suggestion for better approaches

    - by Gaurav Parmar
    I am currently designing a feature in my web application where a verified user can execute queries he chooses from a predefined set of queries, with the where clause varying according to the user's choice. For example, table ABC contains the following template query, called SecretReport: "Select def as FOO, ghi as BAR from MNO where ". SecretReport can have the parameters XYZ and ILP. XYZ can have the values 1 and 2, and ILP can have 3 and 4, so if the user chooses ILP=3, he will see the result of the following query on his screen: "Select def as FOO, ghi as BAR from MNO where ILP=3". The user is also allowed permutations of XYZ / ILP. My initial thought is that the user will be shown a list of report names, and each report will have parameters and corresponding values. But this approach, although technically simple, does not appear intuitive. I would like to extend this functionality to a more generic level, such that the user can choose a table and query based on his requirements. Of course, we do not want the end user to take complete control of the DB, but only of tables and fields that are relevant to him. At present we define what is relevant in the code, but I want the admin to take over this functionality, such that he can decide what is relevant and expose it to the user. On the user's side, it should be intuitive what is available to him and what queries he can form. Please share your thoughts on the most user-friendly way to provide this feature to the end user.
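
    One common shape for this, sketched below: the admin-defined template and the columns a user may filter on live in configuration, the column name is validated against that whitelist, and the value goes in as a bind parameter. Names are taken from the question's example; the whitelist mechanics are an assumption:

    ```java
    // Sketch only: run the predefined "SecretReport" with a user-chosen, whitelisted filter.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.Set;

    public class ReportRunnerSketch {
        private static final String SECRET_REPORT =
                "SELECT def AS FOO, ghi AS BAR FROM MNO WHERE %s = ?";
        private static final Set<String> ALLOWED_COLUMNS = Set.of("XYZ", "ILP");

        public static void run(Connection con, String column, int value) throws Exception {
            if (!ALLOWED_COLUMNS.contains(column)) {
                throw new IllegalArgumentException("Column not allowed: " + column);
            }
            String sql = String.format(SECRET_REPORT, column);   // column comes from the whitelist only
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, value);                             // user value is bound, not concatenated
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("FOO") + " | " + rs.getString("BAR"));
                    }
                }
            }
        }
    }
    ```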

    Read the article

  • Material System

    - by Towelie
    I'm designing a Material/Shader System (target API DX10+, maybe OpenGL3+; for now, DX10 only). I know there have been a lot of topics about this, but I can't find what I need. I don't want to do any kind of script compilation/parsing at run time. So there is some artist-created material, written in an analog of Cg. It is compiled to HLSL code and then to the final shader. There are also some hard-coded constant buffers, like:
    cbuffer EveryFrameChanging
    {
        float4x4 matView;
        float time;
        float delta;
    }
    Shaders use shared constant buffers to get their parameters. For each mesh in the scene, I gather what it needs and what it can provide (normals, binormals, etc.) and find the corresponding shader permutation or calculate the missing parts. Also, during the build, I calculate render states and the permutation or hash for each shader, which will later be used for sorting, or I even give it an ID from 0 to ShaderCount without gaps for sorting. A FinalShader has only one technique and one pass. After that, each mesh gets a shader assigned and is ready to render. Some pseudocode:
    SetConstantBuffer(ConstantBuffer::PerFrame);
    foreach (shader in FinalShaders)
        SetConstantBuffer(ConstantBuffer::PerShader, shader);
        SetRenderState(shader);
        foreach (mesh in shader.GetAllMeshes)
            SetConstantBuffer(ConstantBuffer::PerMesh, mesh);
            SetBuffers(mesh);
            Draw();
    class FinalShader
    {
    public:
        UUID m_ID;
        RenderState m_RenderState;
        CBufferBindings m_BufferBindings;
    }
    But I have no idea how to create this Cg-like language, and do I really need it?

    Read the article

  • Should I be looking for developers with specific skill sets or generalists that need to learn?

    - by Lostsoul
    Thanks to the great help of this site and SO, I've been able to make a prototype of the software I want to sell, but unfortunately, although the prototype works, I think my code quality is very low. I didn't use much OOP or many design patterns, so although my code is understandable to me, I think a normal developer would faint if they had to read it. So I wanted to hire a developer to raise the quality a bit and improve some of my implementations of APIs that I may not have done correctly. I'm having problems hiring a developer, though. I have met two developers and had them read my software specs. The problem is, they lacked my business's domain knowledge (which is completely understandable and no biggie), but they also lacked knowledge of the underlying tech systems I used, such as Hadoop, HBase, CUDA, etc. I spent a lot of time explaining map/reduce, bigtables and other technologies I used. I thought this was common knowledge because of my interactions with people on this site, but the people I met with mentioned they had never had to deal with these things, so they didn't know them. My question is: for software projects that are hiring contract developers, is it a danger if the developer does not have experience with the underlying technologies, or can a general developer who is accomplished in another area realistically pick up new technologies? I did a very quick back-of-envelope calculation, and I think the upfront costs would be similar if I hire a student or a developer with no experience in my technologies who will work many hours, versus hiring a highly experienced developer who charges double but finishes in half the time. But what other risks should I be considering or worried about? Also, if I do hire a generalist, should I be paying for the time it takes them to learn Hadoop or CUDA if they are contractors? (It seems to make business sense, but I'm not sure how fair it is to them if they do not use the skill again.) I'm a bit confused, so any suggestions would be great.

    Read the article

  • How can I design my classes for a calendar based on database events?

    - by Gianluca78
    I'm developing a web calendar in PHP (using Symfony2), inspired by iCal, for a project of mine. At the moment I have two classes: a class Calendar and a class CalendarCell. Here are the two classes' properties and method declarations:

    class Calendar {
        private $month;
        private $monthName;
        private $year;
        private $calendarCellList = array();
        private $translator;
        public function __construct($month, $year, $translator) {}
        public function getCalendarCells() {}
        public function getMonth() {}
        public function getMonthName() {}
        public function getNextMonth() {}
        public function getNextYear() {}
        public function getPreviousMonth() {}
        public function getPreviousYear() {}
        public function getYear() {}
        private function calculateDaysPreviousMonth() {}
        private function calculateNumericDayOfTheFirstDayOfTheWeek() {}
        private function isCurrentDay(\DateTime $dateTime) {}
        private function isDifferentMonth(\DateTime $dateTime) {}
    }

    class CalendarCell {
        private $day;
        private $month;
        private $dayNameAbbreviation;
        private $numericDayOfTheWeek;
        private $isCurrentDay;
        private $isDifferentMonth;
        private $translator;
        public function __construct(array $parameters) {}
        public function getDay() {}
        public function getMonth() {}
        public function getDayNameAbbreviation() {}
        public function isCurrentDay() {}
        public function isDifferentMonth() {}
    }

    Each calendar day can include many events stored in a database. My question is: what is the best way to manage these events in my classes? I'm thinking of adding an eventList property to CalendarCell and populating it with an array of CalendarEvent objects fetched from the database. This kind of solution doesn't allow other coders to reuse the classes without a DB (because I would also have to inject at least a repository service) just to create and visualize a calendar... so maybe it would be better to extend CalendarCell (for instance into a CalendarCellEvent) and add the database features there? I feel like I'm missing some crucial design pattern! Any suggestion will be very appreciated!
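
    One possible direction for the reuse concern, sketched in Java only to show the shape (the actual classes are PHP, and all names here are invented): inject a small event-provider interface into the cell, so a database-backed repository and a DB-free in-memory provider are interchangeable:

    ```java
    // Sketch only: the cell asks an injected provider for its events and never touches the DB.
    import java.time.LocalDate;
    import java.util.List;

    interface CalendarEventProvider {
        List<String> eventsOn(LocalDate day);
    }

    class Cell {
        private final LocalDate day;
        private final List<String> events;

        Cell(LocalDate day, CalendarEventProvider provider) {
            this.day = day;
            this.events = provider.eventsOn(day);
        }

        List<String> getEvents() { return events; }
    }

    class CalendarDemo {
        public static void main(String[] args) {
            // a provider with no database at all, usable for reuse or tests
            CalendarEventProvider inMemory = day -> List.of("Example event on " + day);
            Cell cell = new Cell(LocalDate.of(2024, 1, 15), inMemory);
            System.out.println(cell.getEvents());
        }
    }
    ```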

    Read the article

  • Using a random string to authenticate HMAC?

    - by mrwooster
    I am designing a simple web service and want to use HMAC for authentication to the service. For the purpose of this question we have: a web service at example.com; a secret key shared between a user and the server [K]; a consumer ID which is known to the user and the server (but is not necessarily secret) [D]; and a message which we wish to send to the server [M]. The standard HMAC implementation would involve using the secret key [K] and the message [M] to create the hash [H], but I am running into issues with this. The message [M] can be quite long and tends to be read from a file. I have found it's very difficult to produce a correct hash consistently across multiple operating systems and programming languages because of hidden characters which make it into various file formats. This is of course bad implementation on the client side (100%), but I would like this web service to be easily accessible and not have trouble with different file formats. I was thinking of an alternative which would allow the use of a short (5-10 char) random string [R] rather than the message for authentication, e.g. H = HMAC(K,R). The user then passes the random string to the server and the server checks the HMAC server side (using random string + shared secret). As far as I can see, this produces the following issues:
    There is no message integrity - this is OK; message integrity is not important for this service.
    A user could re-use the hash with a different message - I can see two ways around this: combine the random string with a timestamp so the hash is only valid for a set period of time, or only allow each random string to be used once.
    Since the client is in control of the random string, it is easier to look for collisions.
    I should point out that the principal reason for authentication is to implement rate limiting on the API service. There is zero need for message integrity, and it's not a big deal if someone can forge a single request (but it is if they can forge a very large number very quickly). I know that the correct answer is to make sure the message [M] is the same on all platforms/languages before hashing it. But, taking that out of the equation, is the above proposal an acceptable second best?
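
    A minimal sketch of the proposed scheme (H = HMAC(K, R) with a timestamp folded in to bound reuse), using the JDK's javax.crypto.Mac; the encoding of the nonce/timestamp pair and exactly what gets sent are assumptions for illustration:

    ```java
    // Sketch only: client side of H = HMAC(K, R + timestamp); the server recomputes
    // the same value, checks the timestamp window, and rejects reused nonces.
    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;
    import java.util.Base64;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class HmacSketch {
        public static void main(String[] args) throws Exception {
            byte[] sharedSecret = "K-shared-with-server".getBytes(StandardCharsets.UTF_8);

            // R: short client-chosen random string, plus a timestamp to limit its lifetime
            byte[] nonceBytes = new byte[8];
            new SecureRandom().nextBytes(nonceBytes);
            String nonce = Base64.getUrlEncoder().withoutPadding().encodeToString(nonceBytes);
            long timestamp = System.currentTimeMillis() / 1000L;
            String signedValue = nonce + "." + timestamp;

            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(sharedSecret, "HmacSHA256"));
            byte[] h = mac.doFinal(signedValue.getBytes(StandardCharsets.UTF_8));
            String signature = Base64.getUrlEncoder().withoutPadding().encodeToString(h);

            // the request would carry D (consumer id), signedValue, and signature
            System.out.println(signedValue + " " + signature);
        }
    }
    ```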

    Read the article

  • Which platform to choose: .NET or Java? [closed]

    - by SahilMahajanMj
    I know that there is a lot of healthy discussion about this topic, but my case is entirely different from those discussions. I have just graduated. During my degree I did both C#/.NET and Java. Now I am working as an Android programmer at a company. Recently, I have heard a lot about the pros and cons of these platforms. Our seniors used to suggest that ASP.NET is a lot better than JSP for website development, that .NET provides a lot of built-in tools and APIs for building powerful applications, that Java's platform independence has always been an issue among them, that Java has more security features, and that .NET has more powerful IDEs. Now I am a bit confused about which platform to choose. I found Java the better of the two, which is why I preferred Android. Also, the discussion always ends with the claim that Java offers less job scope than .NET does. I would like to hear suggestions on this topic, but it would be better if you consider the Indian IT market in this discussion.

    Read the article

  • JEditorPane Code Completion

    - by Geertjan
    Code completion in a JEditorPane: Unfortunately, a lot of this solution depends on the Java Editor support in the IDE. Therefore, to use it in its current state, you'll need lots of Java Editor related JARs even though your own application probably doesn't include a Java Editor. A key thing one needs to do is implement the NetBeans Code Completion API, using the related tutorial in the NetBeans Platform Learning Trail, but register the CompletionProvider as follows:

    @MimeRegistration(mimeType = "text/x-dialog-binding", service = CompletionProvider.class)

    Then, in the TopComponent, include this code, which will bind all the completion providers in the above location, i.e., text/x-dialog-binding, to the JEditorPane:

    EditorKit kit = CloneableEditorSupport.getEditorKit("text/x-java");
    jEditorPane1.setEditorKit(kit);
    FileObject fob;
    try {
        fob = FileUtil.getConfigRoot().createData("tmp.java");
        DataObject dob = DataObject.find(fob);
        jEditorPane1.getDocument().putProperty(
                Document.StreamDescriptionProperty,
                dob);
        DialogBinding.bindComponentToFile(fob, 0, 0, jEditorPane1);
        jEditorPane1.setText("Egypt");
    } catch (IOException ex) {
        Exceptions.printStackTrace(ex);
    }

    Not a perfect solution, a bit hacky, with a high overhead, but a start nonetheless. Someone should look in the NetBeans sources to see how this actually works and then create a generic solution that is not tied to the Java Editor.

    Read the article

  • Should one use a separate database for application data and user data?

    - by trycatch
    I’ve been working on a project for a little while and I’m unsure which is the better architecture. I’m interested in the consensus. The answer to me seems fairly obvious but something about it is digging at me and I can't pick out what. The TL;DR is: how do you handle a program with application data and user data in the same DB which needs to be able to receive updates to the application data periodically? One database for user data and one for application, or both in one? The detailed version is.. if an application has a database which needs to maintain application data AND user data, and the user data all references application data, it feels more natural to me to store them in the same database. But if there exists a need to be able to update the application data within this database periodically, should this be stripped into two databases so that one can simply download the updated application data database file as an update and replace the old one? Or should they remain as one database, and the application data be updated via a script which inserts the new data into the existing database? The second sounds clearly preferable to me... but for some reason just doesn’t feel right, and I can't pick out quite why.
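
    If the single-database route wins, the "update via a script" option can be as simple as a versioned updater run at startup; a sketch, with the version table and statements invented for illustration:

    ```java
    // Sketch only: apply application-data updates inside the one shared database,
    // tracked by a version table, leaving user tables untouched.
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class AppDataUpdater {
        public static void applyUpdates(Connection con) throws Exception {
            try (Statement st = con.createStatement()) {
                st.executeUpdate("CREATE TABLE IF NOT EXISTS app_data_version (version INT)");
                int current = 0;
                try (ResultSet rs = st.executeQuery(
                        "SELECT COALESCE(MAX(version), 0) FROM app_data_version")) {
                    if (rs.next()) current = rs.getInt(1);
                }
                if (current < 2) {
                    // update shipped with release 2: new application rows only
                    st.executeUpdate(
                        "INSERT INTO reference_items (code, label) VALUES ('X9', 'New item')");
                    st.executeUpdate("INSERT INTO app_data_version (version) VALUES (2)");
                }
            }
        }
    }
    ```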

    Read the article

  • Single CAS web application in a cluster

    - by Dolf Dijkstra
    Recently a customer wanted to set up a cluster of CAS nodes to be used together with WebCenter Sites. In the process of setting this up they realized that they needed to create a web application per managed server. They did not want to have this management burden but would like to have one web application deployed to multiple nodes. The reason that there is a need for a unique application per node is that the web application contains information that needs to be unique per node: the postfix for the ticket id. My customer would like to externalize the node-specific configuration to either a specific classpath per managed server or to system properties set at startup.
    It turns out that the postfix for ticket ids is managed through a property, host.name, and that this property can be externalized.
    The host.name property is used in /webapps/cas/WEB-INF/spring-configuration/uniqueIdGenerators.xml.
    It is set in /webapps/cas/WEB-INF/spring-configuration/propertyFileConfigurer.xml in a PropertyPlaceholderConfigurer.
    The documentation for PropertyPlaceholderConfigurer is at http://static.springsource.org/spring/docs/2.0.x/api/org/springframework/beans/factory/config/PropertyPlaceholderConfigurer.html and indicates that the properties defined through the PropertyPlaceholderConfigurer can be externalized.
    To enable this externalization you would need to change host.properties so it is generic for all the managed servers and thus can be reused for all of them: host.name=${cluster.node.id}
    The next step is to change the startup scripts for the managed servers and add a system property -Dcluster.node.id=<something unique and stable>.
    Voilà, the postfix is externalized and the web application can be shared amongst the cluster nodes.

    Read the article

  • How do I make apt-get install commands not display every package that is being installed?

    - by rajlego
    When I install something in the terminal, it often shows me a few status messages. For one, it shows the download rate (which is fine). However, when I install something, it can display:

    Unpacking libgranite2:amd64 (0.3.0~r732+pkg64~ubuntu0.3.1) ...
    Selecting previously unselected package slingshot-launcher.
    Preparing to unpack .../slingshot-launcher_0.7.6.1+r421+pkg32~ubuntu0.3.1_amd64.deb ...
    Unpacking slingshot-launcher (0.7.6.1+r421+pkg32~ubuntu0.3.1) ...
    Selecting previously unselected package contractor.
    Preparing to unpack .../contractor_0.3.1~r136+pkg22~ubuntu0.3.1_amd64.deb ...
    Unpacking contractor (0.3.1~r136+pkg22~ubuntu0.3.1) ...
    Selecting previously unselected package apport-hooks-elementary.
    Preparing to unpack .../apport-hooks-elementary_0.1-0~35~saucy1_all.deb ...
    Unpacking apport-hooks-elementary (0.1-0~35~saucy1) ...
    Processing triggers for hicolor-icon-theme (0.13-1) ...
    Processing triggers for libglib2.0-0:amd64 (2.40.0-2) ...
    Processing triggers for man-db (2.6.7.1-1) ...
    Setting up libgranite-common (0.3.0~r732+pkg64~ubuntu0.3.1) ...
    Setting up libgranite2:amd64 (0.3.0~r732+pkg64~ubuntu0.3.1) ...
    Setting up slingshot-launcher (0.7.6.1+r421+pkg32~ubuntu0.3.1) ...
    Setting up contractor (0.3.1~r136+pkg22~ubuntu0.3.1) ...
    Setting up apport-hooks-elementary (0.1-0~35~saucy1) ...
    Processing triggers for libc-bin (2.19-0ubuntu6) ...

    I would rather that not show up. I only want to see the download rate, not all that other stuff. How do I do this? EDIT: I would also like that output to be stored somewhere else in case something goes wrong, or for it to be expandable in the terminal.

    Read the article
