Search Results

Search found 6001 results on 241 pages for 'requires'.


  • Isometric Movement in Javascript In the DOM

    - by deep
    I am creating a game using Javascript. I am not using the HTML5 Canvas element. The game requires both side-view controls and isometric controls, hence the movementMode variable. I have got the specific angles, but I am stuck on one aspect of this. https://chillibyte.makes.org/thimble/movement

        function draw() {
          if (keyPressed) {
            if (whichKey == keys.left) { move(-1, 0) }
            if (whichKey == keys.right) { move(1, 0) }
            if (whichKey == keys.up) { move(0, -1) }
            if (whichKey == keys.down) { move(0, 1) }
          }
        }

    This gives normal up, down, left, and right. I want to refactor this so that I can plug two variables into the move() function, which will give the movement wanted. Now for the trig:

          /|
         / |
        /  | y
       /   |
      /a___|
         x

    Take this right-angled triangle. Given that x is 1, y must be equal to tan(a). That seems right. However, when I do Math.tan(45), I get a number similar to 1.601. Why? To sum up: I need a function that converts an angle to a value telling me the number of pixels I need to go up by if I only go across 1. Is it Math.tan that I want, or is it something else?
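
    To make the trig concrete: Math.tan expects its argument in radians, not degrees, which is why Math.tan(45) does not return 1. A minimal sketch of the conversion in plain JavaScript:

        // Math.tan works in radians, so convert degrees first.
        function riseForAngle(degrees) {
          return Math.tan(degrees * Math.PI / 180); // pixels up per 1 pixel across
        }
        riseForAngle(45); // ~1, matching the triangle above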

    Read the article

  • DIA2012

    - by Chris Kawalek
    If you've read this blog before, you probably know that Oracle desktop virtualization is used to demonstrate Oracle Applications at many different trade shows. This week, the Oracle desktop team is at DIA2012 in Philadelphia, PA. The DIA conference is a large event, hosting about 7,000 professionals in the pharmaceutical, biotechnology, and medical device fields. Healthcare and associated fields are leveraging desktop virtualization because the model is a natural fit for their high security requirements. Keeping all the data on the server, and not distributing it on laptops or PCs that could be stolen, makes a lot of sense when you're talking about patient records and other sensitive information. We're proud to be supporting the Oracle Health Sciences team at DIA2012 by hosting all of the Oracle healthcare-related demos on a central server, and providing simple, smart card-based access using our Sun Ray Clients. And remember that you're not limited to using just Sun Ray Clients--you can also use the Oracle Virtual Desktop Client and freely move your session between your iPad, your Windows or Linux PC, your Mac, and Sun Ray Clients. It's a truly mobile solution for an industry that requires mobile, secure access in order to remain compliant. Here are some pics from the show: We also have an informative PDF on Oracle desktop virtualization and Oracle healthcare that you can have a look at. (Many thanks to Adam Workman for the pics!) -Chris For more information, please go to the Oracle Virtualization web page, or follow us at: Twitter, Facebook, YouTube, Newsletter

    Read the article

  • What is the reason for section 1 of LGPL and what is the implication for section 9?

    - by Roland Schulz
    Why was section 1 added to LGPLv3? My understanding of sections 3 & 4 is that one can convey the combined work under any license and with no requirements from GPLv3 (besides those explicitly stated as requirements in LGPLv3 sections 3 & 4). Given that, why is section 1 necessary? Wouldn't sections 3 & 4 by themselves already imply what section 1 explicitly states? I assume that I'm missing something and section 1 isn't redundant. Assuming that, does this have implications for other sections in GPLv3? E.g. does conveying a covered work under sections 3 & 4 fall under the patent clause of section 10 of GPLv3? Why does section 1 not also state an exception for section 10? Put another way: is the Eigen FAQ correct in stating that "LGPL requires [for header only libraries] pretty much the same as the 2-clause BSD license"? Is it true that for conveying object files including material from LGPLv3 headers no GPLv3 patent clauses apply?

    Read the article

  • Should mock objects for tests be created at a high or low level

    - by Danack
    When creating unit tests, what is the best way to create mock objects that provide data to other objects? Should they be created at a 'high level', intercepting calls as soon as possible, or should they be created at a 'low level', so that as much of the real code as possible is still called? E.g. I'm writing a test for some code that requires a NoteMapper object that allows Notes to be loaded from the DB.

        class NoteMapper {
          function getNote($sqlQueryFactory, $noteID) {
            // Create an SQL query from $sqlQueryFactory
            // Run that SQL
            // if null
            //   return null
            // else
            //   return new Note($dataFromSQLQuery)
          }
        }

    I could mock this object at a high level by creating a mock NoteMapper object, so that there are no calls to the SQL at all, e.g.

        class MockNoteMapper {
          function getNote($sqlQueryFactory, $noteID) {
            // $mockData = {'Test Note title', "Test note text"}
            // return new Note($mockData);
          }
        }

    Or I could do it at a very low level, by creating a MockSQLQueryFactory that, instead of actually querying the database, just provides mock data back, and passing that to the current NoteMapper object. It seems that creating mocks at a high level would be easier in the short term, but that in the long term doing it at a low level would be more powerful and possibly allow more automation of tests, e.g. by recording data in and out of a DB and then replaying that data for tests. Is there a recommended way of creating mocks? Are there any hard and fast rules about which is better, or should they both be used where appropriate?
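
    As an illustration of the low-level option, here is a minimal sketch in JavaScript (the names are hypothetical): only the SQL query factory is faked, so the real mapper logic still runs against canned rows.

        // Fake only the lowest layer; the mapper under test stays real.
        const mockSqlQueryFactory = {
          runQuery: (sql) => [{ title: 'Test Note title', text: 'Test note text' }]
        };
        // noteMapper.getNote(mockSqlQueryFactory, 1) would now exercise the real
        // mapping logic without ever touching a database.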

    Read the article

  • Why is Maven so slow compared to automake?

    - by ???'Lenik
    I have a Maven project consisting of around 100 modules. I have reasons to decompose the project into so many modules, and I don't think I should merge them in order to speed up the build process. I have looked at a lot of projects by other people, e.g. the Maven project itself, Apache Archiva, and the Hudson project; they all consist of a lot of modules, nearly 100, more or less. The problem is that building them all takes so much time: 3 hours for the first build (this is acceptable because there are a lot of artifacts to download), and 15 minutes for every build after that (this is not acceptable). For automake, things are similar: the first time, you need to configure the project to prepare the magical config.h file, which is far more complex than what Maven does, but it's still fast, maybe 10 seconds on my Debian box. After that, make install takes maybe 10 minutes for the first build. However, once everything is prepared and the .o object files are generated, they don't have to be rebuilt at all for the second build. (In Maven, everything rebuilds every time.) I really wonder how people working on Maven projects can bear this long a time for each build. I just can't sit down calmly during a Maven build; it takes too long, really.

    Read the article

  • Breadcrumbs in a modern web application: do they make sense? [on hold]

    - by Xtreme Biker
    I'm currently beginning the development of a new web application. The whole web application is going to be bookmarkable, with all the pages accessible via GET requests and URL parameters. Having said that, let's suppose I've got three entities in my application: Customer, Team and City. Each Customer and Team belongs to a City, and I've got a city-detail page which displays the detail for a concrete city. So the following navigation cases are possible:

        Customers - Customer detail (id=2) - City detail (id=3)
        Football teams - Team detail (id=5) - City detail (id=3)
        Cities - City detail (id=3)

    There are three possible ways of ending up in a city-detail view. My question is: does it make sense to implement a breadcrumb to show such a history, given that it is already available in the browser itself? Would it be more appropriate to show a breadcrumb for the last case, no matter where we're coming from (a hierarchical breadcrumb)? That's what Jakob Nielsen points out here: "Offering users a Hansel-and-Gretel-style history trail is basically useless, because it simply duplicates functionality offered by the Back button, which is the Web’s second-most-used feature. A history trail can also be confusing: users often wander in circles or go to the wrong site sections. Having each point in a confused progression at the top of the current page doesn’t offer much help. Finally, a history trail is useless for users who arrive directly at a page deep within the site." Also, even if the history trail seems the most natural way to implement it, it requires extra effort to keep the whole trail, HTTP being a stateless medium.
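
    One way to see the difference in implementation cost, as a minimal JavaScript sketch (names hypothetical): a hierarchical breadcrumb is a pure function of the current page, so nothing needs to be tracked across stateless HTTP requests, whereas a history trail would require session state.

        // Hierarchical breadcrumb: derived from where the page sits in the site
        // structure, not from how the user got there.
        const parents = { city: ['Cities'], customer: ['Customers'], team: ['Football teams'] };
        function breadcrumbFor(entity, id) {
          return [...parents[entity], entity + ' detail (id=' + id + ')'];
        }
        breadcrumbFor('city', 3); // ['Cities', 'city detail (id=3)']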

    Read the article

  • Splitting a big request into multiple small ajax requests

    - by Ionut
    I am unsure about the scalability of the following model. I have no experience at all with large systems and big numbers of requests, but I'm trying to build some features with scalability in mind. In my scenario there is a user page which contains data for: the user's details (name, location, workplace...), the user's activity (blog posts, comments...), and user statistics (rating, number of friends...). In order to show all this on the same page, a single request would need at least 3 different database queries on the back-end. In some cases, I imagine those queries could take quite a while, and the user experience would suffer while waiting for the whole response. This is why I decided to run only step 1 (the user's details) as a normal request. After the response is received, two ajax requests are sent for steps 2 and 3. When those responses are received, I just place the data in the destined wrappers. To me at least this makes more sense. However, there are now 3 requests instead of one for every user page view. Will this affect the system in the long term? I'm assuming that this kind of approach requires more resources, but is this trade of UX for resources a good deal, or should I stick to one plain big request?
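
    A minimal sketch of the approach described above, with hypothetical endpoints and render helpers: the page is rendered from the first response, then the two remaining wrappers are filled from parallel ajax requests.

        const userId = 42; // hypothetical id from the initial page load
        Promise.all([
          fetch('/users/' + userId + '/activity').then(r => r.json()),
          fetch('/users/' + userId + '/stats').then(r => r.json())
        ]).then(([activity, stats]) => {
          renderActivity(activity); // place the data in the destined wrappers
          renderStats(stats);       // (hypothetical render helpers)
        });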

    Read the article

  • SQL SERVER – Get Schema Name from Object ID using OBJECT_SCHEMA_NAME

    - by pinaldave
    Sometimes a simple solution has an even simpler alternative, but we often do not practice it because we do not see the value in it. Well, today's blog post is also about something which I have not seen practiced much in code. We are so comfortable with the alternative usage that we do not feel like switching how we query the data. I was going over forums and I noticed that in one place a user had used the following code to get the schema name from an Object ID.

        USE AdventureWorks2012
        GO
        SELECT s.name AS SchemaName, t.name AS TableName, s.schema_id, t.OBJECT_ID
        FROM sys.tables t
        INNER JOIN sys.schemas s ON s.schema_id = t.schema_id
        WHERE t.name = OBJECT_NAME(46623209)
        GO

    Before I continue, let me say I do not see anything wrong with this script. It is just fine and one of the ways to get the schema name from an Object ID. However, I have been using the function OBJECT_SCHEMA_NAME to get the schema name. If I had to write the same code from the beginning, I would have written it as follows.

        SELECT OBJECT_SCHEMA_NAME(46623209) AS SchemaName, t.name AS TableName, t.schema_id, t.OBJECT_ID
        FROM sys.tables t
        WHERE t.name = OBJECT_NAME(46623209)
        GO

    Now, both of the above queries give you the exact same result. If you remove the WHERE condition, they will give you information for all the tables of the database. Now the question is which one is better. Honestly, it is not about one being better than the other; use the one which you prefer. I prefer the second one as it requires less typing. Let me ask you the same question: which method do you use to get the schema name, and why? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Open source license with backlink requirement

    - by KajMagnus
    I'm developing a Javascript library, and I'm thinking about releasing it under an open source license (e.g. GPL, BSD, MIT), but one that requires websites that use the software to link back to my website. Do you know about any such licenses? And how have they formulated the attribution part of the license text? Do you think this BSD-style clause would do what you think I want? (I suppose it doesn't :-)) [...] 3. Each website that redistributes this work must include a visible rel=follow link to my-website.example.com, reachable via rel=follow links from each page where the software is being redistributed. (For example, you could have a link back to your homepage, and from your homepage to an About-Us section, which could link to a Credits section.) I realize that some companies wouldn't want to use the library because of legal issues with interpreting non-standard licenses (have a look at this answer: http://programmers.stackexchange.com/a/156859/54906). After half a year, or perhaps some years, I'd change the license to plain GPL + MIT.

    Read the article

  • SQL SERVER – Excel Losing Decimal Values When Value Pasted from SSMS ResultSet

    - by pinaldave
    No! It is not a SQL Server issue or an SSMS issue. It is how things work, and there is a simple trick to resolve it. It is very common that when users copy a resultset to Excel, the floating point or decimal values are lost. The solution is very simple and requires a small adjustment in Excel. By default Excel is very smart: when it detects that the value being pasted is numeric, it changes the column format to accommodate that. Now, as zeros trailing the digits after the decimal point have no value, Excel automatically hides them. To prevent this from happening, the user has to convert the columns to Text format so Excel can preserve the formatting. Here is how you can do it. Select the corner between A and 1 and right-click on it. It will select the complete spreadsheet. If you want to change the format of a single column, you can select an individual column the same way. In the menu, click on Format Cells… It will bring up the following menu. Here the selected columns will default to General; change that to Text. This will change the format of all the cells to Text. Now once again paste the values from SSMS into Excel. This time it will preserve the decimal values from SSMS. Solved! Do you know any other trick to preserve the decimal values? Please leave a comment. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: Excel

    Read the article

  • Difficulty / Process for mobile 2D multiplayer RPG game

    - by user2827214
    I'm looking to develop a 2D multiplayer RPG game for Android, with a bird's eye view similar to that of Zelda/Pokemon. The game is very simple in all ways, since my intent is for thousands of players to occupy the same world, which I imagine requires good performance. However, I am unsure about the performance requirements of two properties:

        - The tile map that is used as a background is dynamic (interactive). For example, a player steps in the water, and the water turns black. Every tile in the game does this.
        - The tile map is the same object used for all players, but it is displayed differently on each user's mobile device, even though the players exist in the same world. For example, the water that turned black is displayed as red on all other players' screens.

    I have knowledge of Java, but almost none regarding game dev tools. Is there a best process for these requirements? Should I develop in pure Java, or use some tool like Slick2D etc.? How performance-intensive are these properties, if they are even possible? Edit: There are no collisions in the game and no difficult animations; I am imagining simply changing the colors of tiles as in the examples above, and a client-server architecture.
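
    One way to picture those two requirements, as a minimal JavaScript sketch (all names hypothetical): the tile state is shared and changes identically for everyone, and only a per-viewer palette differs at display time.

        const tile = { state: 'water' };            // shared world state
        function step(t) { t.state = 'stepped'; }   // the same change for all players

        const palettes = {
          self:   { water: 'blue', stepped: 'black' }, // the player who stepped
          others: { water: 'blue', stepped: 'red' }    // everyone else
        };
        function colorFor(t, viewer) { return palettes[viewer][t.state]; }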

    Read the article

  • Queries regarding Geometry Shaders

    - by maverick9888
    I am dealing with geometry shaders using the GL_ARB_geometry_shader4 extension. My code goes like:

        GLfloat vertices[] = {
          0.5,0.25,1.0,  0.5,0.75,1.0,
          -0.5,0.75,1.0, -0.5,0.25,1.0,
          0.6,0.35,1.0,  0.6,0.85,1.0,
          -0.6,0.85,1.0, -0.6,0.35,1.0
        };
        glProgramParameteriEXT(psId, GL_GEOMETRY_INPUT_TYPE_EXT, GL_TRIANGLES);
        glProgramParameteriEXT(psId, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP);
        glLinkProgram(psId);
        glBindAttribLocation(psId, 0, "Position");
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, 0, 0, vertices);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    My vertex shader is:

        #version 150
        in vec3 Position;
        void main() {
          gl_Position = vec4(Position, 1.0);
        }

    My geometry shader is:

        #version 150
        #extension GL_EXT_geometry_shader4 : enable
        in vec4 pos[3];
        void main() {
          gl_Position = pos[0];
          EmitVertex();
          gl_Position = pos[1];
          EmitVertex();
          gl_Position = pos[2];
          EmitVertex();
          gl_Position = pos[0] + vec4(0.3, 0.0, 0.0, 0.0);
          EmitVertex();
          EndPrimitive();
        }

    Nothing is rendered with this code. What exactly should the mode in glDrawArrays() be? How does the GL_GEOMETRY_OUTPUT_TYPE_EXT parameter affect glDrawArrays()? What I expect is that 3 vertices will be passed on to the geometry shader, and using those we construct a primitive of size 4 (assuming GL_TRIANGLE_STRIP requires 4 vertices). Can somebody please throw some light on this?

    Read the article

  • Is there a LOGO interpreter that actually has a turtle?

    - by Tim Post
    This is not a repeat of the now infamous "How do I move the turtle in LOGO?" Recently, I had the following conversation with my five-year-old daughter:

        Daughter: Daddy, do you write programs?
        Me: Yes!
        Daughter: Daddy, what's a program?
        Me: A program is a set of instructions that a computer follows.
        Daughter: Daddy, can I write a program too?
        Me: Sure!

    This got me scrambling to think of a very basic language that a five-year-old could get some satisfaction from mastering rather quickly. I'm ashamed to admit that the first thing that came to mind was this:

        10 INPUT "Tell me a secret" A$
        20 PRINT "Wow really? :" A$
        30 GOTO 10

    That isn't going to hold a five-year-old's attention for very long, and it requires too much of a lecture. However, moving a turtle around and drawing neat pictures might just work. Sadly, my search for a LOGO interpreter yielded nothing but ad-ridden sites, flight simulators and a whole bunch of other stuff that I really don't want. I'm hoping to find a cross-platform (Java / Python) LOGO interpreter (dare I call it simulator?) with the following features:

        - Can save / replay commands (stored programs)
        - Has an actual turtle
        - Sound effects are a plus

    Have you stumbled across something like this? If so, can you provide a link? I hate to ask a 'shopping' sort of question, but it seemed much better than "Is LOGO appropriate for a five year old?"

    Read the article

  • Securing credentials passed to web service

    - by Greg Smith
    I'm attempting to design a single sign-on system for use in a distributed architecture. Specifically, I must provide a way for a client website (that is, a website on a different domain/server/network) to allow users to register accounts on my central system. So, when the user takes an action on a client website, and that action is deemed to require an account, the client will produce a page (on their site/domain) where the user can register for a new account by providing an email and password. The client must then send this information to a web service, which will register the account and return some session token type value. The client will need to hash the password before sending it across the wire, and the web service will require HTTPS, but this doesn't feel safe enough, and I need some advice on how I can implement this in the most secure way possible. A few other bits of relevant information:

        - Ideally we'd prefer not to share any code with the client.
        - We've considered just redirecting the user to a secure page on the same server as the web service, but this is likely to be rejected for non-technical reasons.
        - We almost certainly need to salt the password before hashing and passing it over, but that requires the client to either a) generate the salt and communicate it to us, or b) come and ask us for the salt - both feel dirty.

    Any help or advice is most appreciated.
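
    For concreteness, here is a minimal browser-JavaScript sketch of the client-side step described above (the endpoint and field names are hypothetical, and this supplements HTTPS and proper server-side salted hashing rather than replacing either):

        async function register(email, password) {
          // Hash the password before it leaves the page.
          const bytes = new TextEncoder().encode(password);
          const digest = await crypto.subtle.digest('SHA-256', bytes);
          const hash = Array.from(new Uint8Array(digest))
            .map(b => b.toString(16).padStart(2, '0')).join('');
          // Send it to the central registration web service over HTTPS.
          const res = await fetch('https://sso.example.com/register', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ email: email, passwordHash: hash })
          });
          return res.json(); // some session token type value
        }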

    Read the article

  • Javascript form validation - what's lacking?

    - by box9
    I've tried out two javascript form validation frameworks - jQuery validation, and jQuery Tools validator - and I've found both of them lacking. jQuery validation lacks the clear separation between the concepts of "validating" and "displaying validation errors", and is highly inflexible when it comes to displaying dynamic error messages. jQuery Tools on the other hand lacks decent remote validation support (to check if a username exists for example). Even though jQuery validation supports remote validation, the built-in method requires the server to respond in a particular format. In both cases, any sort of asynchronous validation is a pain, as is defining rules for dependencies between multiple inputs. I'm thinking of rolling my own framework to address these shortcomings, but first I want to ask... have others experienced similar annoyances with javascript validation? What did you end up doing? What are some common validation requirements you've had which really should be catered for? And are there other, much better frameworks out there which I've missed? I'm looking primarily at jQuery-based frameworks, though well-implemented frameworks built on other libraries can still provide some useful ideas.
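
    As a minimal sketch of the kind of asynchronous remote rule being described, in plain JavaScript (the endpoint is hypothetical): the rule itself decides how to interpret the server's response, rather than the server having to answer in a framework-mandated format.

        async function usernameAvailable(username) {
          const res = await fetch('/api/username-exists?u=' + encodeURIComponent(username));
          const body = await res.json(); // the response format is ours to choose
          return !body.exists;           // true means the field is valid
        }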

    Read the article

  • UPDATE MANAGER UNABLE TO UPDATE

    - by muguro
    "Requires installation of untrusted packages. The action would require the installation of packages from not authenticated sources." I get this error every time I try updating. The system shows that it has 466 updates but fails after clicking update. More details list these packages: accountsservice apparmor apport apport-gtk apt apt-transport-https apt-utils aptdaemon aptdaemon-data at-spi2-core bamfdaemon base-files bcmwl-kernel-source bind9-host compiz compiz-core compiz-gnome compiz-plugins-default cron cups cups-bsd cups-client cups-common cups-filters cups-ppdc dbus dbus-x11 dconf-gsettings-backend dconf-service desktop-file-utils dmsetup dnsutils empathy empathy-common eog evince evince-common evolution-data-server evolution-data-server-common firefox firefox-globalmenu firefox-gnome-support firefox-locale-en fontconfig fontconfig-config fonts-liberation fonts-opensymbol foomatic-filters gcalctool gdb ghostscript ghostscript-cups ghostscript-x ginn gir1.2-atspi-2.0 gir1.2-dbusmenu-glib-0.4 gir1.2-dbusmenu-gtk-0.4 gir1.2-gst-plugins-base-0.10 gir1.2-gtk-3.0 gir1.2-gtksource-3.0 gir1.2-gudev-1.0 gir1.2-javascriptcoregtk-3.0 gir1.2-launchpad-integration-3.0 gir1.2-pango-1.0 gir1.2-rb-3.0 gir1.2-totem-1.0 gir1.2-ubuntuoneui-3.0 gir1.2-unity-5.0 gir1.2-webkit-3.0 glib-networking glib-networking-common glib-networking-services gnome-accessibility-themes gnome-control-center gnome-control-center-data gnome-desktop3-data gnome-games-data gnome-icon-theme gnome-media gnome-orca gnome-settings-daemon gnome-sudoku gnomine gnupg google-talkplugin gpgv grub-common grub-pc grub-pc-bin grub2-common gstreamer0.10-alsa gstreamer0.10-plugins-base gstreamer0.10-plugins-base-apps gstreamer0.10-x gvfs gvfs-backends gvfs-bin gvfs-common gvfs-daemons gvfs-fuse gvfs-libs gwibber gwibber-service gwibber-service-facebook gwibber-service-identica gwibber-service-twitter hdparm hplip hplip-data indicator-sound initscripts isc-dhcp-client isc-dhcp-common jockey-common jockey-gtk krb5-locales landscape-client-ui-install language-pack-en language-pack-en-base language-pack-gnome-en language-pack-gnome-en-base launchpad-integration libaccountsservice0 libapt-inst1.4 libapt-pkg4.12 libart-2.0-2 libasound2 libatspi2.0-0 libbamf0 libbamf3-0 libbind9-80 libc-bin libc-dev-bin libc6 libc6-dev libcairo-gobject2 libcairo2

    Read the article

  • invite: WEBLOGIC 12c HANDS-ON WORKSHOP IN PARIS

    - by mseika
    Oracle WebLogic 12c Innovation Workshop, April 24-26, 2012: Colombes, France

    Workshop Description: Oracle Fusion Middleware is the #1 application infrastructure foundation, and WebLogic Server is the #1 application server across conventional and cloud environments. It enables enterprises to create and run agile and intelligent business applications and to maximize IT efficiency by exploiting modern hardware and software architectures. Do you want to learn more about the innovative features, capabilities and roadmap of WebLogic Server 12c? Then this technical hands-on workshop is for you.

    Agenda Outline: WebLogic introduction, WebLogic topology, WebLogic clustering and high availability, Coherence, troubleshooting, enterprise messaging, development tools & productivity, performance, Exalogic introduction, Enterprise Manager Grid Control, Oracle Public Cloud, Oracle Traffic Director.

    Lab Outline: WebLogic installation & configuration, WebLogic clustering & HA, Coherence use cases & monitoring, WebLogic Active GridLink for RAC integration, messaging (JMS).

    Audience: WebLogic consultants & architects.

    Prerequisites: Basic knowledge of Java and Java EE, and an understanding of the application server concept; basic knowledge of older releases of WebLogic Server would be beneficial.

    Equipment Requirements: This workshop requires attendees to provide their own laptops. Attendee laptops must meet the following minimum hardware/software requirements: minimum 4GB RAM and 30GB free disk space, Internet Explorer 7 or Firefox 3 or higher, and Oracle VM VirtualBox 4.1.8 downloaded and installed.

    Agenda: This workshop is 3 days. 8:30 am: sign-in and technical set-up. 9:00 am: workshop starts. 5:00 pm: workshop ends. This workshop is free but space is limited. Register now! Register Here!

    Read the article

  • Oracle VM server for SPARC 2.2 on S11

    - by Liam Merwick
    Oracle VM Server for SPARC 2.2 has been released for a little while now. The https://blogs.oracle.com/virtualization blog has an overview of all the 2.2 features. Initially, what was released was the SVR4 package for Solaris 10 (which is unbundled and wasn't constrained by any external schedule). On Solaris 11, the 'ldomsmanager' package is built into Solaris (and therefore doesn't need to be downloaded separately), so it is delivered as part of an S11 Support Repository Update (SRU). Some of the features in 2.2 are specific to S11 (SR-IOV and the ability to live migrate between machines with different CPU types), and so there have been many requests to know when the S11 bits are coming. Solaris 11 SRU8.5 was released on Friday and includes Oracle VM Server for SPARC 2.2, so if you're already running an S11 SRU all you need to do is a 'pkg update' to get the 2.2 bits. If you're still running the original S11 and your 'pkg publisher' output shows the /release repository, then you'll need to sign up for the /support repo by getting the appropriate keys and certificates to access the repository (this requires a support contract). The 2.2 Admin Guide documents how to do this upgrade on S11. Two S11 articles which have some useful details on upgrading (not just 'ldomsmanager') via the support repositories are: How to Update Oracle Solaris 11 Systems From Oracle Support Repositories by Glynn Foster, and Tips for Updating Your Oracle Solaris 11 System from the Oracle Support Repository by Peter Dennis. In particular, if you'd like to stick with the v2.1 release when upgrading to SRU8.5 or greater, see the 'pkg freeze' section of Peter's article.

    Read the article

  • Finally! Entity Framework working in fully disconnected N-tier web app

    - by oazabir
    Entity Framework was supposed to solve the problems of Linq to SQL, which requires endless hacks to make it work in an n-tier world. Not only did Entity Framework solve none of the L2S problems, it made things even more difficult to use and hack for n-tier scenarios. It's somewhere halfway between a fully disconnected ORM and a fully connected ORM like Linq to SQL. Some useful features of Linq to SQL are gone, like automatic deferred loading. If you try to do a simple select with join, insert, update, or delete in a disconnected architecture, you will realize that not only do you need to make fundamental changes from the top layer to the very bottom layer, but you also need endless hacks in basic CRUD operations. I will show you in this article how I have added custom CRUD functions on top of EF's ObjectContext to make it finally work well in a fully disconnected n-tier web application (my open source Web 2.0 AJAX portal, Dropthings), and how I have produced a 100% unit-testable, fully n-tier-compliant data access layer following the repository pattern. http://www.codeproject.com/KB/linq/ef.aspx In .NET 4.0, most of the problems are solved, but not all, so you should read this article even if you are coding in .NET 4.0. Moreover, there's enough insight here to help you troubleshoot EF-related problems. You might think, "Why bother using EF when Linq to SQL is doing well enough for me?" Linq to SQL is not going to get any more innovation from Microsoft. Entity Framework is the future of the persistence layer in the .NET framework. All the innovation is happening in the EF world only, which is frustrating. There's a big jump in EF 4.0, so you should plan to migrate your L2S projects to EF soon.

    Read the article

  • Recommended Method to Watch Amazon Prime using Ubuntu 14.04 LTS

    - by Kurt Sanger
    I realize that Hal is no longer in the Ubuntu Software Center for Ubuntu 14.04 and is only available from a third party at this time. But I would like to know what Ubuntu's plans are for integrating DRM into Linux. Especially with Amazon's integration into the search tool, one would hope that they would make it easier for Amazon Prime customers to watch Instant Videos. Is the repository for getting Hal for 13.10 safe to use? What will it break if I install it onto 14.04? Or do we need to find another OS that has DRM built into it? If Hal is okay to add to the OS using a third-party repo, then why doesn't the Ubuntu Software Center support it too? I imagine that Amazon's contract with the video copyright holders requires that they have some protection on electronically distributed media. I also imagine that getting Amazon to change is much harder than getting a bunch of software engineers to fix Ubuntu. Unless they don't want to. At which point Ubuntu isn't really a complete OS. Very disappointing. In general, the ease of use of Ubuntu, the Software Center, and the large variety of applications was alluring. But breaking DRM wasn't a great idea. Can't wait to see what fails in our next update. Please tell us that there is a plan that is going to work in our future.

    Read the article

  • The effects of Agile Programming can alter the five desirable properties of modeling tools and techniques

    The effects of agile programming can alter the five desirable properties of modeling tools and techniques as documented by Pfleeger. The agile methodology promotes human understanding and communication through the use of short, iterative software development life cycles, which force stakeholders to review the project and adjust it for any requirement changes. Due to the consistent evaluation of a project and its requirements, processes are continually being refined, upgraded, and compared against other alternatives to ensure the best design is delivered to the client. Due to the short, repetitive development cycles, increased time is devoted to process management, because requirements and designs could be constantly changing. This requires additional forecasting, monitoring, and planning for each iteration. Because things can change so rapidly, automated guidance in performing processes must be updated for each iteration, because the environment and the available reusable processes could change. In addition, the original guidance and suggestions for the project also need to be updated to account for these changes. In essence, the automation of process execution is supported by the agile methodology, because during every iteration all processes must be tested and evaluated to ensure process integrity and compliance with the customer's requirements. I do not think the agile approach diminishes modeling; in fact, I think it increases modeling, because before the start of every development cycle the models must be checked for accuracy against the changed requirements. So, in essence, the reduced time spent initially designing the models is regained as the project completes each iteration.

    Read the article

  • Making The EBS Upgrade From 11.5.10 Easier - Part III

    - by Annemarie Provisero
    ADVISOR WEBCAST: Making The EBS Upgrade From 11.5.10 Easier - Part III PRODUCT FAMILY: E-Business Suite July 19, 2011 at 8 am PT, 9 am MT, 11 am ET This one-hour session is recommended for technical users who are responsible for upgrading their E-Business Suite applications from Release 11.5.10 to Release 12.1.x. As you begin your upgrade process, there are a number of tools available to assist you in a successful upgrade. A successful upgrade requires careful planning, correct upgrade processing, detailed testing, and user (re)training prior to upgrade. Over three sessions we will discuss the tools that you can use to assist in your upgrade tasks. These tools are available to you via My Oracle Support and as part of the E-Business Suite product offerings. In this third session, we’ll cover the Best Practices for Using The Upgrade Tools. Additionally, this session includes an extended question and answer period. In the first part of the three-session series, we covered the following topics: Overview of Tools Available for Upgrading Upgrade versus Re-implementing Upgrade Community Upgrade Product Information Center Page Detailed Look at Upgrade Advisor In the second session, we covered the following topics: Recap of Part I Detailed Look at Maintenance Wizard Detailed Look at Patch Wizard A replay of those sessions is available via Note 740964.1, Advisor Webcast Archive. A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session ------------------------------------------------------------------------------------------------------------- The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule.Click here to visit the E-Business Communities in My Oracle Support Note that all links require access to My Oracle Support.

    Read the article

  • One-week release cycle: how do I make this feasible?

    - by Arkaaito
    At my company (3-yr-old web industry startup), we have frequent problems with the product team saying "aaaah this is a crisis patch it now!" (doesn't everybody?) This has an impact on the productivity (and morale) of engineering staff, self included. Management has spent some time thinking about how to reduce the frequency of these same-day requests and has come up with the solution that we are going to have a release every week. (Previously we'd been doing one every two weeks, which usually slipped by a couple of days or so.) There are 13 developers and 6 local / 9 offshore testers; the theory is that only 4 developers (and all testers) will work on even-numbered releases, unless a piece of work comes up that really requires some specific expertise from one of the other devs. Each cycle will contain two days of dev work and two days of QA work (plus 1 day of scoping / triage / ...). My questions are: (a) Does anyone have experience with this length of release cycle? (b) Has anyone heard of this length of release cycle even being attempted? (c) If (a) or (b), how on Earth do you make it work? (Any pitfalls to avoid, etc., are also appreciated.) (d) How can we minimize the damage if this effort fails?

    Read the article

  • TCO Comparison: Oracle Exadata vs IBM P-Series

    - by Javier Puerta
    Cost Comparison for Business Decision-makers: Oracle Exadata Database Machine vs. IBM Power Systems. How to Weigh a Purchase Decision. October 2012. Download the full report here. In this research-based white paper conducted at the request of Oracle, The FactPoint Group compares the cost of ownership of the Oracle Exadata engineered system to a traditional build-your-own (BYO) solution, in this case an IBM Power 770 (P770) with SAN storage. The IBM P770 was chosen because it is IBM's current most popular model, based on FactPoint primary and secondary research and IBM claims, and because at least one of the interviewed customers had specifically migrated from a P770 to Exadata, affording a more specific data point for comparison. This research found that Oracle Exadata:

        - Can be deployed more quickly and easily, requiring 59% fewer man-hours than a traditional IBM Power Systems solution.
        - Delivers dramatically higher performance, typically up to 12X improvement, as described by customers, over their prior solution.
        - Requires 40% fewer systems administrator hours to maintain and operate annually, including quicker support calls because of less finger-pointing and faster service with a single vendor.
        - Will become even easier to operate over time as users become more proficient and organize around the benefits of integrated infrastructure.
        - Supplies a highly available, highly scalable and robust solution with reserve capacity that makes Exadata easier for IT to operate, because IT administrators can manage proactively, not reactively. Overall, Exadata operations and maintenance keep IT administrators from "living on the edge." And it's pre-engineered for long-term growth.

    Finally, compared to IBM Power Systems hardware, Exadata is a bargain from a total cost of ownership perspective: over three years, the IBM hardware running Oracle Database cost 31% more in TCO than Exadata.

    Read the article

  • lxc containers fail to autoboot in 14.04 trusty using 'lxc.start.auto = 1'

    - by user273046
    In trusty 14.04, containers fail to autoboot despite all settings being set as 14.04 requires; they all show as STOPPED. I have correctly configured 2 LXC containers, calypso and encelado. They run perfectly if I run sudo lxc-autostart; then sudo lxc-ls --fancy results in:

        ubuntu@saturn:/etc/init$ sudo lxc-ls --fancy
        NAME      STATE    IPV4           IPV6  AUTOSTART
        calypso   RUNNING  192.168.1.161  -     YES
        encelado  RUNNING  192.168.1.162  -     YES

    The problem is trying to run them at boot. I have this at /var/lib/lxc/calypso/config:

        # Template used to create this container: /usr/share/lxc/templates/lxc-download
        # Parameters passed to the template:
        # For additional config options, please look at lxc.conf(5)

        # Distribution configuration
        lxc.include = /usr/share/lxc/config/ubuntu.common.conf
        lxc.arch = x86_64

        # Container specific configuration
        lxc.rootfs = /var/lib/lxc/calypso/rootfs
        lxc.utsname = calypso

        # Network configuration
        lxc.network.type = veth
        lxc.network.flags = up
        #lxc.network.link = lxcbr0
        lxc.network.link = br0
        lxc.network.hwaddr = 00:16:3e:64:0b:6e

        # IP address assignment
        lxc.network.ipv4 = 192.168.1.161/24
        lxc.network.ipv4.gateway = 192.168.1.1

        # Autostart
        lxc.start.auto = 1
        lxc.start.delay = 5
        lxc.start.order = 100

    and I have LXC_AUTO="false" as required inside /etc/default/lxc:

        LXC_AUTO="false"
        USE_LXC_BRIDGE="false"  # overridden in lxc-net
        [ -f /etc/default/lxc-net ] && . /etc/default/lxc-net
        LXC_SHUTDOWN_TIMEOUT=120

    Any idea why the containers don't start at boot? After a reboot they are always in the STOPPED state:

        ubuntu@saturn:~$ sudo lxc-ls --fancy
        NAME      STATE    IPV4  IPV6  AUTOSTART
        calypso   STOPPED  -     -     YES
        encelado  STOPPED  -     -     YES

    and then they can be started manually again, using sudo lxc-autostart.

    Read the article
