Search Results

Search found 7891 results on 316 pages for 'multi layer perceptron'.

Page 110/316 | < Previous Page | 106 107 108 109 110 111 112 113 114 115 116 117  | Next Page >

  • Oracle VM 3.1.1 build 365 released

    - by wcoekaer
    A few days ago we released a patch update for Oracle VM 3.1.1 (build 365). Oracle VM Manager 3.1.1 build 365 is now available from My Oracle Support as patch ID 14227416. Oracle VM Server 3.1.1 errata updates are, as usual, released on ULN in the ovm3_3.1.1_x86_64_patch channel. Just a reminder: when we publish errata for Oracle VM, the notifications are sent through the oraclevm-errata maillist. You can sign up here. Some of the bugfixes in 3.1.1:
    14054162 - Removes unnecessary locks when creating VNICs in a multi-threaded operation.
    14111234 - Fixes an issue when discovering a virtual machine that has disks in an un-discovered repository or has un-discovered physical disks.
    14054133 - Fixes an "object not found" bug where vdisks are left stale in certain multi-threaded operations.
    14176607 - Fixes an issue where Oracle VM Manager would hang after a restart due to various tasks running jobs in the global context.
    14136410 - Fixes a stale lock issue on multi-threaded servers where an "object not found" error happens in some rare situations.
    14186058 - Fixes an issue where Oracle VM Manager fails to discover or start a server after the server hardware configuration (i.e. BIOS) was modified.
    14198734 - Fixes an issue where HTTP cannot be disabled.
    14065401 - Fixes an Oracle VM Manager UI time-out issue where the default value was not long enough for storage repository creation.
    14163755 - Fixes an issue where, when migrating a virtual machine, the list of target servers (and "other servers") was not ordered by name.
    14163762 - Fixes the size of the "Edit Vlan Group" window so that it displays all information correctly.
    14197783 - Fixes an issue where the navigation tree (servers) was not ordered by name.
    I strongly suggest everyone use this latest build and also update the server to the latest version. Have at it.

    Read the article

  • Check parameters annotated with @Nonnull for null?

    - by David Harkness
    We've begun using FindBugs and annotating our parameters with @Nonnull appropriately, and it works great at pointing out bugs early in the cycle. So far we have continued checking these arguments for null using Guava's checkNotNull, but I would prefer to check for null only at the edges--places where the value can come in without having been checked for null, e.g., a SOAP request.

        // service layer accessible from outside
        public Person createPerson(@CheckForNull String name) {
            return new Person(Preconditions.checkNotNull(name));
        }
        ...
        // internal constructor accessed only by the service layer
        public Person(@Nonnull String name) {
            this.name = Preconditions.checkNotNull(name); // remove this check?
        }

    I understand that @Nonnull does not block null values itself. However, given that FindBugs will point out anywhere a value is transferred from an unmarked field to one marked @Nonnull, can't we depend on it to catch these cases (which it does) without having to check these values for null everywhere they get passed around in the system? Am I naive to want to trust the tool and avoid these verbose checks? Bottom line: while it seems safe to remove the second null check above, is it bad practice? This question is perhaps too similar to "Should one check for null if he does not expect null", but I'm asking specifically in relation to the @Nonnull annotation.

    Read the article

  • Designing a flexible tile-based engine

    - by Vee
    I'm trying to create a flexible tile-based game engine to make all sorts of non-realtime puzzle games, such as Bejeweled, Civilization, Sokoban, and so on. The first approach I had was to have a 2D array of Tile objects, and then have classes inheriting from Tile that represented the game objects. Unfortunately that way I couldn't stack more game elements on the same Tile without having a 3D array. Then I did something different: I still had the 2D array of Tile objects, but every Tile object contained a List where I put the different entities. This worked fine until 20 minutes ago, when I realized that it's too expensive to do many things. Look at this example: I have a Wall entity. Every update I have to check the 8 adjacent Tiles, then check all of the entities in each Tile's List, check if any of those entities is a Wall, and then finally draw the correct sprite. (This is done to draw walls that are next to each other seamlessly.) The only solution I see now is having a 3D array, with many layers, that could suit every situation. But that way I can't stack two entities that share the same layer on the same tile. Whenever I want to do that I have to create a new layer. Is there a better solution? What would you do?
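
    One way to keep the per-tile entity list but avoid paying the neighbour scan on every update is to cache the derived state and recompute it only when the grid actually changes. Below is a minimal, hypothetical Java sketch of that idea (the post does not name a language, and Tile, Entity, Wall and TileMap here are stand-ins, not the poster's actual classes):

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical sketch: a tile keeps a list of entities, so several game
        // elements can share a cell without a fixed number of layers.
        class Tile {
            final List<Entity> entities = new ArrayList<>();

            boolean contains(Class<? extends Entity> type) {
                for (Entity e : entities) {
                    if (type.isInstance(e)) return true;
                }
                return false;
            }
        }

        abstract class Entity { }

        class Wall extends Entity {
            // Cached 8-neighbour bitmask used to pick the sprite; recomputed only
            // when a wall is added or removed nearby, not every frame.
            int adjacencyMask;
        }

        class TileMap {
            private final Tile[][] tiles;

            TileMap(int width, int height) {
                tiles = new Tile[width][height];
                for (int x = 0; x < width; x++)
                    for (int y = 0; y < height; y++)
                        tiles[x][y] = new Tile();
            }

            Tile get(int x, int y) {
                if (x < 0 || y < 0 || x >= tiles.length || y >= tiles[0].length) return null;
                return tiles[x][y];
            }

            // Adding a wall only dirties the 3x3 neighbourhood around it
            // (coordinates assumed to be in bounds for this sketch).
            void placeWall(int x, int y) {
                get(x, y).entities.add(new Wall());
                refreshWallMasks(x, y);
            }

            private void refreshWallMasks(int cx, int cy) {
                for (int x = cx - 1; x <= cx + 1; x++) {
                    for (int y = cy - 1; y <= cy + 1; y++) {
                        Tile t = get(x, y);
                        if (t == null) continue;
                        for (Entity e : t.entities) {
                            if (e instanceof Wall) {
                                ((Wall) e).adjacencyMask = computeMask(x, y);
                            }
                        }
                    }
                }
            }

            // Bit i is set when the i-th of the 8 neighbours also holds a wall.
            private int computeMask(int x, int y) {
                int[][] offsets = {{-1,-1},{0,-1},{1,-1},{-1,0},{1,0},{-1,1},{0,1},{1,1}};
                int mask = 0;
                for (int i = 0; i < offsets.length; i++) {
                    Tile n = get(x + offsets[i][0], y + offsets[i][1]);
                    if (n != null && n.contains(Wall.class)) mask |= 1 << i;
                }
                return mask;
            }
        }

    The draw pass then just reads adjacencyMask to pick a sprite; the 8-neighbour scan only runs when a wall is added or removed, not on every update.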

    Read the article

  • Best of OTN - Week of November 4th

    - by CassandraClark-OTN
    It was another exciting week at OTN! Lots of GREAT content to share. If you had a favorite that you don't see listed, let us know in the comment section below.
    Java Community
    JavaOne Sessions Online - We've posted 60 of the JavaOne sessions online, and we'll be rolling out more sessions every few weeks. This content is free, courtesy of Oracle.
    NetBeans 7.4 Released - NetBeans 7.4 features HTML5 integration for Java EE and PHP development; support for Apache Cordova and JDK 8 preview features; enhancements to Maven, C/C++, and more.
    vJUG: Worldwide Virtual JUG Created - London Java Community leader and technical evangelist Simon Maple has created a Meetup called vJUG, with the aim of connecting Java developers in the virtual world.
    Tori Wieldt, Java Community Manager
    Friday Funny: This is what REALLY happens when you give someone your business card ow.ly/q6aKU
    Architect Community
    Don't forget to register for the free Virtual Developer Day - Harnessing the Power of Oracle WebLogic and Oracle Coherence, December 3rd, 2013 - two great tracks, Design & Develop and Build, Deploy & Manage. Why wait, register now!
    Multi-Factor Authentication in Oracle WebLogic - Shailesh K. Mishra - Really good technical article on using multi-factor authentication to protect web applications deployed on Oracle WebLogic.
    Coherence*Web: Sharing an httpSession Among Applications in Different Oracle WebLogic Clusters - Jordi Villena - Understanding when and how to select session attributes that must be stored in the local storage of the Oracle WebLogic instances and which should be leveraged to an Oracle Coherence distributed cache.
    Bob Rhubart, Architect Community Manager
    Friday Funny - "Be yourself, everyone else is already taken." Oscar Wilde (October 16, 1854 - November 30, 1900), Irish writer and poet.

    Read the article

  • Proper updating of GeoClipMaps

    - by thr
    I have been working on an implementation of GPU-based geo clip maps, but there is a section of the GPU Gems 2 article that I just can't seem to understand, specifically this paragraph and more precisely the bolded part: The choice of grid size n = 2^k - 1 has the further advantage that the finer level is never exactly centered with respect to its parent next-coarser level. In other words, it is always offset by 1 grid unit either left or right, as well as either top or bottom (see Figure 2-4), depending on the position of the viewpoint. In fact, it is necessary to allow a finer level to shift while its next-coarser level stays fixed, and therefore the finer level must sometimes be off-center with respect to the next-coarser level. An alternative choice of grid size, such as n = 2^k - 3, would provide the possibility for exact centering. Let's take an example image from the article: My "understanding" of the way the clip maps were updated was that you floor the position of the viewpoint to an int and thus get the center vertex point; if this is not the same as the previous center point, you update the entire map. Now, this obviously is not the case - but what I am failing to understand is this: if you look at the image above, if the viewpoint were to move one unit to the right, then the inner ring (the one just around the view point + white center square) would end up getting a 1 unit space on both the left and right side of itself. But there is nothing in the paper that deals with this; what I mean is that it would end up looking like this (excuse my crummy cut-and-paste editing of the above image): This is obviously not a valid state of the clip map. So, would the solution be that a clip ring (layer) can only move in increments of the ring/layer it's contained within? Wouldn't this end up being very restrictive? I feel like I am missing some crucial understanding of parts of the algorithm, but I have been over both this paper and the original paper from 2004 and I just can't see what I am not getting.
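
    For reference, a common way implementations handle this (a hedged sketch of the usual convention, not necessarily exactly what the GPU Gems 2 sample does) is to snap each level's origin to a multiple of two of its own grid units, i.e. one grid unit of the next-coarser level, so a ring only ever moves in steps that keep it aligned with its parent:

        final class ClipmapMath {
            // Hypothetical helper, not taken from the paper: world-space origin of
            // clipmap level `level` (0 = finest) for an n x n grid, snapped so the
            // level only moves in steps of two of its own grid units.
            static double[] levelOrigin(double viewerX, double viewerZ,
                                        int level, double baseSpacing, int n) {
                double g = baseSpacing * (1 << level); // grid spacing at this level
                double snap = 2.0 * g;                 // one unit of the next-coarser level
                double half = (n - 1) * 0.5 * g;       // half extent of the level
                double ox = Math.floor((viewerX - half) / snap) * snap;
                double oz = Math.floor((viewerZ - half) / snap) * snap;
                return new double[] { ox, oz };
            }
        }

    Under that rule each level does move only in increments tied to its place in the hierarchy, but because the steps are just two units of its own fine resolution it is not nearly as restrictive as it sounds.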

    Read the article

  • Oracle R Distribution 2.13.2 Update Available

    - by Sherry LaMonica
    Oracle has released an update to the Oracle R Distribution, an Oracle-supported distribution of open source R. Oracle R Distribution 2.13.2 now contains the ability to dynamically link the following libraries on both Windows and Linux:
    The Intel Math Kernel Library (MKL) on Intel chips
    The AMD Core Math Library (ACML) on AMD chips
    To take advantage of the performance enhancements provided by Intel MKL or AMD ACML in Oracle R Distribution, simply add the MKL or ACML shared library directory to the LD_LIBRARY_PATH system environment variable. This automatically enables MKL or ACML to make use of all available processors, vastly speeding up linear algebra computations and eliminating the need to recompile R. Even on a single core, the optimized algorithms in the Intel MKL libraries are faster than using R's standard BLAS library. Open source R is linked to Netlib's BLAS libraries, but they are not multi-threaded and only use one core. While R's internal BLAS is efficient for most computations, it's possible to recompile R to link to a different, multi-threaded BLAS library to improve performance on eligible calculations. Compiling and linking R yourself can be involved, but for many, the significantly improved calculation speed justifies the effort. Oracle R Distribution notably simplifies the process of using external math libraries by enabling R to auto-load MKL or ACML. For R commands that don't link to BLAS code, taking advantage of database parallelism using embedded R execution in Oracle R Enterprise is the route to improved performance. For more information about rebuilding R with different BLAS libraries, see the linear algebra section in the R Installation and Administration manual. As always, the Oracle R Distribution is available as a free download to anyone. Questions and comments are welcome on the Oracle R Forum.

    Read the article

  • ADF Essentials - Available for free and certified on GlassFish!

    - by delabassee
    If you are an Oracle customer, you are probably familiar with Oracle ADF (Application Development Framework). If you are not, ADF is, in a nutshell, a Java EE-based framework that simplifies the development of enterprise applications. It is the development framework that was used, among other things, to build Oracle Fusion Applications. Oracle has just released ADF Essentials, a free-to-develop-and-deploy version of Oracle ADF's core technologies. And as good news never comes alone, GlassFish 3.1.2 is now a certified container for ADF Essentials! ADF Essentials leverages core ADF features and includes:
    Oracle ADF Faces - a set of more than 150 JSF 2.0 rich components that simplify the creation of rich Web user interfaces (charting, data visualization, advanced tables, drag and drop, touch gesture support, extensive windowing capabilities, etc.)
    Oracle ADF Controller - an extension of the JSF controller that helps build reusable process flows and provides the ability to create dynamic regions within Web pages.
    Oracle ADF Binding - an XML-based, metadata abstraction layer to connect user interfaces to business services.
    Oracle ADF Business Components - a declaratively-configured layer that simplifies developing business services against relational databases by providing reusable components that implement common design patterns.
    ADF is a highly declarative framework and has always had very good tooling support. Visual development for Oracle ADF Essentials is provided in Oracle JDeveloper 11.1.2.3. Eclipse support is planned for a later OEPE (Oracle Enterprise Pack for Eclipse) release. Here are some relevant links to quickly learn how to use ADF Essentials on GlassFish:
    Video: Oracle ADF Essentials Overview and Demo
    Deploying Oracle ADF Essentials Applications to GlassFish
    OTN: Oracle ADF Essentials Resources

    Read the article

  • What stops HTML5 and JS apps from performing as well as native apps?

    - by Amogh Talpallikar
    From what I understand, HTML is a mark-up language, and so are XAML, XIB, whatever Android uses, and the equivalents in other native UI development frameworks. JavaScript is a programming language used along with it to handle client-side scripting, which includes things like event handling, client-side validation and anything else that C#, Java, Objective-C or C++ do in those frameworks. There are MVC/MVVM patterns available in frameworks like Sencha's, Angular, etc. We have localStorage in the form of both SQLite and key-value stores, as other frameworks have, and there are API specifications for almost everything that is missing. Whenever a native UI framework has to render UI, it has to parse similar markup and render the UI. Question break-down: What stops us from doing the same in HTML and JS themselves? Instead of having a web control or browser as a layer in between, why can't HTML (along with CSS) and JS be made to perform the same way? Even if there is a layer, so are the .NET runtime and the JVM in other cases where C++ and C are not being used. So let's take the case of Android: like Dalvik, why can't Chromium be another option (along with Dalvik and the NDK), where HTML does what Android markup does and JavaScript is used to do what Java does? So the question is: even if current implementations aren't as good, is it theoretically possible to get HTML5-based applications to work like other native apps, especially on mobile?

    Read the article

  • Need advice on framework design: how to make extending easy

    - by beginner_
    I'm creating a framework/library for a rather specific use case (data type). It uses various Spring components, including Spring Data. The library has a set of entity classes properly set up, and corresponding service and DAO layers. The main work, or main benefit, of the framework lies in the DAO and service layers. Developers using the framework should be able to extend my entity classes to add additional fields they require. Therefore I made the DAO and service layers generic so they can be used by such extended entity classes. I now face an issue in the IO part of the framework. It must be able to import the corresponding "special data type" into the database. In this part I need to create a new entity instance and hence need the actual class used. My current solution is to configure a bean of the actual class used in Spring. The problem with this is that an application using the framework could only use one implementation of the entity (the original one from me, or exactly one subclass, but not two different classes of the same hierarchy). I'm looking for suggestions / designs for solving this issue. Any ideas?
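
    One direction worth considering is to make the importer depend on a per-type factory rather than on a single configured bean, so an application can register one factory per concrete subclass. The following is only a rough Java sketch of that shape (BaseEntity, EntityService and Importer are invented names, not the framework's actual API):

        import java.util.function.Supplier;

        // Minimal stand-ins for the framework's types (names are made up).
        abstract class BaseEntity {
            abstract void populateFrom(String rawRecord);
        }

        interface EntityService<T extends BaseEntity> {
            void save(T entity);
        }

        // Framework-side importer: generic over the entity type, and asks an
        // application-supplied factory for new instances instead of a fixed class.
        class Importer<T extends BaseEntity> {
            private final Supplier<T> factory;
            private final EntityService<T> service;

            Importer(Supplier<T> factory, EntityService<T> service) {
                this.factory = factory;
                this.service = service;
            }

            void importRecord(String rawRecord) {
                T entity = factory.get();       // concrete class chosen by the application
                entity.populateFrom(rawRecord); // framework-defined mapping hook
                service.save(entity);
            }
        }

    A Spring application could then declare one Importer bean per concrete subclass (passing a constructor reference such as CustomEntityA::new), so two extensions of the same hierarchy no longer compete for a single configured entity bean.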

    Read the article

  • Navigation in Win8 Metro Style applications

    - by Dennis Vroegop
    In Windows 8, Touch is, as they say, a first class citizen. Now, to be honest: they also said that in Windows 7. However in Win8 this is actually true. Applications are meant to be used by touch. Yes, you can still use mouse, keyboard and pen and your apps should take that into account but touch is where you should focus on initially. Will all users have touch enabled devices? No, not in the first place. I don’t think touchscreens will be on every device sold next year. But in 5 years? Who knows? Don’t forget: if your app is successful it will be around for a long time and by that time touchscreens will be everywhere. Another reason to embrace touch is that it’s easier to develop a touch-oriented app and then to make sure that keyboard, nouse and pen work as doing it the other way around. Porting a mouse-based application to a touch based application almost never works. The reverse gives you much more chances for success. That being said, there are some things that you need to think about. Most people have more than one finger, while most users only use one mouse at the time. Still, most touch-developers translate their mouse-knowledge to the touch and think they did a good job. Martin Tirion from Microsoft said that since Touch is a new language people face the same challenges they do when learning a new real spoken language. The first thing people try when learning a new language is simply replace the words in their native language to the newly learned words. At first they don’t care about grammar. To a native speaker of that other language this sounds all wrong but they still will be able to understand what the intention was. If you don’t believe me: try Google translate to translate something for you from your language to another and then back and see what happens. The same thing happens with Touch. Most developers translate a mouse-click into a tap-event and think they’re done. Well matey, you’re not done. Not by far. There are things you can do with a mouse that you cannot do with touch. Think hover. A mouse has the ability to ‘slide’ over UI elements. Touch doesn’t (I know: with Pen you can do this but I’m talking about actual fingers here). A touch is either there or it isn’t. And right-click? Forget about it. A click is a click.  Yes, you have more than one finger but the machine doesn’t know which finger you use… The other way around is also true. Like I said: most users only have one mouse but they are likely to have more than one finger. So how do we take that into account? Thinking about this is really worth the time: you might come up with some surprisingly good ideas! Still: don’t forget that not every user has touch-enabled hardware so make sure your app is useable for both groups. Keep this in mind: we’re going to need it later on! Now. Apps should be easy to use. You don’t want your user to read through pages and pages of documentation before they can use the app. Imagine that spotter next to an airfield suddenly seeing a prototype of a Concorde 2 landing on the nearby runway. He probably wants to enter that information in our app NOW and not after he’s taken a 3 day course. Even if he still has to download the app, install it for the first time and then run it he should be on his way immediately. At least, fast enough to note down the details of that unique, rare and possibly exciting sighting he just did. So.. How do we do this? Well, I am not talking about games here. Games are in a league of their own. They fall outside the scope of the apps I am describing. 
But all the others can roughly be characterized as being one of two flavors: the navigation is either flat or hierarchical. That’s it. And if it’s hierarchical it’s no more than three levels deep. Not more. Your users will get lost otherwise and we don’t want that. Flat is simple. Just imagine we have one screen that is as high as our physical screen is and as wide as you need it to be. Don’t worry if it doesn’t fit on the screen: people can scroll to the right and left. Don’t combine up/down and left/right scrolling: it’s confusing. Next to that, since most users will hold their device in landscape mode it’s very natural to scroll horizontal. So let’s use that when we have a flat model. The same applies to the hierarchical model. Try to have at most three levels. If you need more space, find a way to group the items in such a way that you can fit it in three, very wide lanes. At the highest level we have the so called hub level. This is the entry point of the app and as such it should give the user an immediate feeling of what the app is all about. If your app has categories if items then you might show these categories here. And while you’re at it: also show 2 or 3 of the items itself here to give the user a taste of what lies beneath. If the user selects a category you go to the section part. Here you show several sections (again, go as wide as you need) with again some detail examples. After that: the details layer shows each item. By giving some samples of the underlaying layer you achieve several things: you make the layer attractive by showing several different things, you show some highlights so the user sees actual content and you provide a shortcut to the layers underneath. The image below is borrowed from the http://design.windows.com website which has tons and tons of examples: For our app we’ll use this layout. So what will we show? Well, let’s see what sorts of features our app has to offer. I’ll repeat them here: Note planes Add pictures of that plane Notify friends of new spots Share new spots on social media Write down arrival times Write down departure times Write down the runway they take I am sure you can think of some more items but for now we'll use these. In the hub we’ll show something that represents “Spots”, “Friends”, “Social”. Apparently we have an inner list of spotter-friends that are in the app, while we also have to whole world in social. In the layer below we show something else, depending on what the user choose. When they choose “Spots” we’ll display the last spots, last spots by our friends (so we can actually jump from this category to the one next to it) and so on. When they choose a “spot” (or press the + icon in the App bar, which I’ll talk about next time) they go to the lowest and final level that shows details about that spot, including a picture, date and time and the notes belonging to that entry. You’d be amazed at how easy it is to organize your app this way. If you don’t have enough room in these three layers you probably could easily get away with grouping items. Take a look at our hub: we have three completely different things in one place. If you still can’t fit it all in in a logical and consistent way, chances are you are trying to do too much in this app. Go back to your mission statement, determine if it is specific enough and if your feature list helps that statement or makes it unclear. Go ahead. Give it a go! Next time we’ll talk about the look and feel, the charms and the app-bar….

    Read the article

  • Is Scala ready for prime time?

    - by jayraynet
    Now that I've done a few trivial things with Scala (which I love for "hello world" and contrived applications!) I am left wondering: partly about the maturity of the tools to support development, and partly about general applicability. Are the toolsets ready? Is Scala appropriate for use on enterprise / business applications? Would "you" use it on a non-trivial project? Some of my (possibly unfounded) concerns would be:
    are the IDEs and toolsets as rich as what we have to develop .NET and Java applications (Eclipse for Scala seems limited compared to Eclipse for Java)?
    are the build / CI / testing toolsets able to effectively deal with Scala?
    how maintainable is the concise code that can be (is encouraged to be?) written in the language?
    is it possible to find developers with Scala experience?
    is there enough critical mass to get help through online references and books that are more than an "intro" to the language?
    So bottom line - is the ecosystem mature enough to use now, or are we better off waiting to see how it evolves? EDIT: let's say "non-trivial" is a multi-year, multi-release project with 10-20 developers.

    Read the article

  • Centralizing a resource file among multiple projects in one solution (C#/WPF)

    - by MarkPearl
    One of the challenges you face when doing multi-language support in WPF is having several projects in one solution (i.e. a business layer & UI layer) that all need that support. Typically each project would have its own resource file – meaning if you have 3 projects in a solution you will have 3 resource files. For me this isn't an ideal solution, as you normally want to send the resource files to a translator, and the more resource files you have, the more fragmented the dictionary will be and the more complicated it will be for the translator. This can easily be overcome by creating a single project that just holds your translation resources and then exposing it to the other projects as a reference, as explained in the following steps.
    Step 1 - Add a class library to your solution that will contain just the resource files. Your solution will now have an additional project as illustrated below.
    Step 2 - Reference this project from the other projects.
    Step 3 - Move all the resources from the other resource files to the translation project's resource file.
    Step 4 - Set the translation project's resource file access modifier to public.
    Step 5 - Change all other projects to use the translation resource file instead of their local resource file. To do this in XAML you would need to expose the project as a namespace at the top of the XAML file… note that the example below is for a project called MaxCutLanguages – you need to put the correct project name in its place.

        xmlns:MaxCutLanguages="clr-namespace:MaxCutLanguages;assembly=MaxCutLanguages"

    And then in the actual XAML you need to replace any text with a reference to the resource file.

        <TextBlock Text="{x:Static MaxCutLanguages:Properties.Resources.HelloWorld}"/>

    End Result: You can now delete all the resource files in the other projects as you now have one centralized resource file.

    Read the article

  • How is the impact of a requirement change on existing code determined?

    - by MainMa
    Hi, how do companies working on large projects evaluate the impact of a single modification on existing code? Since my question is probably not very clear, here's an example: let's take a sample business application which deals with tasks. In the database, each task has a state, 0 being "Pending", ... 5 - "Finished". A new requirement adds a new state, between the 2nd and 3rd ones. It means that:
    A constraint on the values 1 - 5 in the database must be changed,
    The business layer and code contracts must be changed to add the new state,
    The data access layer must be changed to take into account that, for example, the state StateReady is now 6 instead of 5, etc.
    The application must implement the new state visually, add new controls for it, new localized strings for tool-tips, etc.
    When an application was written recently by one developer, it's more or less easy to predict every change to make. On the other hand, when an application has been written over years by many people, no single person can anticipate every change immediately, without any investigation. So since this situation (such changes in requirements) is very frequent, I imagine there are already some clever techniques and ways to predict the impact. Are there any? Do you know any books which deal with this subject? Note: my question is not related to the "How do you deal with changing requirements?" question. In fact, I'm not interested in evaluating the cost of a change, but rather the way to predict the parts of an application which will be affected by the change. What those changes will be and how difficult they are really doesn't matter for my question.

    Read the article

  • Searching for an entity awareness algorithm and data structure for 3D space

    - by Khanser
    I'm trying to build a huge AI system just for fun and I've come to this problem: how can I let the AI entities know about each other without making the CPU perform redundant and costly work? Every entity has a spatial awareness zone and it has to know what's inside it when it has to decide what to do. First thought: for every entity, test whether the other entities are inside the first one's reach. OK, so that was the first try and, yep, it is redundant and costly. We are working with real-time AI over 10,000+ entities, so this is not a solution. Second try: calculate a grid over the awareness zone of every entity and test whether there are entities in these zones (we are working with 3D entities with float x, y, z location coordinates), testing every point in the grid against the entities indexed by coordinate. Well, I don't like this because it is also costly, though not as costly as the first one. Third: create multi-linked lists over the entities' indexed x and y positions, so that when we search for an interval between the x,y and z,w positions (this interval defines the square over the spatial awareness zone) over the multi-linked list, we won't have 'voids'. This has the problem of finding the nearest proximity value if there isn't one at the position where we start the search. I'm not convinced by any of these ideas, so I'm searching for some enlightenment. Do you people have any better ideas?
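
    For context, the usual answer to this kind of query load is a uniform grid (spatial hash) whose cell size roughly matches the awareness radius, so each entity only inspects its own cell and the neighbouring ones. This is only an illustrative Java sketch with made-up names (SpatialHash, Entity), not the poster's code:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Uniform grid over 3D space: each entity is bucketed by its cell, and a
        // range query only visits the 27 cells around the query point.
        class SpatialHash {
            private final float cellSize;                       // ~ awareness radius
            private final Map<Long, List<Entity>> cells = new HashMap<>();

            SpatialHash(float cellSize) { this.cellSize = cellSize; }

            private long key(float x, float y, float z) {
                long cx = (long) Math.floor(x / cellSize);
                long cy = (long) Math.floor(y / cellSize);
                long cz = (long) Math.floor(z / cellSize);
                // pack three 21-bit cell indices into one long (fine for a sketch)
                return ((cx & 0x1FFFFF) << 42) | ((cy & 0x1FFFFF) << 21) | (cz & 0x1FFFFF);
            }

            void insert(Entity e) {
                cells.computeIfAbsent(key(e.x, e.y, e.z), k -> new ArrayList<>()).add(e);
            }

            // Rebuilt (or updated incrementally) each tick; queries are then cheap.
            List<Entity> nearby(float x, float y, float z) {
                List<Entity> result = new ArrayList<>();
                for (int dx = -1; dx <= 1; dx++)
                    for (int dy = -1; dy <= 1; dy++)
                        for (int dz = -1; dz <= 1; dz++) {
                            List<Entity> bucket = cells.get(
                                key(x + dx * cellSize, y + dy * cellSize, z + dz * cellSize));
                            if (bucket != null) result.addAll(bucket);
                        }
                return result;
            }
        }

        class Entity { float x, y, z; }

    Each entity then calls nearby(...) once per decision tick instead of scanning all 10,000+ entities; octrees or kd-trees are the usual alternatives when entity density varies a lot.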

    Read the article

  • How much overhead is there in persistent connections?

    - by nynex
    Ok so I'm musing over a little side project I want to start. Essentially it's a multi-session, web-based FTP client. Multi-session in that you can log into several FTP servers at the same time and perform operations like moving a file from one FTP server to another. I'm doing this mainly to brush up on the newer web-dev technologies, particularly WebSockets. I'm using node.js + socket.io to keep a persistent bi-directional connection between the web browser and the web server. The web server will also have persistent connections to each FTP server the user has logged into. So if there are 100 concurrent users each logged into 5 FTP accounts, the web server will have 100 WebSocket connections + 500 FTP connections. Is servicing 600 connections a lot? I know it depends on the hardware resources of the server, but is something like this doable on a budget? Are there more efficient means of doing something like this? I know it's unlikely that this project will really get popular, but I want it to scale well regardless. Thanks for any help, I've still got a lot to learn.

    Read the article

  • Layers - Logical separation vs physical

    - by P.Brian.Mackey
    Some programmers recommend logical separation of layers over physical. For example, given a DL, this means we create a DL namespace, not a DL assembly. Benefits include:
    faster compilation time
    simpler deployment
    faster startup time for your program
    fewer assemblies to reference
    I'm on a small team of 5 devs. We have over 50 assemblies to maintain. IMO this ratio is far from ideal. I prefer an extreme programming approach: if 100 assemblies are easier to maintain than 10,000... then 1 assembly must be easier than 100. Given technical limits, we should strive for < 5 assemblies. New assemblies are created out of technical need, not layer requirements. Developers are worried for a few reasons. A. People like to work in their own environment so they don't step on each other's toes. B. Microsoft tends to create new assemblies; e.g., ASP.NET has its own DLL, as does WinForms, etc. C. Devs view this drive for a common assembly as a threat. Some team members have a tendency to change the common layer without regard for how it will impact dependencies. My personal view: I view A as silos, a.k.a. cowboy programming, and suggest we implement branching to create isolation. On C: first, that is a human problem and we shouldn't create technical workarounds for human behavior. Second, my goal is not to put everything in common. Rather, I want partitions to be made in namespaces, not assemblies. Having a shared assembly doesn't make everything common. I want the community to chime in and tell me if I've gone off my rocker. Is the drive for a single assembly, or my viewpoint, illogical or otherwise a bad idea?

    Read the article

  • Encapsulate standard C functions?

    - by Jack Stout
    While studying the C programming language and learning safe practices, I'm inclined to write a layer of functionality over several parts of the standard library. This would serve two purposes: I could use standard parts of the language in ways that feel more familiar or rational to me, and I could easily replace that functionality with my own, if I needed to. I could benefit from this, but should I do it? As an example, we can consider memory management. If I've written malloc() into the constructors of each of my objects, then decide that I need to handle memory allocation on my own, I have to edit the constructor associated with every object. By referencing my own function instead, I can change the contents of that function without writing new constructors. It seems obvious that I should do this, but I'm used to Python. I'm extremely comfortable in that environment and have no problem linking to any part of the standard library from any part of my program, because I know I will almost certainly leave that relationship untouched for the life of the project. The situation I'm running into with C feels like I'm trying to hide the language from myself. Will writing a layer of functionality over the C standard library help me in learning the language and developing a codebase, or will it stifle my understanding going forward?

    Read the article

  • Build 2013 – Keynote Thoughts

    - by D'Arcy Lussier
    Originally posted on: http://geekswithblogs.net/dlussier/archive/2013/06/26/153243.aspx
    Some thoughts on the Build 2013 keynote.
    They Listened to Feedback while Keeping to their Plans
    I am one of the people in the “bring back the start menu” camp. I want my start menu. I *like* my start menu. Microsoft heard that and put it back, fantastic. But they implemented it in a way that still pushes the Windows 8 UI – and I’m actually pretty happy with it. When you hit the Start menu, you get the live tiles displayed overlaying the desktop. But you can also swipe from the bottom to get the “all-applications” view. This, in essence, is really what those who like the Start Menu want. I believe it was mentioned that you can configure the all-applications view to be the default.
    They’re Committed to Improving Windows 8
    The commitment to rapid deployments Ballmer talked about is crucial to Windows 8’s success. They need to keep it evolving quickly to maintain the interest of users and developers. I think the little improvements they showed are excellent (hands-free mode, multi-window docking, better multi-monitor support, new developer controls, etc.).
    Hardware Vendors are Committed to Windows 8
    They showed off a number of new hardware products (Windows 8 and Windows Phone). The Surface’s introduction to the market has done nothing to dissuade their hardware partners.
    Bing as a Platform is Huge for Developers!!!
    This was the biggest take-away from the keynote! What the team is doing with Bing not as a search engine but as a developer API is very impressive! I’m going to be diving into this over the rest of Build, so watch for more blog posts coming on it. Azure, Office 365, and other topics will be covered at tomorrow’s keynote. So far, a great kick off to Build. Now on to sessions! D

    Read the article

  • SPARC SuperCluster Papers

    - by user12616590
    Oracle has been publishing white papers that describe uses and characteristics of the SPARC SuperCluster product. Here are just a few:
    A Technical Overview of the Oracle SPARC SuperCluster T4-4 - SPARC SuperCluster T4-4 is a high performance, multi-purpose engineered system that has been designed, tested and integrated to run a wide array of enterprise applications. It is well suited for multi-tier enterprise applications with Web, database and application components. This 20-page paper discusses the components and technical characteristics of this product.
    SPARC SuperCluster T4-4 Platform Security Principles and Capabilities - The security capabilities designed into the SPARC SuperCluster, and architectural, deployment, and operational best practices for taking advantage of them.
    Consolidating Oracle E-Business Suite on Oracle’s SPARC SuperCluster - This Oracle Optimized Solution describes the implementation and use of SPARC SuperCluster as a consolidation platform for E-Business Suite in 30 pages.
    Oracle Optimized Solution for Oracle PeopleSoft Human Capital Management on SPARC SuperCluster - The Oracle Optimized Solution for PeopleSoft Human Capital Management on SPARC SuperCluster is the industry's only proven, tested, applications-to-disk solution that maintains excellence managing absences, optimizing collaborative activities, streamlining knowledge and honing processes; 31 pages.
    I hope you find some of those papers useful.

    Read the article

  • Avoiding bloated Domain Objects

    - by djcredo
    We're trying to move data from our bloated Service layer into our Domain layer using a DDD approach. We currently have a lot of business logic in our services, which is spread out all over the place and doesn't benefit from inheritance. We have a central Domain class which is the focus of most of our work - a Trade. The Trade object will know how to price itself, how to estimate risk, validate itself, etc. We can then replace conditionals with polymorphism. E.g.: SimpleTrade will price itself one way, but ComplexTrade will price itself another. However, we are worried that this will bloat the Trade class(es). It really should be in charge of its own processing, but the class size is going to increase exponentially as more features are added. So we have choices:
    Put the processing logic in the Trade class. Processing logic is now polymorphic based on the type of the trade, but the Trade class now has multiple responsibilities (pricing, risk, etc.) and is large.
    Put the processing logic into another class such as TradePricingService. No longer polymorphic with the Trade inheritance tree, but classes are smaller and easier to test.
    What would be the suggested approach?
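
    A middle ground that is often suggested for this trade-off is to keep the polymorphic entry point on Trade but delegate each responsibility to a small collaborator, so the class stays thin without giving up polymorphism. A hedged Java sketch (PricingStrategy and the concrete classes below are illustrative names, not the poster's code):

        import java.math.BigDecimal;

        // Each responsibility lives in its own small, testable strategy.
        interface PricingStrategy {
            BigDecimal price(Trade trade);
        }

        abstract class Trade {
            private final PricingStrategy pricing;

            protected Trade(PricingStrategy pricing) {
                this.pricing = pricing;
            }

            // The Trade still owns the operation, so callers stay polymorphic...
            BigDecimal price() {
                return pricing.price(this);
            }
        }

        // ...but each subclass only chooses which strategies it is composed of.
        class SimpleTrade extends Trade {
            SimpleTrade() { super(new SimplePricing()); }
        }

        class ComplexTrade extends Trade {
            ComplexTrade() { super(new ComplexPricing()); }
        }

        class SimplePricing implements PricingStrategy {
            public BigDecimal price(Trade trade) {
                return BigDecimal.ONE; // placeholder pricing rule
            }
        }

        class ComplexPricing implements PricingStrategy {
            public BigDecimal price(Trade trade) {
                return BigDecimal.TEN; // placeholder pricing rule
            }
        }

    Risk estimation and validation can follow the same shape, so Trade remains a thin, polymorphic facade over focused, individually testable collaborators.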

    Read the article

  • Decouple software components via name convention

    - by csteinmueller
    I'm currently evaluating alternatives for refactoring a driver management component. In my multi-tier architecture I have:
    Base class: DAL.Device // my entity
    Interfaces: BL.IDriver // handles the data processing between application and device
    BL.IDriverCreator // creates an IDriver from a Device
    BL.IDriverFactory // handles the driver creation requests
    Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed via a type check within the business layer / DriverFactory. That means every new driver needs a) a code change within the DriverFactory and b) a reference to the new IDriver implementation / assembly. From a customer's point of view that means every new driver, used or not, needs a complex revalidation of their hardware environment, because it's a critical process. My first inspiration was to use a Caliburn.Micro-like name convention (see Caliburn.Micro: Xaml Made Easy): BL.RestDriver, BL.RestDriverCreator, DAL.RestDevice. After receiving the RestDevice within the IDriverFactory I can load all driver DLLs via reflection and do name splitting/comparing (extracting the xx from xxDriverCreator and xxDevice). Another idea would be a custom attribute (which also leads to comparing strings). My question: is that a good approach across layer boundaries? If not, what would be a good approach?
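
    For illustration, the convention-based lookup boils down to deriving the creator's type name from the device's type name and resolving it reflectively at runtime. The poster is on .NET, but the idea looks roughly like this Java sketch (the "bl" package name, the DriverCreator interface, and the naming rule are assumptions, not the actual project):

        // Resolve e.g. a RestDevice to "bl.RestDriverCreator" purely by naming convention.
        class ConventionDriverFactory {

            DriverCreator creatorFor(Device device) throws ReflectiveOperationException {
                String simpleName = device.getClass().getSimpleName(); // e.g. "RestDevice"
                String prefix = simpleName.replaceAll("Device$", "");  // e.g. "Rest"
                Class<?> creatorClass = Class.forName("bl." + prefix + "DriverCreator");
                return (DriverCreator) creatorClass.getDeclaredConstructor().newInstance();
            }
        }

        interface DriverCreator { Driver create(Device device); }
        interface Driver { }
        class Device { }

    The custom-attribute (annotation) variant swaps the string manipulation for reading declared metadata, but either way the factory no longer has to be edited for every new driver; new assemblies only need to follow the convention to be discovered.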

    Read the article

  • How can we add an arch-specific Conflicts tag when building a .deb package?

    - by Sphopale
    We are trying to build a multi-arch supported i386 .deb package. There are two .deb packages built on i386, X1 & X2 (X2 is a subset of X1 binaries). X1 and X2 conflict with each other when installing; only one of the two .deb packages can be installed at any instance. We similarly have binaries on the xa64 arch. Again on xa64, there are two .deb packages, X1 & X2 (X2 is a subset of X1 binaries). X1 and X2 conflict with each other when installing; only one of the two .deb packages can be installed at any instance. In the case of the multi-arch i386 .deb package, the i386 .deb packages (X1 & X2) can be installed on xa64 alongside the 64-bit ones (X1 & X2). However, I see that when installing, X1:i386 & X1:amd64 can co-exist, but it throws a conflict error when trying to install X1:i386 & X2:amd64. In short, can we mark a package to conflict based on arch (e.g. Conflicts: X2:i386)?
    The X1:i386 package should only conflict with X2:i386 and allow other packages to co-exist.
    The X1:amd64 package should only conflict with X2:amd64 and allow other packages to co-exist.
    X1:i386 can co-exist with X1:amd64 or X2:amd64.
    X2:i386 can co-exist with X1:amd64 or X2:amd64.
    Thanks for your reply.

    Read the article

  • Am I just not understanding TDD unit testing (Asp.Net MVC project)?

    - by KallDrexx
    I am trying to figure out how to correctly and efficiently unit test my Asp.net MVC project. When I started on this project I bought the Pro ASP.Net MVC, and with that book I learned about TDD and unit testing. After seeing the examples, and the fact that I work as a software engineer in QA in my current company, I was amazed at how awesome TDD seemed to be. So I started working on my project and went gun-ho writing unit tests for my database layer, business layer, and controllers. Everything got a unit test prior to implementation. At first I thought it was awesome, but then things started to go downhill. Here are the issues I started encountering: I ended up writing application code in order to make it possible for unit tests to be performed. I don't mean this in a good way as in my code was broken and I had to fix it so the unit test pass. I mean that abstracting out the database to a mock database is impossible due to the use of linq for data retrieval (using the generic repository pattern). The reason is that with linq-sql or linq-entities you can do joins just by doing: var objs = select p from _container.Projects select p.Objects; However, if you mock the database layer out, in order to have that linq pass the unit test you must change the linq to be var objs = select p from _container.Projects join o in _container.Objects on o.ProjectId equals p.Id select o; Not only does this mean you are changing your application logic just so you can unit test it, but you are making your code less efficient for the sole purpose of testability, and getting rid of a lot of advantages using an ORM has in the first place. Furthermore, since a lot of the IDs for my models are database generated, I proved to have to write additional code to handle the non-database tests since IDs were never generated and I had to still handle those cases for the unit tests to pass, yet they would never occur in real scenarios. Thus I ended up throwing out my database unit testing. Writing unit tests for controllers was easy as long as I was returning views. However, the major part of my application (and the one that would benefit most from unit testing) is a complicated ajax web application. For various reasons I decided to change the app from returning views to returning JSON with the data I needed. After this occurred my unit tests became extremely painful to write, as I have not found any good way to write unit tests for non-trivial json. After pounding my head and wasting a ton of time trying to find a good way to unit test the JSON, I gave up and deleted all of my controller unit tests (all controller actions are focused on this part of the app so far). So finally I was left with testing the Service layer (BLL). Right now I am using EF4, however I had this issue with linq-sql as well. I chose to do the EF4 model-first approach because to me, it makes sense to do it that way (define my business objects and let the framework figure out how to translate it into the sql backend). This was fine at the beginning but now it is becoming cumbersome due to relationships. For example say I have Project, User, and Object entities. One Object must be associated to a project, and a project must be associated to a user. This is not only a database specific rule, these are my business rules as well. However, say I want to do a unit test that I am able to save an object (for a simple example). 
I now have to do the following code just to make sure the save worked: User usr = new User { Name = "Me" }; _userService.SaveUser(usr); Project prj = new Project { Name = "Test Project", Owner = usr }; _projectService.SaveProject(prj); Object obj = new Object { Name = "Test Object" }; _objectService.SaveObject(obj); // Perform verifications There are many issues with having to do all this just to perform one unit test. There are several issues with this. For starters, if I add a new dependency, such as all projects must belong to a category, I must go into EVERY single unit test that references a project, add code to save the category then add code to add the category to the project. This can be a HUGE effort down the road for a very simple business logic change, and yet almost none of the unit tests I will be modifying for this requirement are actually meant to test that feature/requirement. If I then add verifications to my SaveProject method, so that projects cannot be saved unless they have a name with at least 5 characters, I then have to go through every Object and Project unit test to make sure that the new requirement doesn't make any unrelated unit tests fail. If there is an issue in the UserService.SaveUser() method it will cause all project, and object unit tests to fail and it the cause won't be immediately noticeable without having to dig through the exceptions. Thus I have removed all service layer unit tests from my project. I could go on and on, but so far I have not seen any way for unit testing to actually help me and not get in my way. I can see specific cases where I can, and probably will, implement unit tests, such as making sure my data verification methods work correctly, but those cases are few and far between. Some of my issues can probably be mitigated but not without adding extra layers to my application, and thus making more points of failure just so I can unit test. Thus I have no unit tests left in my code. Luckily I heavily use source control so I can get them back if I need but I just don't see the point. Everywhere on the internet I see people talking about how great TDD unit tests are, and I'm not just talking about the fanatical people. The few people who dismiss TDD/Unit tests give bad arguments claiming they are more efficient debugging by hand through the IDE, or that their coding skills are amazing that they don't need it. I recognize that both of those arguments are utter bullocks, especially for a project that needs to be maintainable by multiple developers, but any valid rebuttals to TDD seem to be few and far between. So the point of this post is to ask, am I just not understanding how to use TDD and automatic unit tests?
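
    One pattern that tends to reduce exactly this kind of churn is a test-data builder (or Object Mother) that owns the construction of a valid object graph in one place. The sketch below is a hedged Java illustration of the idea (the entities and defaults are invented; the poster's project is C#/EF, where the same pattern applies):

        // Centralises "a valid Project needs an owner (and, later, a category)" in one
        // place, so a new required relationship is added here instead of in every test.
        class ProjectBuilder {
            private String name = "Test Project";
            private User owner = new User("Me");   // sensible default

            ProjectBuilder named(String name) { this.name = name; return this; }
            ProjectBuilder ownedBy(User owner) { this.owner = owner; return this; }

            Project build() {
                return new Project(name, owner);
            }
        }

        class User {
            final String name;
            User(String name) { this.name = name; }
        }

        class Project {
            final String name;
            final User owner;
            Project(String name, User owner) { this.name = name; this.owner = owner; }
        }

        // In a test: Project prj = new ProjectBuilder().named("Fixture").build();

    When a new rule such as "every project needs a category" arrives, only the builder changes; the hundreds of unrelated tests that just need "some valid project" keep compiling and passing.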

    Read the article

  • Problems with real-valued input deep belief networks (of RBMs)

    - by Junier
    I am trying to recreate the results reported in Reducing the dimensionality of data with neural networks of autoencoding the olivetti face dataset with an adapted version of the MNIST digits matlab code, but am having some difficulty. It seems that no matter how much tweaking I do on the number of epochs, rates, or momentum, the stacked RBMs are entering the fine-tuning stage with a large amount of error and consequently fail to improve much at the fine-tuning stage. I am also experiencing a similar problem on another real-valued dataset. For the first layer I am using an RBM with a smaller learning rate (as described in the paper) and with negdata = poshidstates*vishid' + repmat(visbiases,numcases,1); I'm fairly confident I am following the instructions found in the supporting material but I cannot achieve the correct errors. Is there something I am missing? See the code I'm using for real-valued visible unit RBMs below, and for the whole deep training. The rest of the code can be found here.

    rbmvislinear.m:

        epsilonw = 0.001;  % Learning rate for weights
        epsilonvb = 0.001; % Learning rate for biases of visible units
        epsilonhb = 0.001; % Learning rate for biases of hidden units
        weightcost = 0.0002;
        initialmomentum = 0.5;
        finalmomentum = 0.9;
        [numcases numdims numbatches]=size(batchdata);
        if restart ==1,
          restart=0;
          epoch=1;
          % Initializing symmetric weights and biases.
          vishid = 0.1*randn(numdims, numhid);
          hidbiases = zeros(1,numhid);
          visbiases = zeros(1,numdims);
          poshidprobs = zeros(numcases,numhid);
          neghidprobs = zeros(numcases,numhid);
          posprods = zeros(numdims,numhid);
          negprods = zeros(numdims,numhid);
          vishidinc = zeros(numdims,numhid);
          hidbiasinc = zeros(1,numhid);
          visbiasinc = zeros(1,numdims);
          sigmainc = zeros(1,numhid);
          batchposhidprobs=zeros(numcases,numhid,numbatches);
        end
        for epoch = epoch:maxepoch,
          fprintf(1,'epoch %d\r',epoch);
          errsum=0;
          for batch = 1:numbatches,
            if (mod(batch,100)==0)
              fprintf(1,' %d ',batch);
            end
            %%%%%%%%% START POSITIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
            data = batchdata(:,:,batch);
            poshidprobs = 1./(1 + exp(-data*vishid - repmat(hidbiases,numcases,1)));
            batchposhidprobs(:,:,batch)=poshidprobs;
            posprods = data' * poshidprobs;
            poshidact = sum(poshidprobs);
            posvisact = sum(data);
            %%%%%%%%% END OF POSITIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
            poshidstates = poshidprobs > rand(numcases,numhid);
            %%%%%%%%% START NEGATIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
            negdata = poshidstates*vishid' + repmat(visbiases,numcases,1); % + randn(numcases,numdims) if not using mean
            neghidprobs = 1./(1 + exp(-negdata*vishid - repmat(hidbiases,numcases,1)));
            negprods = negdata'*neghidprobs;
            neghidact = sum(neghidprobs);
            negvisact = sum(negdata);
            %%%%%%%%% END OF NEGATIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
            err= sum(sum( (data-negdata).^2 ));
            errsum = err + errsum;
            if epoch>5,
              momentum=finalmomentum;
            else
              momentum=initialmomentum;
            end;
            %%%%%%%%% UPDATE WEIGHTS AND BIASES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
            vishidinc = momentum*vishidinc + ...
                epsilonw*( (posprods-negprods)/numcases - weightcost*vishid);
            visbiasinc = momentum*visbiasinc + (epsilonvb/numcases)*(posvisact-negvisact);
            hidbiasinc = momentum*hidbiasinc + (epsilonhb/numcases)*(poshidact-neghidact);
            vishid = vishid + vishidinc;
            visbiases = visbiases + visbiasinc;
            hidbiases = hidbiases + hidbiasinc;
            %%%%%%%%%%%%%%%% END OF UPDATES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
          end
          fprintf(1, '\nepoch %4i error %f \n', epoch, errsum);
        end

    dofacedeepauto.m:

        clear all
        close all
        maxepoch=200; %In the Science paper we use maxepoch=50, but it works just fine.
        numhid=2000; numpen=1000; numpen2=500; numopen=30;
        fprintf(1,'Pretraining a deep autoencoder. \n');
        fprintf(1,'The Science paper used 50 epochs. This uses %3i \n', maxepoch);
        load fdata %makeFaceData;
        [numcases numdims numbatches]=size(batchdata);
        fprintf(1,'Pretraining Layer 1 with RBM: %d-%d \n',numdims,numhid);
        restart=1;
        rbmvislinear;
        hidrecbiases=hidbiases;
        save mnistvh vishid hidrecbiases visbiases;
        maxepoch=50;
        fprintf(1,'\nPretraining Layer 2 with RBM: %d-%d \n',numhid,numpen);
        batchdata=batchposhidprobs;
        numhid=numpen;
        restart=1;
        rbm;
        hidpen=vishid; penrecbiases=hidbiases; hidgenbiases=visbiases;
        save mnisthp hidpen penrecbiases hidgenbiases;
        fprintf(1,'\nPretraining Layer 3 with RBM: %d-%d \n',numpen,numpen2);
        batchdata=batchposhidprobs;
        numhid=numpen2;
        restart=1;
        rbm;
        hidpen2=vishid; penrecbiases2=hidbiases; hidgenbiases2=visbiases;
        save mnisthp2 hidpen2 penrecbiases2 hidgenbiases2;
        fprintf(1,'\nPretraining Layer 4 with RBM: %d-%d \n',numpen2,numopen);
        batchdata=batchposhidprobs;
        numhid=numopen;
        restart=1;
        rbmhidlinear;
        hidtop=vishid; toprecbiases=hidbiases; topgenbiases=visbiases;
        save mnistpo hidtop toprecbiases topgenbiases;
        backpropface;

    Thanks for your time.

    Read the article
