Search Results

Search found 18592 results on 744 pages for 'basic unix commands'.


  • SharePoint 2010 Hosting :: How to Enable Office Web Apps on SharePoint 2010

    - by mbridge
    Office Web Apps is the online version of Microsoft Office 2010. It is very helpful if you are going to use SharePoint 2010 in your organization, as it allows basic editing of Word documents without installing the Office suite on the client machine.

    Prerequisites:
    - Microsoft Windows Server 2008 R2
    - Microsoft SharePoint Server 2010 or Microsoft SharePoint Foundation 2010
    - Microsoft Office Web Apps

    If you have installed all the above products, just follow these steps:
    1. Go to Central Administration and click on Manage Service Applications.
    2. All the menus are now displayed in the Ribbon format first introduced in Office 2007. Click on New > Word Viewing Service (you can also choose PowerPoint or Excel; the steps are the same). This will open a pop-up window. [Image: Adding Services for Office Web Apps]
    3. Give it a proper name, which can include your company or project name.
    4. Under Application Pool, select: SharePoint Web Services Default.
    5. Next, keep the check box checked which says "Add this service application's proxy to the farm's default proxy list," then click OK. [Images: Adding Word Viewer as a Service Application; Office Web Apps as Services in SharePoint 2010]
    6. This will install all the Office Web Apps services required. You can see the name you gave in the step above.

    How to activate Office Web Apps in a site collection:
    1. Go to the site for which you want to activate this feature.
    2. Click on Site Actions > Site Settings > Site Collection Administration > Site Collection Features.
    3. Activate Office Web Apps. [Image: Activate Office Web Apps Feature in Site Collection]

    How to make sure Office Web Apps is working for your site collection:
    1. Locate any Office document you have and click on the smart menu which appears when you hover your mouse over it. Don't double-click, as this will launch the document in the Office client if it is installed (this behavior can be changed).
    2. If you see "View in Browser" or "Edit in Browser" as a menu item, your Office Web Apps is configured correctly. [Images: View/Edit Office Document in Browser]

    Other posts related to SharePoint 2010:
    1. How to Configure SharePoint Foundation 2010 for SharePoint Workspace 2010
    2. Integrating SharePoint 2010 and SQL 2008 R2
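
    For script-based activation of the site collection feature (step 3 above), a PowerShell sketch along these lines should work. The feature identity "OfficeWebApps" is an assumption here - confirm the actual name on your farm with Get-SPFeature first.

        # Sketch: activate the Office Web Apps feature on one site collection.
        # "OfficeWebApps" is an assumed feature identity; verify it before use.
        Add-PSSnapin Microsoft.SharePoint.PowerShell
        Get-SPFeature | Where-Object { $_.DisplayName -like "*WebApps*" }   # confirm the real name
        Enable-SPFeature -Identity "OfficeWebApps" -Url "http://sharepoint.example.com/sites/team"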


  • Perl syntax error [closed]

    - by Linny
    I am a beginner taking a Perl programming course. We are trying to write a basic program for counting nucleotides in a DNA string. I'm getting syntax errors on the lines that have a single bracket (lines 28 and 70 of my file) and don't know why. It also reports compilation errors, and I have no idea where to start figuring that out.

        # The purpose of this program is to count the number of nucleotides in a strand.
        # Each protein is counted separately
        #
        print "/n NOTE: Nucleotide counting /n";
        #
        use strict;    # enforce variable declarations
        use warnings;  # enable compiler warnings

        # Display number of A,a,T,t,G,g,C,c nucleotides in a word or sequence of letters.
        #
        my ($base) = '';            # an extracted letter from a string
        my ($nuceotide_count) = 0;  # the current position within the word
        my ($position) = 0;         # number of vowels in user-supplied word
        my ($word) = '';            # word to be processed
        my ($A_count) = 0;          # of A nucleotides in the user-supplied sequence
        my ($a_count) = 0;          # of A nucleotides in the user-supplied sequence
        my ($C_count) = 0;          # of C nucleotides in the user-supplied sequence
        my ($c_count) = 0;          # of C nucleotides in the user-supplied sequence
        my ($G_count) = 0;          # of G nucleotides in the user-supplied sequence
        my ($g_count) = 0;          # of G nucleotides in the user-supplied sequence
        my ($T_count) = 0;          # of T nucleotides in the user-supplied sequence
        my ($t_count) = 0;          # of T nucleotides in the user-supplied sequence

        word = (STDIN)
        for ($position = 0);($position
        if (($base eq 'a') or ($base eq 'A')) { ++$A_count; } # end if
        ++$position;
        if (($base eq 'T') or ($base eq 't')) { ++$T_count; } end if
        ++$position;
        if (($base eq 'G') or ($base eq 'g')) { ++$G_count; } # end if
        ++$position;
        if (($base eq 'C') or ($base eq 'c')) { ++$C_count; } # end if
        ++$position;
        } # end for

        # Display final results.
        #
        print " \n The number of A or a neucleotides is: $A_count";
        print " \n The number of T or t neucleotides is: $T_count";
        print " \n The number of G or g neucleotides is: $G_count";
        print " \n The number of C or c neucleotides is: $C_count";
        print " \n\n Program completed successfully. \n";
        exit;
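
    For reference, here is a minimal corrected sketch of what the program appears to be aiming at. The input line and for-loop header above arrived garbled, so this is a guess at the intent: read one sequence from standard input and examine it character by character.

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Read the sequence and strip the trailing newline.
        my $word = <STDIN>;
        chomp $word;

        my ($A_count, $T_count, $G_count, $C_count) = (0, 0, 0, 0);

        # Walk the string one character at a time.
        for (my $position = 0; $position < length($word); ++$position) {
            my $base = substr($word, $position, 1);
            if    ($base eq 'a' or $base eq 'A') { ++$A_count; }
            elsif ($base eq 't' or $base eq 'T') { ++$T_count; }
            elsif ($base eq 'g' or $base eq 'G') { ++$G_count; }
            elsif ($base eq 'c' or $base eq 'C') { ++$C_count; }
        }

        print "\n The number of A or a nucleotides is: $A_count";
        print "\n The number of T or t nucleotides is: $T_count";
        print "\n The number of G or g nucleotides is: $G_count";
        print "\n The number of C or c nucleotides is: $C_count";
        print "\n\n Program completed successfully.\n";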


  • Ubuntu Server and setting up two nic cards

    - by kmalik
    I have Ubuntu Server on a computer with a wireless and a hardwired NIC. The wireless card needs to get the internet and pass it to the Ubuntu server, as well as pass it along to the hardwired NIC to more computers. I am having issues getting the basic setup working, as I believe the route table is grabbing from the wrong NIC. The router is 192.168.1.0 and the server is set to 192.168.1.11 on the wireless card through DHCP. ETH0 (the wired NIC) is set up to be 10.10.10.0 (and the server is 10.10.10.1).

    I am not a Linux or networking guru, but basically I am trying to have internet come from a guest network (192.168.1.0) to give internet to the Ubuntu server; then the Ubuntu server will also (a) have the wired NIC serve DHCP addresses to other computers via a switch, or a router that acts as a switch, handing out 10.10.10.0 addresses, and (b) ideally pass along internet access as well. But really, at this point my hope is to at least get the internet working on the server and have DHCP hand out leases correctly. At the moment the specific issue is getting Ubuntu Server to connect to the internet with both NICs up and running correctly. Any help would be appreciated!

    The route table is as follows:

        Destination   Gateway     Genmask        Flags  Metric  Iface
        0.0.0.0       10.10.10.1  0.0.0.0        UG     100     eth0
        10.10.10.0    0.0.0.0     255.255.255.0  U      0       eth0
        169.254.0.0   0.0.0.0     255.255.0.0    U      0       eth0
        192.168.1.0   0.0.0.0     255.255.255.0  U      0       eth1

    My interfaces file is set up as follows:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 10.10.10.1
            netmask 255.255.255.0
            network 10.10.10.0
            broadcast 10.10.10.255
            gateway 10.10.10.1
            domain-name-servers 192.168.1.0

        auto eth1
        iface eth1 inet dhcp
            netmask 255.255.255.0
            gateway 192.168.1.0
            wpa-driver wext
            wpa-ssid "ssid_name"
            wpa-ap-scan 1
            wpa-proto wpa
            wpa-pairwise ccmp
            wpa-group ccmp
            wpa-key-mgmt wpa-psk
            wpa-psk "HASH"

    My dhcpd.conf (as there is a DHCP/name server on here) is as follows:

        ddns-update-style none
        default-lease-time 600
        max-lease-time 7200
        authoritative
        option domain-name "Kamron's Network"
        option subnet-mask 255.255.255.0
        option broadcast-address 10.10.10.255
        option routers 192.168.1.0
        option domain-name-server 192.168.1.0 98.223.128.213
        ooption subnet 10.10.10.0 netmask 255.255.255.0 {
            range 10.10.10.10 10.10.10.99
        }
        log-facility local7


  • #SSAS #Tabular Workshop and Community Events in Netherlands and Denmark

    - by Marco Russo (SQLBI)
    Next week I will finally start the roadshow of the SSAS Tabular Workshop, a two-day seminar about the new BISM Tabular model for Analysis Services introduced in SQL Server 2012. During these roadshows, we always try to arrange some speeches at local community events in the evening - the Copenhagen one is already defined, while we have a logistics issue in Amsterdam that we're trying to solve. Here is the timetable:

    Netherlands
    - SSAS Workshop in Amsterdam, NL - April 16-17, 2012
    - Two-day seminar; Alberto and I will be the trainers for this event (register here)
    - We're trying to arrange a community event, but we still don't have a confirmation - stay tuned

    Denmark
    - SSAS Workshop in Copenhagen, DK - April 26-27, 2012
    - Two-day seminar; Alberto and I will be the trainers for this event (register here)
    - Community event on April 26, 2012, in Hellerup, at the Microsoft venue. All details available here: http://msbip.dk/events/26/msbip-mode-nr-5/
    - People from Sweden are welcome! Just register with this private group on LinkedIn to announce your presence, so we'll know how many people will attend

    In the community events we'll deliver two speeches - here are the descriptions:

    Inside xVelocity (VertiPaq)
    PowerPivot and BISM Tabular models in Analysis Services share a great columnar-based database engine called the xVelocity in-memory analytics engine (VertiPaq). If you want to improve performance and optimize the memory used, you have to understand some basic principles about how this engine works, how data is compressed, and how you can design a data model for better optimization. Prepare yourself to change your mind: xVelocity optimization techniques might seem counterintuitive and are absolutely different from OLAP and SQL ones!

    Choosing between Tabular and Multidimensional
    You have a new project and you have to make an important decision upfront: should you use Tabular or Multidimensional? It is not easy to answer, because sometimes there is a clear choice, but most of the time both decisions might be correct, at least at the beginning. In this session we'll help you make an informed decision, correctly evaluating the pros and cons of each according to common scenarios, considering both the short-term and long-term consequences of your choice.

    I hope to meet many people at these first dates. We have many other events coming in May and June, including an online event (for US time zones), and you can also attend our pre-conference day at TechEd US in Orlando (PRC06) or TechEd Europe in Amsterdam. I'll be a good customer for airline companies in the next three months! I'm just sorry that I haven't had time to write other articles in the last month, but I'm accumulating material that I will need to write down during some flight - stay tuned...


  • In an Entity-Component-System Engine, How do I deal with groups of dependent entities?

    - by John Daniels
    After going over a few game design patterns, I have settled on Entity-Component-System (ES system) for my game engine. I've been reading articles (mainly T=Machine) and reviewing some source code, and I think I've got enough to get started. There is just one basic idea I am struggling with: how do I deal with groups of entities that are dependent on each other?

    Let me use an example. Assume I am making a standard overhead shooter (think Jamestown) and I want to construct a "boss entity" with multiple distinct but connected parts. The breakdown might look something like this:

    - Ship body: Movement, Rendering
    - Cannon: Position (locked relative to the ship body), Tracking/Fire at hero, Taking damage until disabled
    - Core: Position (locked relative to the ship body), Tracking/Fire at hero, Taking damage until disabled, Disabling (er... destroying) all other entities in the ship group

    My goal would be something that is identified (and manipulated) as a distinct game element, without my having to rewrite subsystems from the ground up every time I want to build a new aggregate element. How do I implement this kind of design in an ES system?

    - Do I implement some kind of parent-child entity relationship (entities can have children)? This seems to contradict the methodology that entities are just empty containers, and makes it feel more like OOP.
    - Do I implement them as separate entities, with some kind of connecting component (BossComponent) and a related system (BossSubSystem)? I can't help but think this will be hard to implement, since how components communicate seems to be a big bear trap.
    - Do I implement them as one entity, with a collection of components (ShipComponent, CannonComponents, CoreComponent)? This one seems to veer away from the intent of ES systems (the components here feel too much like heavyweight entities), but I'm new to this, so I figured I would put it out there.
    - Do I implement them as something else I haven't mentioned?

    I know that this can be implemented very easily in OOP, but choosing ES over OOP is a decision I will stick with. If I need to break with pure ES theory to implement this design I will (it's not like I haven't had to compromise pure design before), but I would prefer to do that for performance reasons rather than start with a bad design. For extra credit, imagine the same design, but each of the "boss entities" is connected to a larger "BigBoss entity" made of a main body, a main core, and three "boss entities". That would let me see a solution for at least three levels (grandparent-parent-child), which should be more than enough for me. Links to articles or example code would be appreciated. Thanks for your time.
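
    For what it's worth, here is a minimal sketch (in Java, with invented names) of a common compromise close to the second option: entities stay plain component containers, the parent-child link lives in a small component, and one system keeps children positioned relative to their parents. Deeper chains (the BigBoss case) work the same way through repeated hops, though children can trail a frame behind unless parents are updated first.

        // Sketch: attachment modelled as data on a component, with a system
        // that locks each child's position relative to its parent entity.
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        class Entity {
            final int id;
            final Map<Class<?>, Object> components = new HashMap<>();
            Entity(int id) { this.id = id; }
            <T> void add(T c) { components.put(c.getClass(), c); }
            <T> T get(Class<T> type) { return type.cast(components.get(type)); }
        }

        class Position { float x, y; }

        // The "connecting component": a child stores its parent and a fixed offset.
        class AttachedTo {
            final Entity parent;
            final float offsetX, offsetY;
            AttachedTo(Entity parent, float ox, float oy) {
                this.parent = parent; this.offsetX = ox; this.offsetY = oy;
            }
        }

        class AttachmentSystem {
            void update(List<Entity> entities) {
                for (Entity e : entities) {
                    AttachedTo link = e.get(AttachedTo.class);
                    Position pos = e.get(Position.class);
                    if (link == null || pos == null) continue;
                    Position parentPos = link.parent.get(Position.class);
                    pos.x = parentPos.x + link.offsetX;  // lock position relative to parent
                    pos.y = parentPos.y + link.offsetY;
                }
            }
        }

    The trade-off versus a real parent-child entity hierarchy is that systems stay dumb and entities stay plain; the boss "group" is then simply the set of entities whose AttachedTo chain ends at the ship body.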


  • Subterranean IL: Exception handling 1

    - by Simon Cooper
    Today, I'll be starting a look at the Structured Exception Handling mechanism within the CLR. Exception handling is quite a complicated business and, as a result, the rules governing exception handling clauses in IL are quite strict; you need to be careful when writing exception clauses in IL.

    Exception handlers

    Exception handlers are specified using a .try clause within a method definition:

        .try <TryStartLabel> to <TryEndLabel>
            <HandlerType> handler <HandlerStartLabel> to <HandlerEndLabel>

    As an example, a basic try/catch block would be specified like so:

        TryBlockStart:
            // ...
            leave.s CatchBlockEnd
        TryBlockEnd:
        CatchBlockStart:
            // at the start of a catch block, the exception thrown is on the stack
            callvirt instance string [mscorlib]System.Object::ToString()
            call void [mscorlib]System.Console::WriteLine(string)
            leave.s CatchBlockEnd
        CatchBlockEnd:
            // method code continues...

        .try TryBlockStart to TryBlockEnd
            catch [mscorlib]System.Exception handler CatchBlockStart to CatchBlockEnd

    There are four different types of handler that can be specified:

    - catch <TypeToken>: This is the standard exception catch clause; you specify the object type that you want to catch (for example, [mscorlib]System.ArgumentException). Any object can be thrown as an exception, although Microsoft recommends that only classes derived from System.Exception are thrown as exceptions.
    - filter <FilterLabel>: A filter block allows you to provide custom logic to determine if a handler block should be run. This functionality is exposed in VB, but not in C#.
    - finally: A finally block executes when the try block exits, regardless of whether an exception was thrown or not.
    - fault: This is similar to a finally block, but a fault block executes only if an exception was thrown. This is not exposed in VB or C#.

    You can specify multiple catch or filter handling blocks in each .try, but fault and finally handlers must have their own .try clause. We'll look into why this is in later posts.

    Scoped exception handlers

    The .try syntax is quite tricky to use; it requires multiple labels, and you've got to be careful to keep the different exception handling sections separate. However, starting from .NET 2, IL allows you to use scope blocks to specify exception handlers instead. Using this syntax, the example above can be written like so:

        .try {
            // ...
            leave.s EndSEH
        }
        catch [mscorlib]System.Exception {
            callvirt instance string [mscorlib]System.Object::ToString()
            call void [mscorlib]System.Console::WriteLine(string)
            leave.s EndSEH
        }
        EndSEH:
        // method code continues...

    As you can see, this is much easier to write (and read!) than a stand-alone .try clause. Next time, I'll be looking at some of the restrictions imposed by SEH on control flow, and how the C# compiler generates exception handling clauses.


  • Have you used the ExecutionValue and ExecValueVariable properties?

    The ExecutionValue property and its friend ExecValueVariable are a much undervalued feature of SSIS, and many people I talk to are not even aware of their existence, so I thought I'd try and raise their profile a bit.

    The ExecutionValue property is defined on the base object Task, so all tasks have it available, but it is up to the task developer to do something useful with it. The basic idea behind it is that it allows the task to return something useful and interesting about what it has performed, in addition to the standard success or failure result. The best example is perhaps the Execute SQL Task, which uses the ExecutionValue property to return the number of rows affected by the SQL statement(s). This is a very useful feature, something people often want to capture into a variable, and they start using the result set options to do it. Unfortunately we cannot read the value of a task property at runtime from within a SSIS package, so the ExecutionValue property on its own is a bit of a let-down - but enter the ExecValueVariable and we have the perfect marriage.

    The ExecValueVariable is another property exposed through the task (TaskHost), which lets us select a SSIS package variable. What happens now is that when the task sets the ExecutionValue, the interesting value is copied into the variable we set on the ExecValueVariable property, and a variable is something we can access and do something with. So, put simply: if the ExecutionValue property value is of interest, make sure you create yourself a package variable and set its name as the ExecValueVariable. Have a look at the three-step guide below:

    1. Configure your task as normal - for example the Execute SQL Task, which here calls a stored procedure to do some updates.
    2. Create a variable of a suitable type to match the ExecutionValue; here an integer is used to match the result we want to capture, the number of rows.
    3. Set the ExecValueVariable for the task - just select the variable we created in step 2. You need to do this in the Properties grid for the task (shortcut: select the task and press F4).

    Now when we execute the sample task above, our variable UpdateQueueRowCount will get the number of rows we updated in our Execute SQL Task.

    I've tried to collate a list of tasks that return something useful via the ExecutionValue and ExecValueVariable mechanism, but the documentation isn't always great.

        Task                                    ExecutionValue Description
        Execute SQL Task                        Returns the number of rows affected by the SQL statement or statements.
        File System Task                        Returns the number of successful operations performed by the task.
        File Watcher Task                       Returns the full path of the file found.
        Transfer Error Messages Task            Returns the number of error messages that have been transferred.
        Transfer Jobs Task                      Returns the number of jobs that are transferred.
        Transfer Logins Task                    Returns the number of logins transferred.
        Transfer Master Stored Procedures Task  Returns the number of stored procedures transferred.
        Transfer SQL Server Objects Task        Returns the number of objects transferred.
        WMI Data Reader Task                    Returns an object that contains the results of the task. Not exactly clear, but I assume it depends on the WMI query used.
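
    If you later want the captured value from inside a Script Task, reading the variable back is a one-liner. A small sketch (the variable name comes from the example above; remember to add it to the Script Task's ReadOnlyVariables list):

        // Inside an SSIS Script Task, with User::UpdateQueueRowCount listed in ReadOnlyVariables.
        int rowsUpdated = (int)Dts.Variables["User::UpdateQueueRowCount"].Value;
        bool fireAgain = true;
        // Log the captured row count to the package's information events.
        Dts.Events.FireInformation(0, "RowCount", "Rows updated: " + rowsUpdated, string.Empty, 0, ref fireAgain);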


  • Oracle GoldenGate: Knowledge Document Series Post #2

    - by Doug Reid
    For our second post in this series the team would like to highlight the knowledge document "How-To: Oracle GoldenGate - Heartbeat Process to Monitor Lag and Performance". This knowledge document outlines a procedure to reliably measure lag between source and target systems through the use of 'heartbeat' tables.

    The basic idea is to have a table on the source system that gets updated at a predetermined interval. In your capture processes you would capture the update from the heartbeat table. Using tokens, you would add some additional information to the heartbeat record to be able to tell which Extract process was capturing the update. This additional information would be used downstream to calculate the real lag time between the source and target systems for a given Extract, and by checking the last update time on the heartbeat at the target you could also determine whether data has stopped flowing between the source and target. Click here to view the document.
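
    As a rough illustration of the token mechanism described above, an Extract TABLE clause might look like the sketch below. The table and token names are hypothetical; the knowledge document itself has the authoritative procedure.

        -- Sketch of an Extract parameter entry for a heartbeat table.
        -- @GETENV reads GoldenGate environment values at capture time,
        -- so the record carries the capturing group's name downstream.
        TABLE src.HEARTBEAT, TOKENS (
            capgroup = @GETENV ('GGENVIRONMENT', 'GROUPNAME'),
            captime  = @GETENV ('JULIANTIMESTAMP')
        );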


  • What is the difference between development and R&D?

    - by MainMa
    I was asked by a colleague to clearly explain the difference between ordinary development and research and development (R&D), and was unable to do it. After reading Wikipedia, I still don't have a precise answer. According to Wikipedia (slightly modified), there are two primary models: in one model, the primary function is to develop new products; in the other model, the primary function is to discover and create new knowledge about scientific and technological topics for the purpose of uncovering and enabling the development of valuable new products, processes, and services.

    The first model is confusing. Does it mean that development (not R&D) consists exclusively of adding new features to a product, solving bugs, and doing maintenance? What if something which was previously developed as a new feature becomes a separate product? The second model is less confusing, but still: how do you qualify whether something is new knowledge or existing knowledge which is just rediscovered?

    Later, Wikipedia adds that ordinary development is different from R&D because of its "nearly immediate profit or immediate improvement". That's still not clear enough. How do you qualify "nearly immediate profit"? What if a task has an immediate profit but requires heavy research? Or if it is basic but has uncertain profit, like the enforcement of a common style over the codebase? For example, does it belong to development or to R&D to:

    - Develop an engine which abstracts access to the database, enormously simplifying and shortening the code of other applications (existing ones, or ones which will be written in the future) which need to access the database?
    - Establish a new service-oriented architecture for the entire organization of company resources, in order to move from a bunch of separate and autonomous applications to a set of well-organized, interconnected web services, like what is used by Amazon?
    - Design a new communication protocol to allow faster replication of data between two data centers of the company?
    - Conceive a new type of software testing while working on a specific product, knowing that this type of testing will improve/simplify the testing process?
    - Prove that functional programming is more appropriate than OOP for a specific application, based on evidence, logic, and previous experience?
    - Enhance an existing application by adding gestures on touch screens, after doing studies and testing that show those gestures improve the productivity of the users by a ratio of at least 1.4 for a precise set of tasks?
    - Find a way to strongly enhance the power usage effectiveness (PUE) of a data center?
    - Create a domain-specific language (DSL)?

    In short, how can I determine whether I'm doing R&D while working on something?


  • Set iPhone Style Location Based Alerts On Your Android Device With Google Now

    - by Gopinath
    Location-based alerts on the iPhone are very useful: you can set an alert to pop up as soon as you reach a specific location, like "Pick up milk and eggs" when I'm near a grocery store. This feature was missing in Android for a long time, but last week at the Google I/O conference, Google released an update to Google Now which supports location-based alerts.

    To set up a location-based alert:
    1. Launch Google Now.
    2. Type or say "add reminder".
    3. By default it shows the time-based alert interface; switch it to location-based by touching the Location icon.
    4. Set the reminder text, choose a location, and touch "Set reminder".
    5. Your alert is now set, and as soon as you are close to the specified location, you'll see an alert on your device.

    This is a nice feature and I've been using it quite often for the past couple of days. There are a couple of things missing from the current version of Google Now's location-based alerts: recurring alerts, and the ability to set alerts on leaving a specific location. It is not possible to make a location-based alert recur; you will be alerted only once, as soon as you reach the location, and it is not possible to repeat the alert the next time you visit the location. Let's say you want to be reminded to say hi to a friend's parents whenever you are travelling close by their home - it does not work. The second missing feature is something basic, and somehow Google did not incorporate it in their first iteration. Let's say you are at the office now and you want to set up an alert to pick up flowers when you leave the office. Sounds like a simple use case for location-based alerts, right? But there is no way to set this type of alert: Google Now alerts you as soon as you reach a location, but not when you leave one.

    Do you have an Android device that supports Google Now? If so, what are your thoughts on location-based alerts?


  • Cloud service and IM protocol advice, for a backend to group chat mobile app

    - by Jonathan
    Overview
    I'm going to develop an app on Android and iOS. It will allow users to set up group 'chat rooms' and talk on chat rooms set up by other users. The service needs to be highly scalable, such that it could accommodate a massive increase in users overnight (we can only dream).

    Chat requirements
    The chat protocol used should be flexible: it should allow me to determine who can view/post on 'chat rooms' based on certain other factors, as determined by the first poster/creator of the particular 'chat room'. It should also allow users to simply install the app and begin using the service after providing only a simple nickname (which could be changed later).

    Chat protocol plans
    Having looked around, I think the XMPP protocol is the best candidate. In particular, the Multi-User Chat extension looks like what I'll need. Would this be most suited to my requirements, or do you know of another potential solution?

    Cloud service
    I have been deciding between Amazon Web Services, Google App Engine, and Windows Azure. I'm coming to the conclusion that Azure will be best, as it is easier to manage than AWS (ease of scalability will be a key factor in the design), I think it may be less restricted than GAE, plus Azure will soon have toolkits to allow easy interfacing with both Android and iOS phones. Is this the decision you would have made, or would you recommend/look into other cloud services?

    General project philosophy
    I have only recently started looking into this project's feasibility and am no expert on any of its aspects. So wherever possible I will leave the actual implementations to experts, i.e. choosing a higher-level cloud service, using a well-documented plugin of a proven, reliable group chat protocol, etc.

    My background
    I have some programming knowledge from a computer science degree. The main languages I've used have been Java and Python, but I don't want this to affect design decisions for the project. The most appropriate languages for the task should be used, i.e. I don't mind learning a lot of new skills (my current programming level is relatively basic anyway).

    Thank you
    Thanks for reading; any advice you have about any aspect would be greatly appreciated :-)


  • Is There a Real Advantage to Generic Repository?

    - by Sam
    I was reading through some articles on the advantages of creating generic repositories for a new app (example). The idea seems nice because it lets me use the same repository to do several things for several different entity types at once:

        IRepository repo = new EfRepository(); // would normally pass through IoC into constructor
        var c1 = new Country() { Name = "United States", CountryCode = "US" };
        var c2 = new Country() { Name = "Canada", CountryCode = "CA" };
        var c3 = new Country() { Name = "Mexico", CountryCode = "MX" };
        var p1 = new Province() { Country = c1, Name = "Alabama", Abbreviation = "AL" };
        var p2 = new Province() { Country = c1, Name = "Alaska", Abbreviation = "AK" };
        var p3 = new Province() { Country = c2, Name = "Alberta", Abbreviation = "AB" };
        repo.Add<Country>(c1);
        repo.Add<Country>(c2);
        repo.Add<Country>(c3);
        repo.Add<Province>(p1);
        repo.Add<Province>(p2);
        repo.Add<Province>(p3);
        repo.Save();

    However, the rest of the implementation of the repository has a heavy reliance on LINQ:

        IQueryable<T> Query();
        IList<T> Find(Expression<Func<T, bool>> predicate);
        T Get(Expression<Func<T, bool>> predicate);
        T First(Expression<Func<T, bool>> predicate);
        // ... and so on

    This repository pattern worked fantastically for Entity Framework, and pretty much offered a one-to-one mapping of the methods available on DbContext/DbSet. But given the slow uptake of LINQ by data access technologies outside of Entity Framework, what advantage does this provide over working directly with the DbContext?

    I attempted to write a PetaPoco version of the repository, but PetaPoco doesn't support LINQ expressions, which makes creating a generic IRepository interface pretty much useless unless you only use it for the basic GetAll, GetById, Add, Update, Delete, and Save methods and utilize it as a base class. Then you have to create specific repositories with specialized methods to handle all the "where" clauses that I could previously pass in as a predicate.

    Is the generic repository pattern useful for anything outside of Entity Framework? If not, why would someone use it at all instead of working directly with Entity Framework?

    Edit: The original link doesn't reflect the pattern I was using in my sample code. Here is an (updated link).


  • ArchBeat Facebook Friday: Top 10 Shared Links - May 23-29, 2014

    - by OTN ArchBeat
    Among the 5,144 fans of the OTN ArchBeat Facebook Page, the following top 10 items were the most popular over the last seven days, May 23-29, 2014.

    1. GlassFish/Java EE Community Open Forum Today! | Reza Rahman - Have questions about GlassFish? Java EE/GlassFish evangelist Reza Rahman has answers, and you can pick his brain tomorrow during an online forum organized by the London GlassFish User Group and C2B2. The event is free, but you must register in order to participate. Click the link for more information.
    2. Twitter Tuesday - Top 10 @ArchBeat Tweets - May 20-26, 2014 - The top 10 @OTNArchBeat tweets for the week of May 20-26, 2014. Topics covered include ADF, Cloud, GoldenGate, KScope14, OBIEE, ODI, WebLogic, WebCenter, and more.
    3. FrameworkFolders Support has come to Oracle WebCenter Portal | JayJay Zheng - Interested in working with Framework Folders in Oracle WebCenter Portal? Oracle ACE JayJay Zheng reviews the essentials.
    4. Video: Programming Best Practices - ADF Business Components | Frank Nimphius - Frank Nimphius discusses best practices and recommendations for ADF Business Components in the latest video from ADF Architecture TV.
    5. Video: Kscope 2014 Preview: Data Modeling and Moving Meditation with Kent Graziano - For your mind and your body! Oracle ACE Director Kent Graziano previews his Kscope 2014 data modeling presentations and the early morning Chi Gung sessions he will once again lead for Kscope attendees.
    6. OAG and OES Integration for Web API Security: skin and guts | Andre Correa - A-Team architect Andre Correa's post examines a strategy for web API security that uses OAG (Oracle API Gateway) and OES (Oracle Entitlements Server).
    7. Getting Started with Coherence*Web in WebLogic Server 12.1.2 | Tim Middleton - Solution architect Tim Middleton shows you how to configure Coherence*Web in WebLogic Server 12.1.2 and deploy a basic web application.
    8. SOA and Business Processes: You are the Process! - Part of the 13-part "Industrial SOA" article series, this article looks at best practices for modeling and managing effective business processes.
    9. Authentication in Oracle Identity Federation / IdP | Damien Carru - Damien Carru discusses authentication when OIF acts as an IdP, and how the server can be configured to use specific OAM Authentication Schemes to challenge the user.
    10. Caveats on Using WebLogic Server with JDK7 | JayJay Zheng - Quick tech tips from Oracle ACE JayJay Zheng.


  • How to execute a Ruby file in Java, capable of calling functions from the Java program and receiving primitive-type results?

    - by Omega
    I do not fully understand what I am asking (lol!) - well, in the sense of whether it is even possible, that is. If it isn't, sorry.

    Suppose I have a Java program. It has a Main and a JavaCalculator class. JavaCalculator has some basic functions like:

        public int sum(int a, int b) {
            return a + b;
        }

    Now suppose I have a Ruby file called MyProgram.rb. MyProgram.rb may contain anything you could expect from a Ruby program. Let us assume it contains the following:

        class RubyMain
            def initialize
                print "The sum of 5 with 3 is #{sum(5,3)}"
            end
            def sum(a,b)
                # <---------- Something will happen here
            end
        end
        rubyMain = RubyMain.new

    Good. Now then, you might already suspect what I want to do:

    1. I want to run my Java program.
    2. I want it to execute the Ruby file MyProgram.rb.
    3. When the Ruby program executes, it will create an instance of JavaCalculator, execute the sum function it has, get the value, and then print it.
    4. The Ruby file has been executed successfully. The Java program closes.

    Note: the "create an instance of JavaCalculator" part is not entirely necessary. I would be satisfied with just running a sum function from, say, the Main class.

    My question: is such a thing possible? Can I run a Java program which internally executes a Ruby file which is capable of commanding the Java program to do certain things and get results? In the above example, the Ruby file asks the Java program to do a sum for it and give back the result. This may sound ridiculous; I am new to this kind of thing (if it is possible, that is).

    WHY AM I ASKING THIS? I have a Java program, which is some kind of game engine. However, my target audience is a bunch of Ruby coders, and I don't want to make them learn Java at all. So I figured that perhaps the Java program could simply offer the functionality (capacity to create windows, display sprites, play sounds...) and then my audience can simply code the logic in Ruby, which basically just asks my Java engine to do things like displaying sprites or playing sounds. That's when I thought about asking this.
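
    This is exactly what JRuby's embedding support is for. Below is a minimal sketch using the standard javax.script (JSR-223) API; it assumes the JRuby jar is on the classpath and relies on JRuby's default behavior of exposing bound Java objects to Ruby as global variables:

        // Minimal sketch: embedding JRuby in a Java host via the standard
        // javax.script (JSR-223) API. Requires the JRuby jar on the classpath.
        import javax.script.ScriptEngine;
        import javax.script.ScriptEngineManager;
        import javax.script.ScriptException;

        public class Main {
            public static class JavaCalculator {
                public int sum(int a, int b) { return a + b; }
            }

            public static void main(String[] args) throws ScriptException {
                ScriptEngine ruby = new ScriptEngineManager().getEngineByName("jruby");
                // Objects placed in the bindings appear to Ruby as global variables.
                ruby.put("calculator", new JavaCalculator());
                // The Ruby side calls back into the Java object and returns a result.
                Object result = ruby.eval("$calculator.sum(5, 3)");
                System.out.println("Ruby says: " + result); // Ruby says: 8
            }
        }

    To run a whole MyProgram.rb rather than an inline snippet, pass a Reader for the file to eval(); the Ruby code can then use $calculator (or whatever names you bind) inside its own classes.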


  • efficient collision detection - tile based html5/javascript game

    - by Tom Burman
    I'm building a basic RPG game and am onto collisions/pickups etc. now. It's tile-based and I'm using HTML5 and JavaScript, with a 2D array for the tile map. I'm currently using a switch statement on whatever key has been pressed to move the player. Inside the switch statement I have if statements to stop the player going off the edge of the map and viewport, and also so that if the player is about to land on a tile with tile ID 3, the player stops. Here is the handler:

        canvas.addEventListener('keydown', function(e) {
            console.log(e);
            var key = null;
            switch (e.which) {
            case 37: // Left
                if (playerX > 0) { playerX--; }
                if (board[playerX][playerY] == 3) { playerX++; }
                break;
            case 38: // Up
                if (playerY > 0) playerY--;
                if (board[playerX][playerY] == 3) { playerY++; }
                break;
            case 39: // Right
                if (playerX < worldWidth) { playerX++; }
                if (board[playerX][playerY] == 3) { playerX--; }
                break;
            case 40: // Down
                if (playerY < worldHeight) playerY++;
                if (board[playerX][playerY] == 3) { playerY--; }
                break;
            }
            viewX = playerX - Math.floor(0.5 * viewWidth);
            if (viewX < 0) viewX = 0;
            if (viewX + viewWidth > worldWidth) viewX = worldWidth - viewWidth;
            viewY = playerY - Math.floor(0.5 * viewHeight);
            if (viewY < 0) viewY = 0;
            if (viewY + viewHeight > worldHeight) viewY = worldHeight - viewHeight;
        }, false);

    My question is: is there a more efficient way of handling collisions than loads of if statements for each key? The reason I ask is that I plan on having many items that the player will need to be able to pick up or not walk through, like walls and cliffs. Thanks for your time and help. Tom
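
    One common tidy-up, sketched below against the same variables: drive all four keys from a single direction table, compute the candidate tile once, and check it against data (a set of solid tile IDs, a map of pickup IDs) instead of per-key if blocks. The pickup IDs and the "0 means floor" convention are assumptions here, and worldWidth/worldHeight are assumed to be tile counts; the viewport recentering from the original handler would follow unchanged.

        // Sketch: data-driven movement/collision for the same playerX/playerY/board setup.
        var moves = { 37: [-1, 0], 38: [0, -1], 39: [1, 0], 40: [0, 1] }; // left, up, right, down
        var solidTiles  = { 3: true };      // tile IDs the player cannot enter
        var pickupTiles = { 5: 'potion' };  // hypothetical pickup tile IDs

        canvas.addEventListener('keydown', function (e) {
            var move = moves[e.which];
            if (!move) return;                      // not a movement key
            var newX = playerX + move[0];
            var newY = playerY + move[1];
            // Stay on the map and out of solid tiles.
            if (newX < 0 || newX >= worldWidth || newY < 0 || newY >= worldHeight) return;
            var tile = board[newX][newY];
            if (solidTiles[tile]) return;
            if (pickupTiles[tile]) {
                // collect the item, then clear the tile (0 assumed to be plain floor)
                board[newX][newY] = 0;
            }
            playerX = newX;
            playerY = newY;
            // ...viewport recentering as in the original handler...
        }, false);

    Adding a new wall type or pickup then means editing the two lookup tables rather than touching four case branches.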


  • As a tooling/automation developer, can I be making better use of OOP?

    - by Tom Pickles
    My time as a developer (~8 years) has been spent creating tooling/automation of one sort or another. The tools I develop usually interface with one or more APIs. These APIs could be Win32, WMI, VMware, a help-desk application, LDAP - you get the picture. The apps I develop could be just to pull back data and store/report it, or to provision groups of VMs to create live-like mock environments, update a trouble ticket, etc.

    I've been developing in .NET, and I'm currently reading into design patterns and trying to think about how I can improve my skills to make better use of, and increase my understanding of, OOP. For example, I've never used an interface of my own making in anger (which is probably not a good thing), because I honestly cannot identify where using one would benefit me later on when modifying my code. My classes are usually very specific, and I don't create similar classes with similar properties/methods which could use a common interface (like perhaps a car dealership or shop application might).

    I generally use an n-tier approach to my apps: a presentation layer and a business logic/manager layer which interfaces with layer(s) that make calls to the APIs I'm working with. My business entities are always just method-less container objects, which I populate with data and pass back and forth between my API-interfacing layer, using static methods to proxy/validate between the front and the back end.

    My code, by the nature of my work, has few common components, at least from what I can see. So I'm struggling to see how I can make better use of OOP design and perhaps reusable patterns. Am I right to be concerned that I could be being smarter about how I work, or is what I'm doing now right for my line of work? Or am I missing something fundamental in OOP?

    EDIT: Here is some basic code to show how my manager and API-facing layers work. I use static classes, as they do not persist any data, only facilitate moving it between layers.

        public static class MgrClass
        {
            public static bool PowerOnVM(string VMName)
            {
                // Perform logic to validate or apply biz logic
                // call APIClass to do the work
                return APIClass.PowerOnVM(VMName);
            }
        }

        public static class APIClass
        {
            public static bool PowerOnVM(string VMName)
            {
                // Calls to 3rd party API to power on a virtual machine
                // returns true or false if was successful for example
            }
        }
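
    One concrete place an interface could earn its keep in exactly this layering - sketched below with invented names - is between the manager and API layers: if the manager takes the interface in its constructor, it can be unit-tested against a fake without VMware or WMI on hand, and a second backend becomes a new class rather than a rewrite.

        // Sketch: the same two layers, with the API facade behind an interface
        // so the manager can be tested against a fake implementation.
        public interface IVirtualMachineApi
        {
            bool PowerOnVM(string vmName);
        }

        public class VMwareApi : IVirtualMachineApi
        {
            public bool PowerOnVM(string vmName)
            {
                // real call to the third-party API goes here
                return true;
            }
        }

        public class VMManager
        {
            private readonly IVirtualMachineApi _api;
            public VMManager(IVirtualMachineApi api) { _api = api; }

            public bool PowerOnVM(string vmName)
            {
                // validation / business rules, then delegate to whichever backend was injected
                if (string.IsNullOrEmpty(vmName)) return false;
                return _api.PowerOnVM(vmName);
            }
        }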


  • The Solution

    - by Patrick Liekhus
    So I recently attended a class about time management, as well as read the book "The Seven Habits of Highly Effective People" by Stephen Covey. Both have been instrumental in helping me get my priorities aligned as well as keeping me focused.

    The reason I bring this up is that it gave me a great idea for a small application with which to create a great technical stack solution that would be easy to demo and explain. Therefore, the project from this point forward will be the Liekhus.TimeTracker application, which will bring some of the time management skills that I have acquired into a technical implementation. The idea is rather simple, but it leverages some of the basic principles of Covey along with some of the worksheets that I garnered from class. The basics are as such: 1) a plan is a must-have, and 2) write it down! A plan not written down is just an idea. How many times have you had an idea that didn't materialize? Exactly. Hence why I am writing it all down now!

    The worksheet consists of a few simple columns that I will outline below, as well as some modifications that I made according to the Covey habits. The worksheet looks like the following:

        Status   Issue   Area   CQ        Notes
        P F L                   1 2 3 4
        P F L                   1 2 3 4
        P F L                   1 2 3 4
        (... further identical rows ...)

    The idea is really simple and straightforward: you write down all your tasks and keep track of them along the way. The status stands for (P)ending, (F)inished or (L)ater. You write a quick title for the issue and select the CQ (Covey Quadrant) in which the issue occurs. The notes section is for things that happen while you are working through the issue. And last, but not least, is the Area column that I added as a way to identify the role or area of your life that this task falls within, based upon Covey's teachings.

    The second part of this application is a simple phone log that allows you to track your phone conversations throughout the day. All of this is currently done on a sheet of paper, but being involved in technology, I want it to have bells and whistles. Therefore, this is my simple idea for a project that will allow me to test my theories about coding and implementations. Stay tuned, as the next session will be fleshing out the concept and coming up with user stories to begin the SCRUM process. Thanks


  • Failed to install GRUB on a separate '/boot' partition on a fake RAID 0 (12.04LTS)

    - by gerben
    I'm having some problems getting GRUB configured for Ubuntu 12.04 LTS on a fake RAID 0. I can get either the GRUB rescue prompt at startup or just a GRUB prompt, but I cannot boot to Ubuntu manually. How can I configure GRUB to actually use the Ubuntu install? The steps taken:

    Installing Ubuntu on fake RAID
    The Ubuntu installer cannot install Ubuntu on the drive. After defining the partitions to use, it fails with "Error: ???"; pressing OK terminates the installer. Therefore, I used GParted to configure the partitions on /dev/mapper/sil_agadaccfacbg (the RAID configuration):

        /dev/mapper/sil_agadaccfacbg1: ext2, 200MiB (with 'boot' flag)
        /dev/mapper/sil_agadaccfacbg3: ext2, 67.75GiB (which will contain Ubuntu)
        /dev/mapper/sil_agadaccfacbg2: extended, 1.00GiB (for swap)
            contains /dev/mapper/sil_agadaccfacbg5: unknown

    Because of the fake RAID, I mounted the destination partitions before running the Ubuntu installer:

        > mkdir /mnt/boot
        > sudo mount /dev/mapper/sil_agadaccfacbg1 /mnt/boot
        > mkdir /mnt/ubuntu
        > sudo mount /dev/mapper/sil_agadaccfacbg3 /mnt/ubuntu

    In the installer I chose the following partition usage:

        /dev/mapper/sil_agadaccfacbg1  ext2, mount at /boot (209MB)
        /dev/mapper/sil_agadaccfacbg3  ext2, mount at / (72751MB)
        /dev/mapper/sil_agadaccfacbg5  swap

    Device for boot loader installation: /dev/mapper/sil_agadaccfacbg, linux device-mapper (striped) (74.0GB). This will install Ubuntu, but it will fail to install GRUB (it seems to use /dev/sda no matter which device I choose).

    Installing GRUB with dpkg-reconfigure
    I followed this guide, but adapted it for two partitions:

        sudo mount /dev/mapper/sil_agadaccfacbg3 /mnt/ubuntu
        sudo mount --bind /dev /mnt/ubuntu/dev
        sudo mount --bind /proc /mnt/ubuntu/proc
        sudo mount --bind /sys /mnt/ubuntu/sys
        sudo mount /dev/mapper/sil_agadaccfacbg1 /mnt/boot
        sudo mount --bind /boot /mnt/boot
        sudo chroot /mnt/ubuntu dpkg-reconfigure grub-pc

    However, it does not ask where to install GRUB (I should choose /dev/mapper/sil_agadaccfacbg somewhere). After reboot I get the GRUB rescue prompt with the message "no such device".

    Installing GRUB with grub-install
    After the same mount commands as above, I continued with:

        > sudo grub-install --root-directory=/mnt/boot /dev/mapper/sil_agadaccfacbg

    This gives the following message:

        /usr/sbin/grub-probe: error: cannot find a device for /mnt/boot/boot/grub (is /dev mounted?)

    It does succeed when mounting just the boot partition:

        sudo mount /dev/mapper/sil_agadaccfacbg1 /mnt
        sudo grub-install --root-directory=/mnt/ /dev/mapper/sil_agadaccfacbg

    This finishes with "Installation finished. No error reported." After reboot I get the GRUB console, with welcome text. Attempting to manually start Ubuntu:

        ls
        (hd0) (hd0,msdos3) : (Ubuntu install partition)
              (hd0,msdos1) : (Ubuntu boot partition)
        (hd1) (hd1,msdos1) : (Ubuntu live USB)

    ls (hd0,msdos3)/ contains: vmlinuz, initrd.img, lib/, tmp/, mnt/, var/, proc/, boot/, root/, etc/, run/, media/, sbin/, bin/, selinux/, dev/, srv/, home/, sys/

    ls (hd0,msdos1)/ contains: grub/, boot/, initrd.img-3.8.0-29-generic, vmlinuz-3.8.0.29-generic, config-3.8

        linux (hd0,msdos3)/vmlinuz

    This returns "error: out of disk".

    Installing GRUB on the Ubuntu partition with grub-install

        > sudo mount /dev/mapper/sil_agadaccfacbg3 /mnt
        > sudo grub-install --root-directory=/mnt/ /dev/mapper/sil_agadaccfacbg

    This finishes with the message "Installation finished. No error reported." After reboot I get the message "error: out of disk" and the GRUB rescue prompt.

    Configuring GRUB with grub-mkconfig
    Attempting to run grub-mkconfig with different destinations results in the same message:

        /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?)

    Remarks:
    - Initially I didn't use a separate /boot partition, but the GRUB install failed then too.
    - Because some mention that a small partition at the beginning of the drive is necessary on old machines, I retried with a /boot partition.
    - This is a single boot (no other OSes installed/used).


  • Is it ok to initialize an RB_ConstraintActor in PostBeginPlay?

    - by Almo
    I have a KActorSpawnable subclass that acts weird. In PostBeginPlay, I initialize an RB_ConstraintActor; the default is not to allow rotation. If I create one in the editor, it's fine and won't rotate. If I spawn one, it rotates. Here's the class:

        class QuadForceKActor extends KActorSpawnable
            placeable;

        var(Behavior) bool bConstrainRotation;
        var(Behavior) bool bConstrainX;
        var(Behavior) bool bConstrainY;
        var(Behavior) bool bConstrainZ;
        var RB_ConstraintActor PhysicsConstraintActor;

        simulated event PostBeginPlay()
        {
            Super.PostBeginPlay();
            PhysicsConstraintActor = Spawn(class'RB_ConstraintActorSpawnable', self, '', Location, rot(0, 0, 0));
            if (bConstrainRotation)
            {
                PhysicsConstraintActor.ConstraintSetup.bSwingLimited = true;
                PhysicsConstraintActor.ConstraintSetup.bTwistLimited = true;
            } // end if
            SetLinearConstraints(bConstrainX, bConstrainY, bConstrainZ);
            PhysicsConstraintActor.InitConstraint(self, None);
        }

        function SetLinearConstraints(bool InConstrainX, bool InConstrainY, bool InConstrainZ)
        {
            if (InConstrainX)
            {
                PhysicsConstraintActor.ConstraintSetup.LinearXSetup.bLimited = 1;
            }
            else
            {
                PhysicsConstraintActor.ConstraintSetup.LinearXSetup.bLimited = 0;
            }
            if (InConstrainY)
            {
                PhysicsConstraintActor.ConstraintSetup.LinearYSetup.bLimited = 1;
            }
            else
            {
                PhysicsConstraintActor.ConstraintSetup.LinearYSetup.bLimited = 0;
            }
            if (InConstrainZ)
            {
                PhysicsConstraintActor.ConstraintSetup.LinearZSetup.bLimited = 1;
            }
            else
            {
                PhysicsConstraintActor.ConstraintSetup.LinearZSetup.bLimited = 0;
            }
        }

        DefaultProperties
        {
            bConstrainRotation=true
            bConstrainX=false
            bConstrainY=false
            bConstrainZ=false
            bSafeBaseIfAsleep=false
            bNoEncroachCheck=false
        }

    Here's the code I use to spawn one. It's a subclass of the one above, but it doesn't reference the constraint at all:

        local QuadForceKCreateBlock BlockActor;

        BlockActor = spawn(class'QuadForceKCreateBlock', none, 'PowerCreate_Block', BlockLocation(), m_PreparedRotation, , false);
        BlockActor.SetDuration(m_BlockDuration);
        BlockActor.StaticMeshComponent.SetNotifyRigidBodyCollision(true);
        BlockActor.StaticMeshComponent.ScriptRigidBodyCollisionThreshold = 0.001;
        BlockActor.StaticMeshComponent.SetStaticMesh(m_ValidCreationBlock.StaticMesh);
        BlockActor.StaticMeshComponent.AddImpulse(m_InitialVelocity);

    I used to initialize an RB_ConstraintActor where I spawned it, from the outside. This worked, which is why I'm pretty sure it has nothing to do with the other code in QuadForceKCreateBlock. I then added the internal constraint in QuadForceKActor for other purposes. When I realized I had two constraints on the CreateBlock doing the same thing, I removed the constraint code from the place where I spawn it. Then it started rotating. Is there a reason I should not be initializing an RB_ConstraintActor in PostBeginPlay? I feel like there's some basic thing about how the engine works that I'm missing.


  • Ubuntu hangs on booting up after a update

    - by alFReD NSH
    I made a clean install yesterday. I restarted for the first time and everything went well; then, after I updated packages and copied my old home directory over the new one, it hung while booting when I restarted. I tried reinstalling and doing the same thing again, but the same thing happened.

    While booting, the Ubuntu logo with the five dots is shown; after that, 3 or 4 of the dots will load and it hangs there. [The original post included screenshots of the boot splash and of the console output shown when pressing the up arrow before the hang.]

    I started my laptop again today (the pictures are from the day before) and booted with the live CD to get the logs:

        dmesg:  http://pastebin.com/aVxV7BQF
        syslog: http://pastebin.com/4E2BrRUK

    And some info:

        alfred@alFitop:~$ uname -a
        Linux alFitop 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

        lshw: http://pastebin.com/AZbKJmsT
        sources.list: http://pastebin.com/2HazmuyV

    My problem is a bit similar to this one: http://ubuntuforums.org/showthread.php?t=1918271 - though I didn't change my X.org config, only my home directory, and updated packages. I've tried memtest and fsck; both passed. Among the recovery mode boot options, I've also found that the same thing happens in failsafe graphical mode. But when I go into network mode, I can boot up my system, although of course the graphics are just basic. Adding "blacklist intel_ips" to /etc/modprobe.d/blacklist.conf gets rid of the first message, but I still get the broken pipe and CPU stack traces. The current kernel version is 3.2.0-25; I've tried booting into 3.2.0-23 (the one the installer came with), but got the same results. I also uninstalled AppArmor; it didn't help. I've installed Ubuntu again, this time without copying the home directory - same result.

    --- UPDATE ---
    This problem was solved before by removing backports, but it's back again! I updated my laptop last night and the problem returned. It's definitely one of these packages: my /var/log/apt/term.log and /var/log/apt/history.log. I'm in almost the same situation.

    --- UPDATE ---
    I realized this has also happened at times when I had updated (without restarting afterwards) and my computer's power was cut off, shutting it down. And I realized that if I apply my answer anywhere without a GUI it doesn't work (networking mode has the GUI).


  • What location to put bootloader, when running multiple drives and partition

    - by Matt G
    I have Windows 8 on my desktop, where a 120GB SSD is used to run Windows and some select applications, while a 2TB HDD provides basic file storage and, where possible, a place to install applications instead of the SSD. I want to install Ubuntu on a new partition on the HDD (I allocated 300GB, with a 5GB swap partition). I used a USB stick to install the OS, which seemed to have done the job. However, after being prompted for a restart, I can no longer boot to Ubuntu.

    During installation I was confused about where to put the "boot loader installation". I ended up selecting "/dev/stb" because I figured I would be able to boot via the BIOS by giving the HDD priority over the SSD. The boot loader is a large part of where I think I went wrong. My partition layout looked something like this:

        /dev/sta ...                      // SSD ~120 GB
        /dev/sta1  NTFS (350 MB)          // Win8 System
        /dev/sta2  NTFS (118 GB)          // Win8 C-Drive
        /dev/stb ...                      // HDD ~2TB
        /dev/stb1  NTFS (1563 GB)         // File storage
        /dev/stb5  Free Space (300 GB)    // Space I want to use for Linux
        (Note: created two partitions from the 300GB, ~5GB and 295GB: stb5, stb6.)

    It'd be great if I could get an explanation of which drive you'd select for the boot loader and why, and which selections won't work with regard to the boot loader installation. I think I understand what GRUB is, but I have no idea how to use it or play around with it. I seem to be able to get back into the OS from my USB; however, I believe it's just showing me a preview/trial of Ubuntu (i.e., I can't access any of the system NTFS drives). Note: if I try to install from the USB again, it recognizes that a version of Ubuntu 13.10 exists on the system.

    Apologies in advance: I have used Windows all my life and don't really know much about Linux at all. I did have a brief skim over some similar questions but didn't find anything too useful:
    - Where to install bootloader when installing Ubuntu as secondary OS?
    - Ubuntu 12.10 dual boot with Windows 8 on two HDDs
    - Dual-boot Windows 7 and Ubuntu on two SSDs with UEFI


  • EBS – ATG Webcast 9/11 - 9/12

    - by cwarticki
    EBS - ATG Webcast in September 2012: EBS - Multiple Language Support (MLS)

    Agenda:
    - EBS is MLS Ready
    - NLS / MLS Basic Architecture
    - NLS / MLS Installation
    - NLS / MLS Configuration Settings
    - Troubleshooting
    - Questions and Answers

    EMEA Session: September 11, 2012 at 09:00 UK / 10:00 CET / 13:30 India / 17:00 Japan / 18:00 Sydney (Australia)
    - Details & Registration: Note 1480084.1
    - Direct link to register in WebEx

    US Session: September 12, 2012 at 18:00 UK / 19:00 CET / 10:00 AM Pacific / 11:00 AM Mountain / 01:00 PM Eastern
    - Details & Registration: Note 1480085.1
    - Direct link to register in WebEx

    Schedules, recordings, and presentations of the Advisor Webcasts run under the EBS Applications Technology area can be found in Note 1186338.1. Current schedules of Advisor Webcasts for all Oracle products can be found in Note 740966.1. Post-presentation recordings of the Advisor Webcasts for all Oracle products can be found in Note 740964.1. If you have any question about the schedules, or if you have a suggestion for an Advisor Webcast to be planned in the future, please send an e-mail to Ruediger Ziegler.


  • The Buzz at the JavaOne Bookstore

    - by Janice J. Heiss
    I found my way to the JavaOne bookstore, a hub of activity. Who says brick-and-mortar bookstores are dead? I asked what was hot and got two answers: "Hadoop in Practice" by Alex Holmes was doing well, and "Scala for the Impatient" by noted Java Champion Cay Horstmann also seemed to be a fast seller.

    Hadoop in Practice
    Hadoop is a framework that organizes large clusters of computers around a problem. It is touted as especially effective for large amounts of data and is used by such companies as Facebook, Yahoo, Apple, eBay and LinkedIn. "Hadoop in Practice" collects nearly 100 Hadoop examples and presents them in a problem/solution format with step-by-step explanations of solutions and designs. It's very much a participatory book, intended to make developers more at home with Hadoop.

    The author, Alex Holmes, is a senior software engineer with more than 15 years of experience developing large-scale distributed Java systems. For the last four years, he has gained expertise in Hadoop, solving Big Data problems across a number of projects. He has presented at JavaOne and Jazoon and is currently a technical lead at VeriSign. At this year's JavaOne, he is presenting a session with VeriSign colleague Karthik Shyamsunder called "Java: A Perfect Platform for Data Science", where they will explain how the Java platform has emerged as a perfect platform for practicing data science, and will also talk about such technologies as Hadoop, Hive, Pig, HBase, Cassandra, and Mahout.

    Scala for the Impatient
    San Jose State University computer science professor and Java Champion Cay Horstmann is the principal author of the highly regarded "Core Java". "Scala for the Impatient" is a basic, practical introduction to Scala for experienced programmers. Horstmann has a presentation summarizing the themes of his book at his website. On the final page he offers an enticing summary of his conclusions:

    - Widespread dissatisfaction with Java + XML + IDEs ("Don't make me eat Elephant again")
    - A separate language for every problem domain is not efficient (it takes time to master the idioms)
    - "JavaScript Everywhere" isn't going to scale
    - The trend is towards languages with more expressive power and less boilerplate
    - Will Scala be the "one ring to rule them"? Maybe - if it succeeds in industry, and if student-friendly subsets and tools are created

    The popularity of both books echoed comments by IBM Distinguished Engineer Jason McGee, who closed his part of the Sunday JavaOne keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them - JavaScript, JRuby, Scala, Python and so forth. Java developers increasingly must know the strengths and weaknesses of such languages going forward.

    Read the article

  • design a model for a system of dependent variables

    - by dbaseman
    I'm dealing with a modeling system (financial) that has dozens of variables. Some of the variables are independent and function as inputs to the system; most of them are calculated from other variables (independent and calculated) in the system. What I'm looking for is a clean, elegant way to:
    - define the function of each dependent variable in the system
    - trigger a re-calculation, whenever a variable changes, of the variables that depend on it
    A naive way to do this would be to write a single class that implements INotifyPropertyChanged and uses a massive case statement that lists out all the variable names x1, x2, ..., xn on which others depend and, whenever a variable xi changes, triggers a recalculation of each of that variable's dependents. I feel that this naive approach is flawed and that there must be a cleaner way. I started down the path of defining a CalculationManager<TModel> class, which would be used (in a simple example) something like this:

        public class Model : INotifyPropertyChanged
        {
            private CalculationManager<Model> _calculationManager = new CalculationManager<Model>();

            // each setter triggers a "PropertyChanged" event
            public double? Height { get; set; }
            public double? Weight { get; set; }
            public double? BMI { get; set; }

            public Model()
            {
                _calculationManager.DefineDependency<double?>(
                    forProperty: model => model.BMI,
                    usingCalculation: (height, weight) => weight / Math.Pow(height, 2),
                    withInputs: model => model.Height, model => model.Weight);
            }

            // INotifyPropertyChanged implementation here
        }

    I won't reproduce CalculationManager<TModel> here, but the basic idea is that it sets up a dependency map, listens for PropertyChanged events, and updates dependent properties as needed. I still feel that I'm missing something major here and that this isn't the right approach:
    - the (mis)use of INotifyPropertyChanged seems to me like a code smell
    - the withInputs parameter is defined as params Expression<Func<TModel, T>>[] args, which means that the argument list of usingCalculation is not checked at compile time
    - the argument list (weight, height) is redundantly defined in both usingCalculation and withInputs
    I am sure that this kind of system of dependent variables must be common in computational mathematics, physics, finance, and other fields. Does someone know of an established set of ideas that deals with what I'm grasping at here? Would this be a suitable application for a functional language like F#?
    Edit: More context: the model currently exists in an Excel spreadsheet and is being migrated to a C# application. It is run on demand, and the variables can be modified by the user from the application's UI. Its purpose is to retrieve variables that the business is interested in, given current inputs from the markets and model parameters set by the business.
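    For illustration only, here is a minimal C# sketch of the dependency-map idea as a standalone graph, sidestepping INotifyPropertyChanged entirely. All names in it (CalculationGraph, Define, Set, Get) are hypothetical, not from any existing library, and the sketch assumes the dependency graph is acyclic:

        using System;
        using System.Collections.Generic;

        public class CalculationGraph
        {
            private readonly Dictionary<string, double?> _values =
                new Dictionary<string, double?>();
            private readonly Dictionary<string, Func<IReadOnlyDictionary<string, double?>, double?>> _formulas =
                new Dictionary<string, Func<IReadOnlyDictionary<string, double?>, double?>>();
            private readonly Dictionary<string, List<string>> _dependents =
                new Dictionary<string, List<string>>();

            // Register a calculated variable with the names of its inputs and its formula.
            public void Define(string name, string[] inputs,
                               Func<IReadOnlyDictionary<string, double?>, double?> formula)
            {
                _formulas[name] = formula;
                foreach (var input in inputs)
                {
                    if (!_values.ContainsKey(input))
                        _values[input] = null;            // seed unknown inputs as null
                    if (!_dependents.TryGetValue(input, out var list))
                        _dependents[input] = list = new List<string>();
                    list.Add(name);
                }
                Recalculate(name);
            }

            // Set an independent variable; everything downstream is recomputed.
            public void Set(string name, double? value)
            {
                _values[name] = value;
                Propagate(name);
            }

            public double? Get(string name) =>
                _values.TryGetValue(name, out var v) ? v : null;

            private void Propagate(string name)
            {
                if (_dependents.TryGetValue(name, out var dependents))
                    foreach (var dependent in dependents)
                        Recalculate(dependent);
            }

            private void Recalculate(string name)
            {
                _values[name] = _formulas[name](_values);
                Propagate(name);                          // cascade to indirect dependents
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                var graph = new CalculationGraph();
                // Nullable arithmetic propagates null until both inputs are set.
                graph.Define("BMI", new[] { "Height", "Weight" },
                    v => v["Weight"] / (v["Height"] * v["Height"]));
                graph.Set("Height", 1.80);
                graph.Set("Weight", 75.0);
                Console.WriteLine(graph.Get("BMI"));      // prints roughly 23.15
            }
        }

    The same shape generalizes to a topological sort over the dependency map if diamond-shaped dependencies make the naive cascade recompute some variables more than once.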

    Read the article

  • Need help with xorg.conf for dual Radeon HD6450 video cards with 4 monitors

    - by Eriks Goodwin-Pfister
    I am running 64-bit Ubuntu 13.10 with Unity and have dual (2) Radeon HD6450 video cards and four Hanns-G HL273 monitors. Each Radeon card is driving one monitor via DVI and the other via VGA. I am running the proprietary video drivers from AMD's web site: "amd-catalyst-13.11-beta V9.4-linux-x86.x86_64.run". I tried to use "amd-catalyst-13.12-linux-x86.x86_64.run" but could not get that newer version to install.
    What I need help with is how to "correct" my xorg.conf file, plus any other needed instructions, to get all four of my monitors to work as a continuous desktop that allows me to drag things from one monitor to the next. When I tried to use the default open-source drivers that came with Ubuntu 13.10, only three of the monitors would work. Now that I am running the proprietary ones, all four monitors come on and I can move my mouse from one end to the other, but only the right-most monitor displays my desktop and allows me to do anything. Any time I move my mouse to any of the other three monitors (which display all white), the pointer turns into an "X" and does nothing else but move. Enabling Xinerama makes all four displays go all black after login. I do have amdcccle installed, but it does not seem to be able to handle my particular configuration.
    My current xorg.conf:

        Section "ServerLayout"
            Identifier "Basic Layout"
            Screen 0 "Screen1" 5760 0
            Screen 1 "Screen0" 0 0
            Screen 2 "Screen2" 3840 0
            Screen 3 "Screen3" 1920 0
        EndSection

        Section "Module"
        EndSection

        Section "Monitor"
            Identifier "0-DFP2"
            Option "VendorName" "ATI Proprietary Driver"
            Option "ModelName" "Generic Autodetecting Monitor"
            Option "DPMS" "true"
            Option "PreferredMode" "1920x1080"
            Option "TargetRefresh" "60"
            Option "Position" "0 0"
            Option "Rotate" "normal"
            Option "Disable" "false"
        EndSection

        Section "Monitor"
            Identifier "0-CRT1"
            Option "VendorName" "ATI Proprietary Driver"
            Option "ModelName" "Generic Autodetecting Monitor"
            Option "DPMS" "true"
            Option "PreferredMode" "1920x1080"
            Option "TargetRefresh" "60"
            Option "Position" "0 0"
            Option "Rotate" "normal"
            Option "Disable" "false"
        EndSection

        Section "Monitor"
            Identifier "1-DFP2"
            Option "VendorName" "ATI Proprietary Driver"
            Option "ModelName" "Generic Autodetecting Monitor"
            Option "DPMS" "true"
            Option "PreferredMode" "1920x1080"
            Option "TargetRefresh" "60"
            Option "Position" "0 0"
            Option "Rotate" "normal"
            Option "Disable" "false"
        EndSection

        Section "Monitor"
            Identifier "1-CRT1"
            Option "VendorName" "ATI Proprietary Driver"
            Option "ModelName" "Generic Autodetecting Monitor"
            Option "DPMS" "true"
            Option "PreferredMode" "1920x1080"
            Option "TargetRefresh" "60"
            Option "Position" "0 0"
            Option "Rotate" "normal"
            Option "Disable" "false"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "fglrx"
            Option "Monitor-CRT1" "1-CRT1"
            BusID "PCI:1:0:0"
        EndSection

        Section "Device"
            Identifier "Device1"
            Driver "fglrx"
            Option "Monitor-DFP2" "0-DFP2"
            BusID "PCI:4:0:0"
        EndSection

        Section "Device"
            Identifier "Device2"
            Driver "fglrx"
            Option "Monitor-DFP2" "1-DFP2"
            BusID "PCI:1:0:0"
            Screen 1
        EndSection

        Section "Device"
            Identifier "Device3"
            Driver "fglrx"
            Option "Monitor-CRT1" "0-CRT1"
            BusID "PCI:4:0:0"
            Screen 1
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            DefaultDepth 24
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen1"
            Device "Device1"
            DefaultDepth 24
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen2"
            Device "Device2"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen3"
            Device "Device3"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
            EndSubSection
        EndSection
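    For reference: with separate Screen sections per output like these, a single continuous desktop normally requires Xinerama, which X.Org accepts in a ServerFlags section. Since the question reports that enabling Xinerama produced all-black displays, the snippet below is shown only as the canonical form of that setting, not as a confirmed fix for this configuration:

        Section "ServerFlags"
            Option "Xinerama" "on"
        EndSection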

    Read the article
