Search Results

Search found 21319 results on 853 pages for 'state management'.


  • when I type apt-get -f install, I get the error message

    - by gene
    xserver-xorg-core (2:1.11.4-0ubuntu10.8) breaks xserver-xorg-video-5 and is installed. I also cannot upgrade my software; it says that the package system is broken, with this detail: "The following packages have unmet dependencies: xserver-xorg-core: Depends: xserver-common (>= 2:1.11.4-0ubuntu10.8) but 2:1.11.4-0ubuntu10.8 is installed."

    When I issue sudo apt-get update, the output seems fine. The source is http://archive.ubuntu.com (sorry, the output has too many links for me to post it all):

        Reading package lists... Done

    When I issue sudo apt-get dist-upgrade, the output is:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
         xserver-xorg-core : Breaks: xserver-xorg-video-5
        E: Unmet dependencies. Try using -f.

    When I issue sudo apt-get -f install, the output is:

        dpkg: dependency problems prevent configuration of xserver-xorg-video-radeon:
         xserver-xorg-core (2:1.11.4-0ubuntu10.8) breaks xserver-xorg-video-5 and is installed.
         xserver-xorg-video-radeon (1:6.12.1-0ubuntu2) provides xserver-xorg-video-5.
        dpkg: error processing xserver-xorg-video-radeon (--configure): dependency problems - leaving unconfigured
        No apport report written because the error message indicates it's a followup error from a previous failure.
        Errors were encountered while processing:
         xserver-xorg-video-radeon
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Kinect Click counter function

    - by Sweta Dwivedi
    So I have the following Kinect click function, which checks whether the hand is within a button's bounds and, if so, clicks once a counter elapses. However, there is a slight problem: the first few button clicks work fine, but after it clicks one of the buttons it changes the game state and immediately clicks the other button without the counter reaching 200. KinectClick is a method in the button class, and each button inside a list can access it:

        public bool KinectClick(int x, int y)
        {
            if ((x >= position.X && x <= position.X + position.Width) &&
                (y >= position.Y && y <= position.Y + position.Height))
            {
                counter++;
                if (counter > 200)
                {
                    counter = 0;
                    return true;
                }
            }
            else
            {
                counter = 0;
            }
            return false;
        }

    I check whether this method returns true in the game's Update method, to act as a button click:

        foreach (Button g_t in Game_theme)
        {
            if ((g_t.KinectClick(x_c, y_c) == true || g_t.ButtonClicked() == true) && g_t.name == "animoe")
            {
                Selected_anim = true;
                currentGameState = GameState.InGame;
            }
            if ((g_t.KinectClick(x_c, y_c) == true || g_t.ButtonClicked() == true) && g_t.name == "planet")
            {
                Selected_planet = true;
                currentGameState = GameState.InGame;
            }
        }
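    One pattern that addresses this kind of premature trigger is to require the hand to leave a button's bounds once before its dwell counter may fire again. That way, a button that appears under a still-hovering hand after a state change cannot click instantly. A minimal sketch follows; it reuses the question's position and counter members, and the armed flag is an assumption of mine, not the poster's code:

        // Hypothetical variation on the question's KinectClick: the button must be
        // "re-armed" (hand seen outside its bounds at least once) before it can
        // fire, so a button revealed under a hovering hand cannot click instantly.
        private bool armed = false;

        public bool KinectClick(int x, int y)
        {
            bool inside = (x >= position.X && x <= position.X + position.Width) &&
                          (y >= position.Y && y <= position.Y + position.Height);

            if (!inside)
            {
                armed = true;    // hand has left the button, so it may fire later
                counter = 0;
                return false;
            }

            if (!armed) return false;   // still hovering from a previous screen

            counter++;
            if (counter > 200)
            {
                counter = 0;
                armed = false;          // require a fresh hover for the next click
                return true;
            }
            return false;
        }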

    Read the article

  • How do I require use of the 5 GHz band when connecting to a Wireless N access point?

    - by cqcallaw
    What it says in the topic: there's a Wireless N access point to which I'd like to connect using the 5 GHz band exclusively. How does one accomplish this? Using the directive band=a in the connection configuration file in /etc/NetworkManager/system-connections doesn't seem to affect anything (/var/log/syslog still shows attempts by wpa_supplicant to connect using the 2.4 GHz band), and running iwconfig wlan0 freq 5G per this question results in the following error:

        Error for wireless request "Set Frequency" (8B04) :
            SET failed on device wlan0 ; Invalid argument.

    [Edit] I'm hoping the answer won't depend on the hardware in use, but here's some information about the hardware, just in case. The system is an Asus ZenBook Prime UX31A-DB51 running Ubuntu 12.04, and the driver for the wireless interface is iwlwifi. lspci output:

        00:00.0 Host bridge: Intel Corporation Ivy Bridge DRAM Controller (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)
        00:04.0 Signal processing controller: Intel Corporation Device 0153 (rev 09)
        00:14.0 USB controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04)
        00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
        00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
        00:1c.1 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 2 (rev c4)
        00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
        00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04)
        00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
        00:1f.6 Signal processing controller: Intel Corporation Panther Point Thermal Management Controller (rev 04)
        02:00.0 Network controller: Intel Corporation Centrino Advanced-N 6235 (rev 24)

    Read the article

  • Accenture Foundation Platform for Oracle (AFPO)

    - by Lionel Dubreuil
    The Accenture Foundation Platform for Oracle (AFPO) is a pre-built, tested reference application, common services framework and development accelerator for Oracle's Fusion Middleware 11g product suite that can help to reduce development time and cost by up to 30 percent. AFPO is a unique accelerator that includes documentation, day-one deliverables and quick-start virtual machine images, along with access to a skilled team of resources, to reduce risk and cost while improving project quality. It can be delivered all at once or in stages, on-site, hosted, or as a cloud solution.

    Accenture recently released AFPO v5 for use with their clients. Accenture added significant updates in v5, including day-one images and documentation for WebCenter and ADF Mobile, integrated with 30 other Oracle Middleware products, which significantly reduces the services effort needed to stand these products up. AFPO v5 also features rapid configuration and implementation capabilities for SOA/BPM integrated with Oracle WebCenter Portal, Oracle WebCenter Content, Oracle Business Intelligence, Oracle Identity Management and Oracle ADF Mobile.

    AFPO v5 also delivers a starter kit for Oracle SOA Suite which builds upon the integration methodology, leading practices and extended tooling contained within the Oracle Foundation Pack. The combination of the AFPO starter kit and Foundation Pack jump-starts and streamlines Oracle SOA Suite implementation initiatives, helping to reduce the risk of deploying new technologies and making architectural decisions, so clients can ultimately reduce cost, risk and the time needed for an implementation.

    You'll find more information at:
    Accenture's website: www.accenture.com/afpo
    YouTube AFPO telestration: http://www.youtube.com/watch?v=_x429DcHEJs
    Press Release
    Brochure
    Contacts: [email protected], Patrick J Sullivan (Accenture - Global Oracle Technology Lead), [email protected]

    Read the article

  • Oracle Linux Partner Pavilion Spotlight - Part IV

    - by Ted Davis
    Welcome to the final Oracle Linux Partner Pavilion Spotlight, Part IV. Two days left till the Big Show. You are gearing up. We are gearing up. You can feel the excitement. We can feel the excitement. This. Will. Be. The. Best. Show. EVER. See you at the Partner Pavilion (Moscone South #1033) at Oracle OpenWorld. - Oracle Linux / Oracle VM Team

    HP: HP and Oracle are pleased to announce another Oracle Validated Configuration based on the ProLiant DL980 server. Many choose to deploy Oracle workloads on the ProLiant DL980 based on the cost/performance ratio they achieve running Oracle Linux with the Unbreakable Enterprise Kernel. You can be confident that Oracle Validated Configurations based on ProLiant servers will help you achieve your most demanding performance goals.

    QLogic: The QLogic-Oracle partnership spans over 20 years, resulting in the most comprehensive line of Oracle Linux I/O adapter technology. Interface options include Ethernet, Fibre Channel, and FCoE. Host-side connectivity is offered in both low-profile PCIe and Express Module PCIe form factors. QLogic software drivers are jointly qualified and "in-box" with Oracle Linux 5.x, 6.x and Oracle VM, enabling simplified installation and management while simultaneously taking risk out of the solution. Bringing innovations such as NPIV, T10-PI, and intelligent caching adapter technology to the Oracle Linux environment further strengthens the QLogic advantage.

    A big thank you to all of our Oracle Linux Partner Pavilion participants. We (and they) look forward to meeting you next week at Oracle OpenWorld. If you've missed our three previous Partner Spotlights, here are the links: Part I, Part II, Part III.

    Read the article

  • Making Modular, Reusable and Loosely Coupled MVC Components

    - by Dusan
    I am building an MVC3 application and need some general guidelines on how to manage complex client-side interaction between my components. Here is my definition of a component, in a general way: a component has its own controller, model and view. All of the component's logic is placed inside these three parts, and the component is sort of "standalone": it contains its own form, the data needed for interaction, updates itself with Ajax, and so on.

    Beside this internal logic and behavior, the component needs to be able to "talk" to the outside world. By this I mean it should provide data and events (sort of), so that when the component gets embedded in a page it can notify other components, which can then update based on the current state and data.

    I have an idea to use a client-side ViewModel (in JavaScript) which would hook up all the relevant components on a page and control the interaction between them. This would make components reusable and modular - independent of the context in which they are used. How would you do this? I am a bit stuck, as I do not know whether this is a good approach and whether there is a technical possibility to achieve it using JavaScript/jQuery. The confusing part is the update via Ajax: how do I ensure that a component is properly linked to the ViewModel when the component is Ajax-updated (or, even worse, removed or dynamically added)? Also, how should this ViewModel be constructed, and which techniques should be used here and in the components so that they work in synergy?

    On the web, I have found various examples of a similar approach, but they are either oversimplified (even for dummies) or overly specific, and do not provide a valuable resource or general solution for this kind of implementation. If you have some serious examples it would, also, be very helpful.

    Note: My aim is to make interactions between many components on the same page simpler, more robust and more elegant.
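    The coordination described here is essentially an event aggregator (a publish/subscribe mediator): each component talks only to the aggregator, never to its siblings, so a component that is Ajax-replaced or dynamically added simply re-subscribes when it (re)initializes, while publishers are unaffected. Below is a minimal sketch of the pattern; it is written in C# for concreteness (the same shape translates directly to JavaScript/jQuery), and every name in it is illustrative rather than taken from the question:

        // Minimal event-aggregator sketch: components publish and subscribe by
        // topic string, so none of them needs a direct reference to any other.
        using System;
        using System.Collections.Generic;

        public sealed class EventAggregator
        {
            private readonly Dictionary<string, List<Action<object>>> handlers =
                new Dictionary<string, List<Action<object>>>();

            public void Subscribe(string topic, Action<object> handler)
            {
                if (!handlers.TryGetValue(topic, out var list))
                    handlers[topic] = list = new List<Action<object>>();
                list.Add(handler);
            }

            public void Publish(string topic, object payload)
            {
                if (handlers.TryGetValue(topic, out var list))
                    foreach (var h in list) h(payload);
            }
        }

        // Usage idea: a component that is replaced by an Ajax update calls
        // Subscribe again in its init code; nothing else has to change.
        //   aggregator.Subscribe("order:changed", data => grid.Reload(data));
        //   aggregator.Publish("order:changed", updatedOrder);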

    Read the article

  • Windows 8: SL and HTML

    - by xamlnotes
    I was just pointed to a comment on my friend Andrew Brust's blog about Silverlight versus HTML 5. Andrew's blog is here: http://geekswithblogs.net/andrewbrust/archive/2011/11/23/windows-8-will-be-here-tomorrow-but-should-silverlight-be.aspx#600915 You can get another idea from another friend of mine, Billy Hollis, here: http://geekswithblogs.net/jalexander/archive/2011/04/09/the-eternal-battle-rich-v.-reachhellip--guest-blogger-billy-hollis.aspx

    The commenter is raving about HTML 5 and how that's the future and SL is not. Well, my reaction is "hogwash". Sure, HTML 5 is important and does some interesting stuff. Check out what Bing.com is doing with it on some days and you can see. But to say that XAML is dead is nuts.

    I have been wrapping up bugs on a cross-browser version of an application for a while now. What's the state of cross-browser today? Well, better than a few years ago, but far from perfect. Each browser vendor interprets the specs in a slightly different way, and you must account for them. The worst offender among major browsers? Apple and its Safari. I had to make more changes for it than for any other.

    What's that got to do with XAML and SL/WPF? Well, you write your SL code once and it runs in all browsers that support it, no changes. The iPad does not support it? Well, they should be taken to court and forced to, just like MS and others have been in the past for locking out competitors. Line-of-business applications? Write them in SL or WPF or both. Use the power of XAML, which far outreaches HTML in any flavor, and move on. We do need HTML 5, but it's not a panacea, nor will it replace all other technologies.

    Read the article

  • Data Center Modernization: Harness the power of Oracle Exalogic and Exadata with PeopleSoft

    - by Michelle Kimihira
    Author: Latha Krishnaswamy, Senior Manager, Exalogic Product Management

    Allegis Group, a Hanover, MD-based global staffing company, is the largest privately held staffing company in the United States, with more than 10,000 internal employees and 90,000 contract employees. Allegis Group is a $6+ billion company, offering a full range of specialized staffing and recruiting solutions to clients in a wide range of industries.

    The company processes about 133,000 paychecks per week, every week of the year. With 300 offices around the world and the hefty task of managing HR and payroll, the PeopleSoft system at Allegis is a mission-critical application. The firm is in the midst of a data center modernization initiative. Part of that project meant moving the company's PeopleSoft applications (Financials and HR modules as well as a custom Time & Expense module) to a converged infrastructure.

    The company ran a proof of concept with four different converged architectures before deciding upon Exadata and Exalogic as the platform of choice. Performance combined with high availability for running mission-critical payroll processes drove this decision. During the testing on Exadata and Exalogic, Allegis applied a particular (11-F) tax update in the production environment. A job that previously ran for roughly six hours completed in less than 1.5 hours. With additional tuning, the second run of the 11-F tax update dropped to 33 minutes - a 90% improvement! Not only that, the move will help the company save money on middleware by consolidating its use of Oracle licensing on a single platform.

    Summary
    With a modern data center powered by Exalogic and Exadata to run mission-critical PeopleSoft HR and Financial applications, Allegis is positioned to manage business growth and improve employee productivity. PeopleSoft applications run on an engineered systems platform, minimizing hardware and software integration risks.

    Additional Information
    Product Information on Oracle.com: Oracle Fusion Middleware
    Follow us on Twitter and Facebook
    Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Changes to the LINQ-to-StreamInsight Dialect

    - by Roman Schindlauer
    In previous versions of StreamInsight (1.0 through 2.0), CepStream<> represents temporal streams of many varieties:

    Streams with 'open' inputs (e.g., those defined and composed over CepStream<T>.Create(string streamName))
    Streams with 'partially bound' inputs (e.g., those defined and composed over CepStream<T>.Create(Type adapterFactory, ...))
    Streams with fully bound inputs (e.g., those defined and composed over To*Stream - sequences or DQC)
    The stream may be embedded (where Server.Create is used)
    The stream may be remote (where Server.Connect is used)

    When adding support for new programming primitives in StreamInsight 2.1, we faced a choice: add a fourth variety (use CepStream<> to represent streams that are bound to the new programming model constructs), or introduce a separate type that represents temporal streams in the new user model. We opted for the latter. Introducing a new type has the effect of reducing the number of (confusing) runtime failures due to inappropriate uses of CepStream<> instances in the incorrect context. The new types are:

    IStreamable<>, which logically represents a temporal stream.
    IQStreamable<> : IStreamable<>, which represents a queryable temporal stream. Its relationship to IStreamable<> is analogous to the relationship of IQueryable<> to IEnumerable<>. The developer can compose temporal queries over remote stream sources using this type.

    The syntax of temporal queries composed over IQStreamable<> is mostly consistent with the syntax of our existing CepStream<>-based LINQ provider. However, we have taken the opportunity to refine certain aspects of the language surface. Differences are outlined below. Because 2.1 introduces new types to represent temporal queries, the changes outlined in this post do not impact existing StreamInsight applications using the existing types!

    SelectMany

    StreamInsight does not support the SelectMany operator in its usual form (which is analogous to SQL's "CROSS APPLY" operator):

        static IEnumerable<R> SelectMany<T, R>(this IEnumerable<T> source, Func<T, IEnumerable<R>> collectionSelector)

    It instead uses SelectMany as a convenient syntactic representation of an inner join. The parameter to the selector function is thus unavailable. Because the parameter isn't supported, its type in StreamInsight 1.0 - 2.0 wasn't carefully scrutinized. Unfortunately, the type chosen for the parameter is nonsensical to LINQ programmers:

        static CepStream<R> SelectMany<T, R>(this CepStream<T> source, Expression<Func<CepStream<T>, CepStream<R>>> streamSelector)

    Using Unit as the type for the parameter accurately reflects StreamInsight's capabilities:

        static IQStreamable<R> SelectMany<T, R>(this IQStreamable<T> source, Expression<Func<Unit, IQStreamable<R>>> streamSelector)

    For queries that succeed - that is, queries that do not reference the stream selector parameter - there is no difference between the code written for the two overloads:

        from x in xs
        from y in ys
        select f(x, y)

    Top-K

    The Take operator used in StreamInsight causes confusion for LINQ programmers because it is applied to the (unbounded) stream rather than the (bounded) window, suggesting that the query as a whole will return k rows:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select x.B).Take(k)

    The use of SelectMany is also unfortunate in this context because it implies the availability of the window parameter within the remainder of the comprehension. The following compiles but fails at runtime:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select win).Take(k)

    The Take operator in 2.1 is applied to the window rather than the stream.

    Before:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select x.B).Take(k)

    After:

        from win in xs.SnapshotWindow()
        from b in
            (from x in win
             orderby x.A
             select x.B).Take(k)
        select b

    Multicast

    We are introducing an explicit multicast operator in order to preserve expression identity, which is important given the semantics about moving code to and from StreamInsight. This also better matches existing LINQ dialects, such as Reactive. This pattern enables expressing multicasting in two ways.

    Implicit:

        var ys = from x in xs
                 where x.A > 1
                 select x;
        var zs = from y1 in ys
                 from y2 in ys.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
                 select y1 + y2;

    Explicit:

        var ys = from x in xs
                 where x.A > 1
                 select x;
        var zs = ys.Multicast(ys1 =>
            from y1 in ys1
            from y2 in ys1.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
            select y1 + y2);

    Notice the product translates an expression using implicit multicast into an expression using the explicit multicast operator. The user does not see this translation.

    Default window policies

    Only default window policies are supported in the new surface. Other policies can be simulated by using AlterEventLifetime.

    Before:

        xs.SnapshotWindow(
            WindowInputPolicy.ClipToWindow,
            SnapshotWindowInputPolicy.Clip)

    After:

        xs.SnapshotWindow()

    Before:

        xs.TumblingWindow(
            TimeSpan.FromSeconds(1),
            HoppingWindowOutputPolicy.PointAlignToWindowEnd)

    After:

        xs.TumblingWindow(
            TimeSpan.FromSeconds(1))

    Before:

        xs.TumblingWindow(
            TimeSpan.FromSeconds(1),
            HoppingWindowOutputPolicy.ClipToWindowEnd)

    After: not supported.

    LeftAntiJoin

    Representation of LASJ as a correlated sub-query in the LINQ surface is problematic, as the StreamInsight engine does not support correlated sub-queries (see the discussion of SelectMany). The current syntax requires the introduction of an otherwise unsupported 'IsEmpty()' operator. As a result, the pattern is not discoverable and implies capabilities not present in the server. The direct representation of LASJ is used instead.

    Before:

        from x in xs
        where
            (from y in ys
             where x.A > y.B
             select y).IsEmpty()
        select x

    After:

        xs.LeftAntiJoin(ys, (x, y) => x.A > y.B)

    Before:

        from x in xs
        where
            (from y in ys
             where x.A == y.B
             select y).IsEmpty()
        select x

    After:

        xs.LeftAntiJoin(ys, x => x.A, y => y.B)

    ApplyWithUnion

    The ApplyWithUnion methods have been deprecated since their signatures are redundant given the standard SelectMany overloads.

    Before:

        xs.GroupBy(x => x.A).ApplyWithUnion(gs =>
            from win in gs.SnapshotWindow()
            select win.Count())

    After:

        xs.GroupBy(x => x.A).SelectMany(
            gs =>
            from win in gs.SnapshotWindow()
            select win.Count())

    Before:

        xs.GroupBy(x => x.A).ApplyWithUnion(gs =>
            from win in gs.SnapshotWindow()
            select win.Count(),
            r => new { r.Key, Count = r.Payload })

    After:

        from x in xs
        group x by x.A into gs
        from win in gs.SnapshotWindow()
        select new { gs.Key, Count = win.Count() }

    Alternate UDO syntax

    The representation of UDOs in the StreamInsight LINQ dialect confuses cardinalities. Based on the semantics of user-defined operators in StreamInsight, one would expect to construct queries in the following form:

        from win in xs.SnapshotWindow()
        from y in MyUdo(win)
        select y

    Instead, the UDO proxy method is referenced within a projection, and the (many) results returned by the user code are automatically flattened into a stream:

        from win in xs.SnapshotWindow()
        select MyUdo(win)

    The "many-or-one" confusion is exemplified by the following example, which compiles but fails at runtime:

        from win in xs.SnapshotWindow()
        select MyUdo(win) + win.Count()

    The above query must fail because the UDO is in fact returning many values per window while the count aggregate is returning one.

    Original syntax:

        from win in xs.SnapshotWindow()
        select win.UdoProxy(1)

    New alternate syntax:

        from win in xs.SnapshotWindow()
        from y in win.UserDefinedOperator(() => new Udo(1))
        select y

    -or-

        from win in xs.SnapshotWindow()
        from y in win.UdoMacro(1)
        select y

    Notice that this formulation also sidesteps the dynamic type pitfalls of the existing "proxy method" approach to UDOs, in which the type of the UDO implementation (TInput, TOutput) and the type of its constructor arguments (TConfig) need to align in a precise and non-obvious way with the argument and return types of the corresponding proxy method.

    UDSO syntax

    UDSO currently leverages the DataContractSerializer to clone initial state for logical instances of the user operator. Initial state will instead be described by an expression in the new LINQ surface.

    Before:

        xs.Scan(new Udso())

    After:

        xs.Scan(() => new Udso())

    Name changes

    ShiftEventTime => AlterEventStartTime: the alter event lifetime overload taking a new start time value has been renamed.
    CountByStartTimeWindow => CountWindow

    Read the article

  • Solaris at LISA 2011

    - by dminer
    As is our custom, the Solaris team will be out in force at the USENIX LISA conference; this year it's in Boston, so it's sort of a home game for me for a change. The big event we'll have is Tuesday, December 6, the Oracle Solaris 11 Summit Day. We'll be covering deployment, ZFS, networking, virtualization, security, clustering, and how Oracle apps run best on Solaris 11. We've done this the past couple of years and it's always a very full day.

    On Wednesday, December 7, we've got a couple of BOF sessions scheduled back-to-back. At 7:30 we'll have the ever-popular engineering panel, with all of us who are speaking at Tuesday's summit day there for a free-flowing discussion of all things Solaris. Following that, Bart & I are hosting a second BOF at 9:30 to talk more about deployment for clouds and traditional data centers.

    Also, on Wednesday and Thursday we'll have a booth at the exhibition where there'll be demos and just a general chance to talk with various Solaris staff from engineering and product management. The conference program looks great and I look forward to seeing you there!

    Read the article

  • How to improve Minecraft-esque voxel world performance?

    - by SomeXnaChump
    After playing Minecraft I marveled a bit at its large worlds, but at the same time I found them extremely slow to navigate, even with a quad core and a meaty graphics card. Now I assume Minecraft is fairly slow because:

    A) It's written in Java, and as most of the spatial partitioning and memory management activities happen in there, it would naturally be slower than a native C++ version.
    B) It doesn't partition its world very well.

    I could be wrong on both assumptions; however, it got me thinking about the best way to manage large voxel worlds. As it is a true 3D world, where a block can exist in any part of the world, it is basically a big 3D array [x][y][z], where each block in the world has a type (e.g. BlockType.Empty = 0, BlockType.Dirt = 1, etc.). Now, I am assuming that to make this sort of world perform well you would need to:

    A) Use a tree of some variety (oct/kd/bsp) to split all the cubes out; it seems like an oct/kd tree would be the better option, as you can partition on a per-cube level rather than a per-triangle level.
    B) Use some algorithm to work out which blocks can currently be seen, as blocks closer to the user could obfuscate the blocks behind, making it pointless to render them.
    C) Keep the block objects themselves lightweight, so it is quick to add and remove them from the trees.

    I guess there is no right answer to this, but I would be interested to see people's opinions on the subject. How would you improve performance in a large voxel-based world?
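    For a sense of what points B) and C) look like in practice, here is a minimal C# sketch (all names are mine, purely illustrative): blocks are stored as plain bytes in fixed-size chunks rather than as objects, and a face is only treated as visible when the neighboring cell is empty, which eliminates most interior geometry before any tree or frustum test runs:

        // Sketch: byte-per-block chunk storage with simple hidden-face culling.
        public sealed class Chunk
        {
            public const int Size = 16;                        // 16x16x16 blocks
            private readonly byte[] blocks = new byte[Size * Size * Size];

            // Flat-array indexing keeps blocks lightweight: no per-block objects.
            private static int Index(int x, int y, int z) => x + Size * (y + Size * z);

            public byte Get(int x, int y, int z) =>
                (uint)x < Size && (uint)y < Size && (uint)z < Size
                    ? blocks[Index(x, y, z)]
                    : (byte)0;                                 // out-of-chunk treated as empty

            public void Set(int x, int y, int z, byte type) => blocks[Index(x, y, z)] = type;

            // A solid block only contributes faces that border an empty cell, so
            // fully enclosed interior blocks generate no geometry at all.
            public int CountVisibleFaces(int x, int y, int z)
            {
                if (Get(x, y, z) == 0) return 0;
                int faces = 0;
                if (Get(x + 1, y, z) == 0) faces++;
                if (Get(x - 1, y, z) == 0) faces++;
                if (Get(x, y + 1, z) == 0) faces++;
                if (Get(x, y - 1, z) == 0) faces++;
                if (Get(x, y, z + 1) == 0) faces++;
                if (Get(x, y, z - 1) == 0) faces++;
                return faces;
            }
        }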

    Read the article

  • Happy Day! VS2010 SP1, Project Server Integration, Load Test Feature Pack

    - by Aaron Kowall
    Microsoft released a PILE of Visual Studio goodness today:

    Visual Studio 2010 SP1 (including TFS SP1): Finally done with remembering which GDR packs, KB patches, etc. need to be installed with a new VS/TFS 2010 deployment. Just grab SP1. It's available today for MSDN subscribers and March 10th for public download.

    TFS-Project Server Integration Feature Pack: MSDN subscribers got another little treat today with the TFS-Project Server integration feature pack. We can now get project rollups and portfolio-level management with Project Server yet still have the tight developer interaction with TFS. Finally we can make the PMO happy without duplicate entry or MS Project gymnastics.

    Visual Studio Load Test Feature Pack: This is a new benefit for Visual Studio 2010 Ultimate subscribers. Previously, Ultimate load testing was limited to 250 virtual users; if you needed more, you had to buy virtual user license packs. No more. Now your Visual Studio Ultimate license allows you to simulate as many virtual users as you need!! This is HUGE in improving adoption of regular load testing for development projects.

    All the details are available from Soma's blog.

    Technorati Tags: VS2010, TFS, Load Test

    Read the article

  • How to refactor when all your development is on branches?

    - by Mark
    At my company, all of our development (bug fixes and new features) is done on separate branches. When a change is complete, we send it off to QA, who test it on that branch, and when they give us the green light, we merge it into our main branch. This could take anywhere between a day and a year. If we try to squeeze any refactoring in on a branch, we don't know how long the branch will be "out" for, so it can cause many conflicts when it's merged back in.

    For example, let's say I want to rename a function because the feature I'm working on makes heavy use of it, and I found that its name doesn't really fit its purpose (again, this is just an example). So I go around and find every usage of this function, rename them all to the new name, and everything works perfectly, so I send it off to QA. Meanwhile, new development is happening, and my renamed function doesn't exist on any of the branches being forked off main. When my issue gets merged back in, they're all going to break.

    Is there any way of dealing with this? It's not like management will ever approve a refactor-only issue, so it has to be squeezed in with other work. It can't be developed directly on main because all changes have to go through QA, and no one wants to be the jerk that broke main so that he could do a little bit of non-essential refactoring.
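    One common mitigation for exactly this rename-under-long-lived-branches situation (not from the question; a standard technique, sketched here in C# with invented names) is to keep the old name alive for a while as a thin deprecated wrapper, so branches forked before the rename still compile and merge cleanly, and the wrapper is deleted once the outstanding branches have landed:

        // Hypothetical example: CalculateTotals was renamed to AggregateInvoiceTotals.
        using System.Collections.Generic;
        using System.Linq;

        public record Invoice(decimal Amount);

        public class BillingService
        {
            // New name going forward.
            public decimal AggregateInvoiceTotals(IEnumerable<Invoice> invoices) =>
                invoices.Sum(i => i.Amount);

            // Old name kept as a deprecated shim so branches forked before the
            // rename still compile when they merge; delete it once they land.
            [System.Obsolete("Renamed to AggregateInvoiceTotals.")]
            public decimal CalculateTotals(IEnumerable<Invoice> invoices) =>
                AggregateInvoiceTotals(invoices);
        }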

    Read the article

  • How do I (quickly) let people know that software I am providing for free is not abandon-ware?

    - by blueberryfields
    As an independent, individual programmer: how do I let people very quickly know that I have not abandoned the software I've written and given away for free? That I am putting in the effort required to maintain and support my software to a professional level?

    When software written by one or two developers is available for free, or marked as open-source, usually the default assumption is that it's abandon-ware. This is usually a safe assumption - check out the answers to this question if you doubt it: Why do programmers write applications and then make them free? There are lots of programmers who provide free and/or open-source tools which are not abandon-ware, though. If we're talking about large companies, e.g. Google, there's no real problem telling the difference between supported, live tools and software, and those which are abandoned or discontinued.

    A lively git repository isn't a quick signal - users will have to be savvy enough to understand the repository and know where to look for it. Consistent marketing and community management take more time and effort than I can put in on my own. Also, if my software becomes popular/successful, I assume those will grow on their own, and be supported by power users in the community.

    Read the article

  • Virtual Networks in Oracle Solaris - Part 5

    - by user12616590
    A long time ago in a blogosphere far, far away... I wrote four blog entries to describe the new network virtualization features that were in Solaris 11 Express:

    Part 1 introduced the concept of network virtualization and listed the basic virtual network elements.
    Part 2 expanded on the concepts and discussed the resource management features.
    Part 3 demonstrated the creation of some of these virtual network elements.
    Part 4 demonstrated the network resource controls.

    I had planned a final entry that added virtual routers to the list of virtual network elements, but Jeff McMeekin wrote a paper that discusses the same features. That paper is available at OTN. And this Jeff can't write any better than that Jeff...

    All of the features described in those blog entries and that paper are also available in Solaris 11. It is possible that some details have changed, but the vast majority of the content is unchanged.

    Read the article

  • Writing Acceptance test cases

    - by HH_
    We are integrating a testing process into our SCRUM process. My new role is to write acceptance tests for our web applications in order to automate them later. I have read a lot about how test cases should be written, but none of it gave me practical advice for writing test cases for complex web applications; instead, the sources offered conflicting principles that I found hard to apply:

    Test cases should be short: Take the example of a CMS. Short test cases are easy to maintain and make it easy to identify the inputs and outputs. But what if I want to test a long series of operations (e.g. adding a document, sending a notification to another user, the other user replies, the document changes state, the user gets a notice)? It rather seems to me that test cases should represent complete scenarios. But I can see how this would produce overly complex test documents.

    Tests should identify inputs and outputs: What if I have a long form with many interacting fields, with different behaviors? Do I write one test for everything, or one for each?

    Test cases should be independent: But how can I apply that if testing the upload operation requires that the connect operation is successful? And how does it apply to writing test cases? Should I write a test for each operation, with each test declaring its dependencies, or should I rewrite the whole scenario for each test?

    Test cases should be lightly documented: This principle is specific to Agile projects. So do you have any advice on how to implement it?

    Although I thought that writing acceptance test cases was going to be simple, I found myself overwhelmed by every decision I had to make (FYI: I am a developer and not a professional tester). So my main question is: what steps or advice do you have for writing maintainable acceptance test cases for complex applications? Thank you.
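    On the independence question specifically, the usual resolution is that each test re-establishes its preconditions through shared setup rather than depending on another test having run. Here is a sketch in C# with NUnit; the CMS operations (CmsSession, Upload, Submit, and so on) are invented purely for illustration, not a real API:

        using NUnit.Framework;

        [TestFixture]
        public class DocumentUploadTests
        {
            private CmsSession session;   // CmsSession is a hypothetical CMS client

            // Each test gets a fresh, logged-in session: the "connect" operation
            // is a precondition established here, not a test we depend on.
            [SetUp]
            public void ConnectAsAuthor() => session = CmsSession.LogIn("author", "secret");

            [TearDown]
            public void Disconnect() => session.LogOut();

            [Test]
            public void Upload_ValidDocument_AppearsInDraftState()
            {
                var doc = session.Upload("minutes.docx");
                Assert.That(doc.State, Is.EqualTo(DocumentState.Draft));
            }

            [Test]
            public void Upload_ThenSubmit_NotifiesReviewer()
            {
                var doc = session.Upload("minutes.docx");
                session.Submit(doc, reviewer: "alice");
                Assert.That(session.NotificationsFor("alice"), Has.Count.EqualTo(1));
            }
        }

    Each test stays short and states its own inputs and expected outputs, while the long end-to-end scenario survives as a sequence of such cases rather than one monolithic script.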

    Read the article

  • Hidden Gems: Accelerating Oracle Data Integrator with SOA, Groovy, SDK, and XML

    - by Alex Kotopoulis
    On the last day of Oracle OpenWorld, we had a final advanced session on getting the most out of Oracle Data Integrator through the use of various advanced techniques.

    The primary way to improve your ODI processes is to choose the optimal knowledge modules for your load and take advantage of the optimized tools of your database, such as Oracle Data Pump and similar mechanisms in other databases. Knowledge modules also allow you to customize tasks, allowing you to codify best practices that are consistently applied by all integration developers.

    The ODI SDK is another very powerful means to automate and speed up your integration development process. It allows you to automate life cycle management, code comparison, repetitive code generation and change of your integration projects. The SDK is easily accessible through Java or scripting languages such as Groovy and Jython.

    Finally, all Oracle Data Integration products provide services that can be integrated into a larger Service Oriented Architecture. This moves data integration from an isolated environment into an agile part of a larger business process environment. All Oracle data integration products can play a part in this:

    Oracle GoldenGate can integrate into business event streams by processing JMS queues or publishing new events based on database transactions.
    Oracle Data Integrator allows full control of its runtime sessions through web services, so that integration jobs can become part of business processes.
    Oracle Data Service Integrator provides a data virtualization layer over your distributed sources, allowing unified reading and updating for heterogeneous data without replicating and moving data.
    Oracle Enterprise Data Quality provides data quality services to cleanse and deduplicate your records through web services.

    Read the article

  • ArchBeat Link-o-Rama for 2012-04-12

    - by Bob Rhubart
    2012 Real World Performance Tour Dates | Performance Tuning | Performance Engineering (www.ioug.org)
    Coming to your town: a full day of real world database performance with Tom Kyte, Andrew Holdsworth, and Graham Wood.
    Rochester, NY - March 8
    Los Angeles, CA - April 30
    Orange County, CA - May 1
    Redwood Shores, CA - May 3

    Oracle Technology Network Developer Day: MySQL - New York (www.oracle.com)
    Wednesday, May 02, 2012, 8:00 AM - 4:30 PM
    Grand Hyatt New York, 109 East 42nd Street, Grand Central Terminal, New York, NY 10017

    Webcast Series: Data Warehousing Best Practices (event.on24.com)
    April 19, 2012 - Best Practices for Workload Management of a Data Warehouse on Oracle Exadata
    May 10, 2012 - Best Practices for Extreme Data Warehouse Performance on Oracle Exadata

    Webcast: Untangle Your Business with Oracle Unified SOA and Data Integration (event.on24.com)
    Date: Tuesday, April 24, 2012. Time: 10:00 AM PT / 1:00 PM ET
    Speakers: Mala Narasimharajan - Senior Product Marketing Manager, Oracle Data Integration, Oracle; Bruce Tierney - Director of Product Marketing, Oracle SOA Suite, Oracle

    The Increasing Focus on Architecture (ArchBeat) (blogs.oracle.com)
    As a "third wave" of computing, Cloud computing is changing how IT organizations and individuals within those organizations approach the creation of solutions.

    Updated SOA Documents now available in ITSO Reference Library (blogs.oracle.com)
    Nine updated documents have just been added to the IT Strategies from Oracle library, including SOA Practitioner Guides, SOA Reference Architectures, and SOA White Papers and Data Sheets. Access to all documents within the ITSO library is free to those with a free Oracle.com membership.

    WebLogic JMS Clustering and Spring | Rene van Wijk (middlewaremagic.com)
    Oracle ACE Rene van Wijk sets up a WebLogic cluster that includes a JMS environment, which will be used by Spring.

    Running Built-In Test Simulator with SOA Suite Healthcare 11g in PS4 and PS5 | Shub Lahiri (blogs.oracle.com)
    Shub Lahiri shows how the pre-installed simulator that comes with the SOA Suite for Healthcare Integration pack can be used as an external endpoint to generate inbound and outbound HL7 traffic on specified MLLP ports.

    In the cloud era, let's start calling IT what it is: 'Innovation Team' | Joe McKendrick (www.zdnet.com)
    Cloud, the third great shift in 50 years of computing, presents a golden opportunity for IT to get out in front and lead.

    Thought for the Day: "Why do we never have time to do it right, but always have time to do it over?" - Anonymous

    Read the article

  • Who is a CMS really for?

    - by Eirc man
    I have lately started discovering Content Management Systems, and I was wondering: who is a CMS really for? What I mean by that: is it only for companies, small businesses or individuals that pay a contractor to make a website whose users can then upload content through an easy interface? Or is it also used by programmers to build their own websites and projects? Would Facebook, Twitter, or StackExchange ever have started by using a CMS, even a very powerful one? Would you, as a programmer, build your own "fancy" website on top of a CMS, for example TYPO3, or would you build it from scratch?

    P.S. To be more clear, here is a summary: would I, as a developer, be stuck if I chose to start with a CMS for a website that must scale to a big base of users? What if I build a website using a CMS, the website explodes in popularity, and then I want to add much more functionality than I had planned? Is it possible that the CMS will limit the growth, because it might not have been built for that kind of scale?

    Read the article

  • Can DVCSs enforce a specific workflow?

    - by dukeofgaming
    So, I have this little debate at work where some of my colleagues (who are actually in charge of administering our Perforce instance) say that workflows are strictly a process thing, and that the tools we use (in this case, the version control system) have no take on it. In other words, their point is that workflows (and their execution) are tool-agnostic.

    My take is that DVCSs are better at encouraging people into more flexible and well-defined workflows, because of the inherent branching occurring in the background (anonymous branches), and because you can enforce workflows through the deployment model you establish (e.g. pull requests through repository management, dictator/lieutenant roles with their machines set up as servers, etc.). I think in CVCSs you have to enforce workflows through policies and policing, because there is only one way to share the code, while in DVCSs you just go with the flow based on the infrastructure/permissions that were set up for you. Even having made these arguments, I'm still unable to fully convince them. Am I saying something the wrong way? If not, what other arguments or examples do you think would be useful to convince them?

    Edit: The main workflow we have been focusing on, because it makes sense to both sides, is the dictator/lieutenants workflow. My argument for this particular workflow is that there is no pipeline in a CVCS (because there is just sharing work in a centralized way), whereas there is an actual pipeline in DVCSs depending on how you deploy read/write permissions. Their argument is that this workflow can be done through branching, and while they do this in some projects (due to policy/policing), in other projects they forbid developers from creating branches.

    Read the article

  • How to install Gyachi on ubuntu 12.10

    - by Oguz Can Sertel
    I would like to use Gyachi on Ubuntu 12.10. I tried the steps below, but it doesn't work. I wanted to compile it myself, but it needs some libs, which made me confused, so I gave up. Thank you for your help.

        sudo add-apt-repository ppa:adilson/experimental
        sudo apt-get update
        sudo apt-get install gyachi

    At the first command, the output is:

        You are about to add the following PPA to your system:
         Contains packages that are not in the official Debian/Ubuntu repositories and newer versions and snapshots which are not available yet in the repositories. Theses packages are experimental. Use them at your own risk.
         More info: https://launchpad.net/~adilson/+archive/experimental
        Press [ENTER] to continue or ctrl-c to cancel adding it
        gpg: keyring `/tmp/tmp3y3i7p/secring.gpg' created
        gpg: keyring `/tmp/tmp3y3i7p/pubring.gpg' created
        gpg: requesting key 27B81625 from hkp server keyserver.ubuntu.com
        gpg: /tmp/tmp3y3i7p/trustdb.gpg: trustdb created
        gpg: key 27B81625: public key "Launchpad Experimental Packages PPA" imported
        gpg: Total number processed: 1
        gpg: imported: 1 (RSA: 1)
        OK

    After sudo apt-get update, this is the output of sudo apt-get install gyachi:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package gyachi

    Read the article

  • What should a domain object's validation cover?

    - by MarcoR88
    I'm trying to figure out how to do validation of domain objects that need external resources, such as data mappers/DAOs. First, here's my code:

        class User
        {
            const INVALID_ID = 1;
            const INVALID_NAME = 2;
            const INVALID_EMAIL = 4;

            int getID();
            void setID(Int i);
            string getName();
            void setName(String s);
            string getEmail();
            void setEmail(String s);
            int getErrorsForInsert();  // returns a bitmask of the INVALID_* constants
            int getErrorsForUpdate();
        }

    My worries are about the uniqueness of the email: checking it would require the storage layer. Reading others' code, two solutions seem to be equally accepted: both perform the uniqueness validation in the data mapper, but some set an error state on the domain object, like user.addError(User.INVALID_EMAIL), while others prefer to throw a totally different type of exception that covers only persistence, like:

        UserStorageException
        {
            const INVALID_EMAIL = 1;
            const INVALID_CITY = 2;
        }

    What are the pros and cons of these solutions?
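    For concreteness, here is roughly what the first option (the mapper reporting back through the domain object's error bitmask) can look like. This is a C# sketch with an invented repository interface and naming, since the question's code is pseudocode:

        // Sketch: the uniqueness rule needs storage, so it lives in the mapper,
        // but the result is still reported through the domain object's bitmask.
        public interface IUserRepository
        {
            bool EmailExists(string email);   // hypothetical storage-layer lookup
            void Insert(User user);
        }

        public class User
        {
            public const int InvalidName  = 2;
            public const int InvalidEmail = 4;

            public string Name  { get; set; } = "";
            public string Email { get; set; } = "";

            private int errors;
            public void AddError(int flag) => errors |= flag;
            public int GetErrorsForInsert() => errors;
        }

        public class UserMapper
        {
            private readonly IUserRepository repository;
            public UserMapper(IUserRepository repository) => this.repository = repository;

            public bool TryInsert(User user)
            {
                // Local rules and storage-dependent rules accumulate in one bitmask.
                if (string.IsNullOrWhiteSpace(user.Name)) user.AddError(User.InvalidName);
                if (repository.EmailExists(user.Email))   user.AddError(User.InvalidEmail);

                if (user.GetErrorsForInsert() != 0) return false;
                repository.Insert(user);
                return true;
            }
        }

    The exception-based alternative would instead have TryInsert throw the persistence-only exception type when EmailExists returns true, keeping the domain object unaware of storage concerns.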

    Read the article

  • Never Bet Against the Impossible

    - by BuckWoody
    My uncle used to say "If a man tells you that his car squirts milk in his eye when you lift the hood, don't bet against that. You'll end up with milk in your eye." My friend Allen White tells me this is taken from a play (and was said about playing cards), but I think the sentiment holds, even in database work. I mentioned the other day that you should allow the other person to talk and actively listen before you propose a solution. Well, I saw a consultant "bet against the impossible" the other day - and it bit her. She explained to the person telling her the problem that the situation simply couldn't exist that way, and he proceeded to show her that it did. She got silent, typed a few things, muttered a little, and then said "well, must be something else." She just couldn't admit she was wrong. So don't go there. If someone explains a problem to you with their database, listen with purpose, and then explore the troubleshooting steps you know to find the problem. But keep your absolutes to yourself. In fact, I have a friend that has recently sent me one of those. He connects to a system with SQL Server Management Studio (SSMS) version 2008 (if I recall correctly) and it shows a certain version number of the target system in the connection tab. Then he connects to it using SSMS 2008 R2 and gets a different number. Now, as far as I know, we didn't change the connection string information, and that's provided by the target system, so this is impossible. But I won't tell him that. Not until I look a little more. :)

    Read the article

  • Balancing dependency injection with public API design

    - by kolektiv
    I've been contemplating how to balance testable design using dependency injection with providing a simple, fixed public API. My dilemma is: people would want to do something like var server = new Server() { ... } and not have to worry about creating the many dependencies and graph of dependencies that a Server(,,,,,,) may have. While developing, I don't worry too much, as I use an IoC/DI framework to handle all that (I'm not using the lifecycle management aspects of any container, which would complicate things further).

    Now, the dependencies are unlikely to be re-implemented. Componentisation in this case is almost purely for testability (and decent design!) rather than creating seams for extension, etc. People will 99.999% of the time wish to use a default configuration. So:

    I could hardcode the dependencies. Don't want to do that; we lose our testing!
    I could provide a default constructor with hard-coded dependencies and one which takes dependencies. That's... messy, and likely to be confusing, but viable.
    I could make the dependency-receiving constructor internal and make my unit tests a friend assembly (assuming C#), which tidies the public API but leaves a nasty hidden trap lurking for maintenance.

    Having two constructors which are implicitly rather than explicitly connected would be bad design in general in my book. At the moment that's about the least evil I can think of. Opinions? Wisdom? See the sketch below for what the two-constructor option can look like.
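    A minimal sketch of that second option, with the two constructors explicitly connected (the convenience constructor simply forwards to the full one), so there is a single construction path. ILogger, ITransport and the concrete defaults are invented placeholders, not a prescription:

        // Sketch: new Server() "just works" for the 99.999% case, while tests
        // inject fakes through the full constructor. Making the full overload
        // internal plus [InternalsVisibleTo("MyTests")] gives the friend-assembly
        // variant discussed above.
        public interface ILogger    { void Log(string message); }
        public interface ITransport { void Send(byte[] payload); }

        public sealed class Server
        {
            private readonly ILogger logger;
            private readonly ITransport transport;

            // Public, simple API: delegates to the full constructor with defaults,
            // so the two constructors are explicitly, not implicitly, connected.
            public Server() : this(new ConsoleLogger(), new TcpTransport()) { }

            // Testable API: the injectable path.
            public Server(ILogger logger, ITransport transport)
            {
                this.logger = logger;
                this.transport = transport;
            }
        }

        internal sealed class ConsoleLogger : ILogger
        {
            public void Log(string message) => System.Console.WriteLine(message);
        }

        internal sealed class TcpTransport : ITransport
        {
            public void Send(byte[] payload) { /* elided for the sketch */ }
        }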

    Read the article

  • Strange resizing of partition after reinstalling Ubuntu 14.04 64bit

    - by Mike
    I started with Windows 7 on a 120GB SSD and Ubuntu 14.04 32-bit installed on a 60GB partition on a separate 1TB HDD. I just did a fresh reinstall of 14.04 64-bit on the 1TB HDD. In the installation setup process, I selected the second choice, "delete Ubuntu 14.04 and all its files, documents, photos etc. and reinstall", expecting it to reinstall the 64-bit OS on the already existing 60GB partition. Instead, it reinstalled Ubuntu on 43.5GB and created a separate 15.8GB partition. So now my disk space for Ubuntu (in Settings > Details) reads as 43.5GB instead of the 60GB that my old 32-bit install had.

    The upside is I can now access my 1TB HDD from my toolbar (and all the files located on it). Before, I could only access it through Windows. (I can also access the SSD, but that was always the case.) Both drives are mounted now.

    My initial reaction was to go into Windows 7 disk management, delete the strange new 15.8GB partition, and extend the 43.5GB partition into the unallocated space. But I'm not sure if this is necessary or would even work. My question is: why did the installer create a 15.8GB partition, shrinking my Ubuntu disk space, and is it useful? I don't want wasted space, so before I go through all my setup of Ubuntu, should I change this? At this time my HDD reads as a 43.5GB partition, a 15.8GB partition, and 874GB exfat32 (939GB total).

    Read the article
