Search Results

Search found 20201 results on 809 pages for 'more info needed'.


  • Visual WebGui helps Dawsons put its Windows Forms HR system on the web

    - by Webgui
    Dawsons needed to upgrade its existing Windows Forms LAN human resources system to allow better, more flexible access to an ever-increasing volume of data, as labor hire clients demand easy access to copies of tickets, licenses, and similar documents. The company has some 30,000 applicants on file, but access to this data was complex and limited with the existing system. The IT department was therefore asked to find a solution that would allow creating a web-based application while reusing as much of the existing Windows Forms code as possible. Visual WebGui was highly regarded in many framework comparisons, so the team decided to give it a try. It didn't take long for them to recognize that Visual WebGui controls appear and react over the web the same as desktop controls. That, and the fact that most of the code was ported directly, saving Dawsons hundreds of development hours, is what makes Visual WebGui so unique and productive. "My first impression of Visual WebGui was perhaps disbelief. Not being a seasoned web programmer, I initially found it hard to accept so much functionality from a web-based application. Also, the speed is exceptional," said John Sainsbury, Financial Controller of the Dawsons Group, who added, "Since working with Visual WebGui, I have showcased parts of the application to our major clients, some of whom use SAP portals, and they are amazed." The result was so satisfying that the company is now looking to produce a mobile version for accessing the labor pool on the go. "By removing the barriers of the local network, Visual WebGui has changed how we can do business," said Lloyd Everist, General Manager, Dawsons Group. Read more about this story

    Read the article

  • VHOST not working in Apache

    - by Starx
    I got a LAMP server working in Ubuntu 11.04. The problem is that the websites have to be enabled and disabled from the terminal, and all of them have to be accessed from http://localhost, which is not very efficient. So I created a VHOST, using some tutorials off the net. Here is the config for it:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName site.com
            ServerAlias www.site.com
            DocumentRoot /home/starx/public_html/site/public
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/starx/public_html/site/public>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/site-error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/site-access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    Still, I can't access the page at http://site.com, but if I access it via http://localhost/ it works. I have disabled all other sites, including the default, and have enabled just one site, i.e. site. How do I fix this?
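    A common cause of this symptom is that site.com simply doesn't resolve to the machine, so the name-based vhost is never matched. A minimal sketch of the usual checks, assuming the vhost file is named site and the server is the local box (the hosts entry is an assumption for local testing only, not a fix for public access):

        # Make site.com resolve to this machine for local testing
        # (assumption: no real DNS record for site.com exists yet)
        echo "127.0.0.1 site.com www.site.com" | sudo tee -a /etc/hosts

        # Enable the vhost and confirm Apache actually picked it up
        sudo a2ensite site
        sudo apache2ctl -S      # should list site.com under *:80
        sudo service apache2 reload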

    Read the article

  • Error when I try to install the desktop integration features for OpenOffice

    - by PENG TENG
    peng@peng-ThinkPad-SL410:~$ cd '/home/peng/Downloads/en-US/DEBS/desktop-integration'
    peng@peng-ThinkPad-SL410:~/Downloads/en-US/DEBS/desktop-integration$ sudo dpkg -i *.deb
    (Reading database ... 357248 files and directories currently installed.)
    Unpacking openoffice.org-debian-menus (from openoffice.org3.4-debian-menus_3.4-9593_all.deb) ...
    dpkg: error processing openoffice.org3.4-debian-menus_3.4-9593_all.deb (--install):
     trying to overwrite '/usr/bin/soffice', which is also in package libreoffice-common 1:3.6.2~rc2-0ubuntu3
    /usr/bin/gtk-update-icon-cache
    gtk-update-icon-cache: Cache file created successfully.
    /usr/bin/gtk-update-icon-cache
    gtk-update-icon-cache: Cache file created successfully.
    Processing triggers for menu ...
    Processing triggers for hicolor-icon-theme ...
    Processing triggers for gnome-icon-theme ...
    Processing triggers for shared-mime-info ...
    Unknown media type in type 'all/all'
    Unknown media type in type 'all/allfiles'
    Unknown media type in type 'uri/mms'
    Unknown media type in type 'uri/mmst'
    Unknown media type in type 'uri/mmsu'
    Unknown media type in type 'uri/pnm'
    Unknown media type in type 'uri/rtspt'
    Unknown media type in type 'uri/rtspu'
    Processing triggers for bamfdaemon ...
    Rebuilding /usr/share/applications/bamf.index...
    Processing triggers for desktop-file-utils ...
    Processing triggers for gnome-menus ...
    Errors were encountered while processing:
     openoffice.org3.4-debian-menus_3.4-9593_all.deb
    Can anyone solve the problem?
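    The dpkg output shows a straight file conflict: the OpenOffice menu package wants to install /usr/bin/soffice, which is owned by the already-installed libreoffice-common. A minimal sketch of the usual way out, assuming you are willing to drop the LibreOffice packaging in favor of OpenOffice (the package name comes from the output above; review what apt proposes to remove before confirming):

        # Remove the conflicting LibreOffice packaging first
        sudo apt-get remove libreoffice-common

        # Then retry the desktop-integration package
        cd ~/Downloads/en-US/DEBS/desktop-integration
        sudo dpkg -i openoffice.org3.4-debian-menus_3.4-9593_all.deb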

    Read the article

  • Security vulnerability and nda's [closed]

    - by Chris
    I want to propose a situation and gain insight from the community's thoughts. A customer, call them Customer X, has a contract with a vendor, Vendor Y, to provide an application and services. Customer X discovers a serious authentication vulnerability in Vendor Y's software. Vendor Y and Customer X have a discussion. Vendor Y acknowledges/confirms the flaw. Vendor Y confirms they will put in effort to fix it. Customer X asks Vendor Y to inform all customers impacted by this. The vendor agrees. Fast forward 2 months, and the flaw has not been fixed. Patches were applied to mitigate, but the flaw still exists. However, no customers were informed of the issue. At this point Customer X contacts Vendor Y to determine the status and understand why customers were not informed. The vendor nicely reminds the customer they are under an NDA and are still working on the issue. A few questions/discussion pieces out of this. Does discussing a software flaw with a vendor imply you have agreed to any type of NDA on its disclosure? Additionally, what rights does Customer X have to inform other customers of this vulnerability if the vendor does not appear willing to comply? I (the OP) am under the impression that when this situation occurs, you are supposed to notify the vendor of the issue and provide them with ample time to respond, and if they do not respond, you are free to do what you wish with the information. I am thinking back to the MIT/subway incident, where the students contacted the transit authorities, the transit authorities didn't respond in a timely fashion, so the students disclosed the information publicly on their own. A few things to note: I am not the customer in the above situation. Also, let's assume, for purposes of keeping the discussion in line, that Customer X has no intention of disclosing the information; they are merely concerned and interested in making sure other customers are aware until it is fixed, so they do not experience a major security breach. (More information can be supplied if needed to add context to the question.)

    Read the article

  • How do I get my Lexmark x4650 printer working?

    - by Fallen Dohingy
    I think that my printer stopped working with the switch to GNOME 3 or Unity. Yes, I have tried 32- and 64-bit OSes. Here is the driver. In order to actually install it, you need to extract it, open a terminal, type sudo followed by a space, then drag the script into the terminal window. Here is what it said in the driver install window:

        Extracting file: printdriver.te
        Extracting file: lexmark-08z-series-driver-1.0-1.i386.deb
        Extracting file: launcher.c
        Extracting file: launcher
        fallendohingy@Ubuntu-Inspiron-15R:~$ sudo '/home/fallendohingy/Downloads/lexmark-08z-series-driver-1.0-1.i386.deb.sh'
        [sudo] password for fallendohingy:
        Verifying archive integrity... All good.
        Uncompressing nixstaller..............................................................
        Collecting info for this system...
        Operating system: linux
        CPU Arch: x86_64
        Warning: No installer for "x86_64" found, defaulting to x86...
        TRACKING IDENT = 170209
        cpu speed = 2394 MHz
        ram size = 3762.69921875 MB
        hd avail = 74348 MB
        (gtk:17645): GdkPixbuf-WARNING **: Cannot open pixbuf loader module file '/usr/lib/i386-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders.cache': No such file or directory
        (gtk:17645): GdkPixbuf-WARNING **: Cannot open pixbuf loader module file '/usr/lib/i386-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders.cache': No such file or directory
        (gtk:17645): GdkPixbuf-WARNING **: Cannot open pixbuf loader module file '/usr/lib/i386-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders.cache': No such file or directory
        (gtk:17645): GdkPixbuf-WARNING **: Cannot open pixbuf loader module file '/usr/lib/i386-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders.cache': No such file or directory
        /usr/lib/gio/modules/libgvfsdbus.so: wrong ELF class: ELFCLASS64
        Failed to load module: /usr/lib/gio/modules/libgvfsdbus.so
        Extracting file: lsbrowser
        Extracting file: lsusbdevice
        Using dpkg installation
        =============================
        Execute: dpkg -i --force-architecture lexmark-08z-series-driver-1.0-1.i386.deb > /tmp/selfgz17540/pkg/files/dpkg_msgs
        =============================
        =============================
        Execute: rm lexmark-08z-series-driver-1.0-1.i386.deb
        =============================
        =============================
        Execute: /sbin/udevadm control --reload-rules
        =============================
        Successfully installed the .deb Lexmark drivers.
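    For what it's worth, the GdkPixbuf warnings and the "wrong ELF class: ELFCLASS64" line suggest the 32-bit installer cannot find 32-bit GTK libraries on this 64-bit system. A hedged guess at a first step, not a confirmed fix (ia32-libs was the catch-all 32-bit compatibility package on Ubuntu releases of this era):

        # Install 32-bit compatibility libraries so the i386 installer
        # can load its GTK/pixbuf modules (assumption: pre-13.10 Ubuntu)
        sudo apt-get install ia32-libs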

    Read the article

  • displaying multi-section html documents - best practices

    - by ecpepper
    I work at a research organization and we publish a lot of large-ish documents, usually organized in sections. What I want to know is how best to present these multi-section documents on our website. Presently, what I do is load the entire document as a single page, with each section as its own div. Then I show and hide divs as needed via a table of contents and "next" and "prev" buttons. The advantages to this are mainly: 1) that you can move between sections very quickly, 2) it produces consistent analytics (when a page is loaded, I know a report is being read). The disadvantages, however, are real: Readers can't take advantage of browser back/forward buttons to move between sections. It's complicated to create direct links to individual sections (I can do it with javascript but it's not easy for other people to grab and share). For long reports, you have to wait for the full report to load before you can move around (and that can include hordes of images and charts). Do other people have thoughts on better ways to organize this? Here's an example of the current system: http://massbudget.org/825
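    For the first two disadvantages specifically, a hash-based router keeps the show/hide approach while making sections linkable and back/forward-friendly. A minimal sketch in plain JavaScript, assuming each section div has a class like report-section and an id like section-1 (those names are illustrative, not from the original site):

        // Show one section div and hide the others
        function showSection(id) {
          var sections = document.querySelectorAll('.report-section');
          for (var i = 0; i < sections.length; i++) {
            sections[i].style.display = (sections[i].id === id) ? 'block' : 'none';
          }
        }

        // Navigating sets the fragment, so each section gets a shareable URL
        // like http://example.org/report#section-2 ...
        function goTo(id) {
          location.hash = id; // triggers a hashchange event
        }

        // ...and the browser back/forward buttons replay hash changes
        window.addEventListener('hashchange', function () {
          showSection(location.hash.slice(1) || 'section-1');
        });

        // Show the right section on initial load, including direct links
        showSection(location.hash.slice(1) || 'section-1');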

    Read the article

  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've had the need for pseudo-ETL applications that are able to extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place it in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create console applications in C# that work pretty well, and they're usually structured something like this:

        static void Main()
        {
            // Load parameters from app.config.

            // Get documents from queue.
            var files = someInterface.GetFiles(someFilterOrRegexPattern);

            foreach (var file in files)
            {
                // Extract metadata from the file.

                // Validate some attributes of the file; add any validation errors
                // to an in-memory structure (e.g. List<ValidationErrors>).
                if (isValid)
                {
                    var fileData = File.ReadAllBytes(file);
                    // Upload using some wrapper for an ORM or DAL.
                    someInterface.Upload(fileData, meta.Param1, meta.Param2, ...);
                }
                else
                {
                    // Bounce the file.
                }
            }

            // Report any validation errors (via message bus or SMTP or some such).
        }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!
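    To make the "Worker" shape concrete, here is a minimal sketch of the constructor-injected arrangement described above; the interface and type names (IFileQueue, IContentUploader, DocumentEtlWorker) are illustrative, not from the actual codebase:

        public interface IFileQueue
        {
            System.Collections.Generic.IEnumerable<string> GetFiles();
        }

        public interface IContentUploader
        {
            void Upload(byte[] data /* , metadata params... */);
        }

        public class DocumentEtlWorker
        {
            private readonly IFileQueue _queue;           // wraps the networked queue folder
            private readonly IContentUploader _uploader;  // wraps DbAmp/SFDC access

            public DocumentEtlWorker(IFileQueue queue, IContentUploader uploader)
            {
                _queue = queue;
                _uploader = uploader;
            }

            public void Run()
            {
                foreach (var file in _queue.GetFiles())
                {
                    // extract metadata, validate, upload or bounce...
                }
            }
        }

    The console app's Main then just composes the worker from concrete implementations, which keeps the ETL logic testable regardless of whether it is hosted in a console app, a Windows service, or a scheduled job.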

    Read the article

  • Using XML in a Flex Website to Improve SEO

    - by Laxmidi
    Hi, I've got a Flex 3 site called www.brainpinata.com that's a trivia game. Basically, everything in the site is pulled from a database-- the questions, choices, and answers. So, unfortunately, Google doesn't index my content. So, I'm trying to think of ways to improve the situation: A) If I took my database data and put it in an XML file which was in the website's root directory, would this work? Would it violate any Google policy? (The info would be the same as in the db-- so nothing shady.) Would I have to "wire" the XML into my site or would it be enough to just have the XML sitting in the root directory? B) Another idea is to use the noscript tag and load the XML content there. As I understand it Google indexes content that people who have Javascript turned off would see. I know Flex/Actionscript 3, and unfortunately, I don't know how to load XML content with HTML. Does anyone know of an example where a Flex site uses XML for the noscript content? Thank you. -Laxmidi
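    On option B: a browser will not fetch and render a raw XML file inside a noscript tag by itself; the fallback content has to be ready-made HTML, so the usual approach is to have the server render the database/XML content into the page. A minimal sketch of the shape, with placeholder content (the markup below is illustrative only; any server-side templating that emits HTML from the same database works):

        <body>
          <!-- Flex/SWF embed goes here for normal visitors -->

          <noscript>
            <!-- Server-rendered copy of the trivia content, generated
                 from the same database the Flex app reads -->
            <h1>Brainpinata trivia</h1>
            <div id="questions">
              <!-- e.g. one list item per question/answer, emitted by the server -->
            </div>
          </noscript>
        </body>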

    Read the article

  • Developing professionally for iOS, Android and web - an insight

    - by Scott Roberts
    This is not really a question on how to develop all three; I know the various cross-platform approaches. I want to know, from a developer's standpoint, how hard it is to develop iOS, Android, and web apps at the same time. I am currently in my first job as a mobile/web developer. I have already developed my first iPhone/iPad app, and now I have to develop the Android version, because the web version I tried just didn't perform well enough and web databases did not seem to make the cut. But I am not sure it's possible to be good at developing all three in terms of remembering all the APIs. I wouldn't say I have an issue with the programming languages, just with how to use the APIs for the various platforms. Also, all the other languages I look at in my spare time just make me feel like I am spreading myself too thin. Is it feasible for one person to develop iOS, Android, and web apps? Should I think about reducing it to iOS and web-based apps? I develop everything by myself, so I have no one to discuss the best solutions with, and I am just trying to work it out as I go along. So, any cross-platform developers out there? Do companies have different teams for different platforms? Any insight would help me get my head together. Hopefully this question makes sense.

    Read the article

  • ODI 12c - Getting up and running fast

    - by David Allan
    Here's a quick A-B-C to show you how to quickly get up and running with ODI 12c, from getting the software to creating a repository via wizard or the command line, then installing an agent for running load plans and the like.

    A. Get the software from OTN and install studio. Check out this viewlet here for quickly doing this.

    B. Create a repository using the RCU; check out this viewlet here, which uses the FMW Repository Creation Utility. You can also silently create (and drop) a repository using the command line, which is really easy:

        .\rcu -silent -createRepository -connectString yourhost:1521:orcl.st-users.us.oracle.com -dbUser sys -dbRole sysdba -useSamePasswordForAllSchemaUsers true -schemaPrefix X -component ODI -component IAU -component IAU_APPEND -component IAU_VIEWER -component OPSS < passwords.txt

    where the passwords file contains entries such as:

        sysdba_passwd
        newschema_passwd
        odi_user_passwd
        D
        workreposname
        workrepos_passwd

    You can find details about the silent use of RCU here in the FMW documentation.

    C. Quickly create an agent for executing load plans and the like. There is a great OBE for this, check it out here. If you are on your laptop and just want as minimal an agent as possible, then this link is a must.

    With these three steps you are ready to get to the fun stuff! Check out more OBEs here, and keep on the lookout for more!

    Read the article

  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    Being a development team lead, the task of interviewing new candidates was part of my job. Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: "Name some benefits of object-oriented development." Nearly every time, the candidate would chime in with a plethora of canned answers, which typically included: "it helps ease code reuse." Of course, this is a gross oversimplification. Tools only ease reuse; it's developers that ultimately cause code to be reusable or not, regardless of the language or methodology.

    But it did get me thinking... we always used to say that as part of our mantra as to why Object-Oriented Programming was so great. With polymorphism, inheritance, encapsulation, etc. we in essence set up the concepts to help facilitate reuse as much as possible. And yes, as a developer now of many years, I unquestionably held that belief for ages before it really struck me how my views on reuse have jaded over the years. In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now. It used to be I was in complete opposition to that view, but more and more I've come to see the logic in it. Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand all time. Nearly without fail, all of these pieces of code become obsolete in a matter of months or years.

    It's not that I don't like reuse; it's just that reuse is hard. In fact, reuse is DAMN hard. Many times it is just a distraction that eats up architect and developer time, and worse yet can be counter-productive and force wrong decisions. Now don't get me wrong, I love the idea of reusable code when it makes sense. These are the few cases where you are designing something that is inherently reusable. The problem is, most business-class code is inherently unfit for reuse!

    Furthermore, code that is reusable will often fail to be reused if you don't have the proper framework in place for effective reuse, one that includes standardized versioning, building, releasing, and documenting of the components. That should always be standard across the board when promoting reusable code. All of this is hard, and it should only be done when you have code that is truly reusable, or you will be exerting a large amount of development effort for very little bang for your buck.

    But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused. First, let's look at an extension method. There are many times where I want to kick off a thread to handle a task, and when I want to reign that thread in, of course I want to do a Join on it. But what if I only want to wait a limited amount of time and then Abort? Well, I could of course write that logic out by hand each time, but it seemed like a great extension method:

        public static class ThreadExtensions
        {
            public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait)
            {
                bool isJoined = false;

                if (thread != null)
                {
                    isJoined = thread.Join(timeToWait);

                    if (!isJoined)
                    {
                        thread.Abort();
                    }
                }
                return isJoined;
            }
        }

    When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable. Some of them are standard OO principles, and some are kind-of home grown litmus tests:

    - Single Responsibility Principle (SRP): The only reason this extension method need change is if the Thread class itself changes (one responsibility).
    - Stable Dependencies Principle (SDP): This method only depends on classes that are more stable than it is (System.Threading.Thread), and in itself is very stable, hence other classes may safely depend on it. It is also not dependent on any business domain, and thus isn't subject to changes as the business itself changes.
    - Open-Closed Principle (OCP): This class is inherently closed to change.
    - Small and Stable Problem Domain: This method only cares about System.Threading.Thread.
    - All-or-None Usage: A user of a reusable class should want the functionality of that class, not parts of that functionality. That's not to say they must use every method, but they shouldn't be using a method just to get half of its result.
    - Cost of Reuse vs. Cost to Recreate: Since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as "ready-to-go" and already unit tested (important!) and available through a standard release cycle (very important!).

    Okay, all seems good there. Now let's look at an entity and DAO. I don't know about you all, but there have been times I've been in organizations that get the grand idea that all DAOs and entities should be standardized and shared. While this may work for small or static organizations, it's near ludicrous for anything large or volatile.

        namespace Shared.Entities
        {
            public class Account
            {
                public int Id { get; set; }

                public string Name { get; set; }

                public Address HomeAddress { get; set; }

                public int Age { get; set; }

                public DateTime LastUsed { get; set; }

                // etc, etc, etc...
            }
        }

        ...

        namespace Shared.DataAccess
        {
            public class AccountDao
            {
                public Account FindAccount(int id)
                {
                    // dao logic to query and return account
                }

                ...
            }
        }

    Now to be fair, I'm not saying there doesn't exist an organization where some entities may be extremely static and unchanging. But at best such entities and DAOs will be problematic cases of reuse. Let's examine those same tests:

    - Single Responsibility Principle (SRP): The reasons to change for these classes will be strongly dependent on what the definition of the account is, which can change over time and may have multiple influences depending on the number of systems an account can cover.
    - Stable Dependencies Principle (SDP): This method depends on the data model beneath itself, which in turn is largely dependent on the business definition of an account, which can be inherently unstable.
    - Open-Closed Principle (OCP): This class is not really closed for modification. Every time the account definition changes, you'd need to modify this class.
    - Small and Stable Problem Domain: The definition of an account is inherently unstable and in fact may be very large. What if you are designing a system that aggregates account information from several sources?
    - All-or-None Usage: What if your view of the account encompasses data from 3 different sources but you only care about one of those sources or one piece of data? Should you have to take the hit of looking up all the other data? On the other hand, should you have ten different methods returning portions of data in the chunks people tend to ask for? Neither is really a great solution.
    - Cost of Reuse vs. Cost to Recreate: DAOs are really trivial to rewrite, and unless your definition of an account is EXTREMELY stable, the cost to promote, support, and release a reusable account entity and DAO is usually far higher than the cost to recreate as needed.

    It's no accident that my case for reuse was a utility class and my case for non-reuse was an entity/DAO. In general, the smaller and more stable an abstraction is, the higher its level of reuse. When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects. Now, of course, many of you will point to nHibernate and Entity for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc).

    As we got deeper and deeper into the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn't "smell" right. We ended up having session after session of design meetings to try and find the right way to share these data access components. When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple: "Reuse is hard..." And that's when I realized that while reuse is an awesome goal and we should strive to make code maintainable, often you end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn't.

    Think about the times you've worked in a company where, in the design session, people fight over the best way to implement a class to make it maximally reusable, extensible, and any other buzzwordable. Then think about how quickly that design became obsolete. Many times I set out to do a project and think, "yes, this is the best design, I can extend it easily!" only to find out the business requirements change COMPLETELY in such a way that the design is rendered invalid. Code, in general, tends to rust and age over time. As such, writing reusable code can often be difficult, many times ends up being a futile exercise, and worse yet, sometimes makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability.

    So what do I think are reusable components?

    - Generic Utility classes: these tend to be small classes that assist in a task and have no business context whatsoever.
    - Implementation Abstraction Frameworks: home-grown frameworks that try to isolate changes to third party products you may be depending on (like writing a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc).
    - Simplification and Uniformity Frameworks: to some extent this is similar to an abstraction framework, but there may be one chosen provider plus a development-shop mandate to perform certain complex items in a certain way. Or, perhaps, to simplify and dumb-down a complex task for the average developer (such as implementing a particular development shop's method of encryption).

    And what are less reusable?

    - Application and Business Layers: tend to fluctuate a lot as requirements change and new features are added, so they tend to be an unstable dependency. May be reused across applications but also very volatile.
    - Entities and Data Access Layers: these tend to be tuned to the scope of the application, so reusing them can be hard unless the abstraction is very stable.

    So what's the big lesson? Reuse is hard. In fact it's damn hard. And much of the time I'm not convinced we should focus too hard on it. If you're designing a utility or framework, then by all means design it for reuse. But you must also really set down a good versioning, release, and documentation process to maximize your chances. For anything else, design it to be maintainable and extendable, but don't waste the effort on reusability for something that most likely will be obsolete in a year or two anyway.

    Read the article

  • Microsoft Silverlight Analytics Framework - Day 2 Part 2 of MIX 2010

    - by GeekAgilistMercenary
    I went to the session on the Microsoft Silverlight Analytics Framework (MSAF) today while here at MIX 2010. It was a great walk through the features, the ideas, and the end goal. Michael Scherotter did a great job of laying out the ideas, intentions, and functionality behind the framework. The framework is built around Silverlight Behaviors. If you aren't sure what behaviors are, check out these entries from Nikhilk.net: Silverlight Behaviors, Silverlight 3 Drag Behavior, An Introduction to Behaviors, Triggers, and Actions, and of course the MSDN documentation on the matter. One of the key features of the framework is support for out-of-browser scenarios, which works perfectly with our Webtrends DX Web Services. Another is offline scenarios, which, again, we have worked toward supporting at Webtrends DC Web Services via caching and other criteria as needed. A feature that I was really stoked about is the Microsoft Expression Blend integration, which removes the need for coding and thus simplifies the addition of analytics components based on events or other actions within a Silverlight application. This framework also easily supports A/B testing (again, something we do quite a bit of at Webtrends with Webtrends Optimize). The last thing I really wanted to point out is the control support this framework already has from Telerik RadControls, Smooth Streaming Media Element, and the Microsoft Silverlight Media Framework Player 1.0. These are implemented with behaviors and handlers exposed via MEF (Managed Extensibility Framework). All in all, a great second day, a great analytics framework for Silverlight, and a great presentation. Until tomorrow, adieu. For this original entry and all sorts of other technical gibberish I write, check out my other blog, Agilist Mercenary.

    Read the article

  • Netretail's online retail operation benefits from personal contact

    - by christopher.jones
    Hot on oracle.com is a snapshot of Netretail Holding B.V. profiling their use of PHP and Oracle technology such as Oracle RAC cluster database to become a leading online retailer across Central and Eastern Europe. We've also just refreshed our key PHP Scalability and High Availability whitepaper which talks about connection pooling (DRCP) and Fast Application Notification (FAN). We brought it up to date for 11gR2 and PHP 5.3. It now includes the new 11gR2 V$CPOOL_CONN_INFO view, the new columns for DBA_CPOOL_INFO, information about LOGOFF triggers, and information about the support for Client Result Caching with DRCP. Back to Netretail. Two of their secrets to success are keeping technically up to date, and networking. That is, networking in the business sense. I had the pleasure of meeting Michal Táborský (@whizz), the Chief System Architect, when he was in California for a Velocity conference. Michal took time to visit Oracle HQ and talk with our developers about his then current architecture and future needs. I also met his manager at last year's Oracle OpenWorld conference. Having built up a relationship with us, Netretail now has access to Oracle Development staff. While this will never bypass Oracle Support (which have tools, systems etc that are needed and useful for resolving issues), it makes communication easier for some classes of questions. It helps discussions that will let us improve Oracle products, and make Netretail stronger. I like this. And there's no reason why you can't talk with us too. You know where to email me.
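    For the curious, the DRCP connection pooling mentioned in the whitepaper is exposed to PHP through the standard OCI8 connect calls. A minimal sketch, assuming an 11g database with the connection pool already started by the DBA (the credentials and connect string below are placeholders):

        <?php
        // ':pooled' in the Easy Connect string routes the session through
        // the Database Resident Connection Pool (DRCP)
        $conn = oci_pconnect('hr', 'hr_password', 'dbhost/orcl:pooled');

        $stid = oci_parse($conn, 'SELECT sysdate FROM dual');
        oci_execute($stid);
        $row = oci_fetch_array($stid, OCI_ASSOC);
        echo $row['SYSDATE'], "\n";
        ?>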

    Read the article

  • Tyrus 1.3

    - by Pavel Bucek
    I'm pleased to announce that a new version of Tyrus was released today. It contains some interesting features, like asynchronous handling of the connectToServer method call, optimised broadcast support, and lots of stability and performance improvements. As before, I will follow up with more blog posts about selected features later. Some of them are already described in the updated User Guide.

    Complete list of bugfixes and new features:

    - TYRUS-262: @OnOpen fails for programmatically deployed annotated endpoint.
    - TYRUS-261: Text decoder disables Binary decoder if both are configured for the same server endpoint; endpoint cannot receive binary messages anymore.
    - TYRUS-248: Consolidate Extension representation
    - TYRUS-258: Tyrus always creates HTTP session
    - TYRUS-251: DecodeException is not passed to @OnError method
    - TYRUS-252: Encoder for primitive types cannot be overridden.
    - TYRUS-250: Sec-WebSocket-Protocol header cannot be present when there is no negotiated subprotocol
    - TYRUS-249: WebSocketContainer MaxSessionIdleTimeout is not propagated to Session (server side)
    - TYRUS-257: Async timeout value set on WebSocketContainer is not propagated to RemoteEndpoints (Session#getAsyncRemote())
    - TYRUS-253: DecodeException is not passed to @OnError method from session.getAsyncRemote().sendObject
    - TYRUS-243: WebSocketContainer.connectToServer can block for seconds
    - TYRUS-227: Build failure: AsyncBinaryTest
    - TYRUS-226: Tyrus build error: OnCloseTest
    - TYRUS-247: Make all samples use TestContainer for tests
    - TYRUS-238: Refactor WebSocketEngine (SPI and Impl)
    - TYRUS-157: Submitting tasks to an injected ManagedExecutorService in a ServerEndpoint does not work
    - TYRUS-137: Improve subprotocols/extensions headers parsing
    - TYRUS-133: Some Broadcast(er) API is needed in order to optimize chat-like usecases
    - TYRUS-245: ServerEndpointConfig#getConfigurator returns Configurator not used when getNegotiatedExtensions has been called.
    - TYRUS-240: Clean up duplicated static fields (strings)
    - TYRUS-230: When Session is invalidated, the close reason is not 1006
    - TYRUS-190: Remove TyrusServletServerContainer (merge with TyrusServerContainer)
    - TYRUS-65: Implement common utilities for testing
    - TYRUS-242: Tyrus does not run on JDK8 compact2 profile
    - TYRUS-237: RemoteEndpoint.Async#sendBinary does not throw IllegalArgumentException when data is null
    - TYRUS-239: Improve WebSocketEngine ByteBuffer handling
    - TYRUS-232: Refactor grizzly/servlet container (Tyrus-SPI)
    - TYRUS-235: Fix findbugs errors in tests/servlet
    - TYRUS-234: Remove support for older protocol version
    - TYRUS-229: Session#setMaxIdleTimeout() will kill the session whether or not the session actually timed out
    - TYRUS-225: Invalidation of (Servlet) HttpSession does not invalidate WebSocket Session
    - TYRUS-153: Static map in TyrusRemoteEndpoint
    - TYRUS-201: Wrong ServletInputStream#isReady() usage in TyrusHttpUpgradeHandler
    - TYRUS-224: Refactor Connection#write and ConnectionImpl to use CompletionHandler only (no Future)
    - TYRUS-221: wss:// doesn't appear to function correctly via an http proxy.
    - TYRUS-146: Support requests from client to secured services ("wss")
    - TYRUS-223: Message can be written multiple times when running on Servlet container
    - TYRUS-71: ErrorCollector not properly used in AnnotatedEndpoint class.

    Tyrus 1.3 will be integrated in Glassfish trunk soon. You can download a nightly build or upgrade to the newer Tyrus manually (replace all Tyrus jars).
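    A quick taste of the asynchronous connectToServer handling mentioned above: a minimal client sketch assuming Tyrus's ClientManager (the endpoint class and URI below are placeholders, not from this release announcement):

        import java.net.URI;
        import java.util.concurrent.Future;
        import javax.websocket.ClientEndpoint;
        import javax.websocket.OnMessage;
        import javax.websocket.Session;
        import org.glassfish.tyrus.client.ClientManager;

        @ClientEndpoint
        class MyClientEndpoint {
            @OnMessage
            public void onMessage(String message) {
                System.out.println("received: " + message);
            }
        }

        public class AsyncConnectDemo {
            public static void main(String[] args) throws Exception {
                ClientManager client = ClientManager.createClient();

                // Returns immediately; the handshake happens on a background thread
                Future<Session> future = client.asyncConnectToServer(
                        MyClientEndpoint.class,
                        URI.create("ws://localhost:8025/ws/echo")); // placeholder URI

                Session session = future.get(); // block only when the session is needed
                session.getBasicRemote().sendText("hello");
            }
        }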

    Read the article

  • Almost at our first year anniversary!

    - by Vizioz Limited
    It has been a hectic first year at Vizioz and things are still going from strength to strength. 11 months ago I started Vizioz with zero capital investment in the middle of a recession, which to some may seem a daunting prospect, but to others, including myself, it was the challenge I needed to make me want to get up in the morning :) I wanted to prove that even in the current financial climate it is still possible to start a new business. We are still experiencing the normal growing pains of a small business, but this is something we just need to work our way through. It is amazing how much paperwork and administration there is in running a small business: office admin, insurance, VAT and, for the last few months, PAYE. For the last 9 months we have shared an office with another small business called Little Big Ideas. They are a design agency working across a broad spectrum of design, from branding and print to digital. Last month we moved to a larger office and now have room for 8 of us, so we need a couple more clients to help produce enough work to fill the space and grow to the next level. As well as moving office, 2 months ago I blogged about my first employee, Colin, starting work for me. He has picked up Umbraco very well and has mastered the art of good CSS design; as the majority of our clients are large multi-nationals, they still require support for IE6, which, as all web developers know, is the nightmare of all web browsers. This month has seen the next step in the growth of Vizioz, as I have taken on another PhD graduate called Pricilla. Welcome to the team! This month we plan to launch our own website to showcase some of the sites we have built over the past 11 months and to allow potential clients to see what we can offer. We might still be relatively small, but we have some great case studies to show, and with two PhD graduates on the team we have great talent capable of producing complex and innovative solutions for our clients. As soon as we have launched our new website I will blog again about what the future holds for Vizioz and what we can offer our prospective clients, as well as the obvious Umbraco CMS solutions.

    Read the article

  • Guide to the web development ecosystem

    - by acjohnson55
    I'm a long-time software developer, and I've been thrown in the deep, deep end of developing from the ground up what will hopefully be a highly scalable and interactive web application. I've been out of the web game for about 8 years, and even when I was last in it, I wasn't exactly on the cutting edge. I think I've made judicious design decisions and I'm quite happy with the progress I've been making so far, but new, hot web technologies keep crawling out of the woodwork and into my headspace, forcing me to continually revalidate my implementation decisions. Complicating things even further is the preponderance of out-of-date information and the difficulty of knowing what is out of date in the first place. What I'm wondering is, are there any comprehensive books or guides dedicated to compiling and comparing the technologies out there, end-to-end in the web application stack? I'm happy to learn new techs on demand, but I don't like learning about them after I've already spent time going in another direction. I'm looking for the sort of executive info a CTO might read to make sure the best architectural decisions are being made. And just to be clear, this is a question about resources, not about specific technology suggestions.

    Read the article

  • Geometry Shader: distortions

    - by Christophe Lionet
    This is a cross-question from Stack Overflow; I thought it would be more appropriate here. There is a lot of code I could be posting. To avoid overloading the page with code, I will post any part of it if requested. I am working from the ParticleGS DirectX 10 sample to build a geometry-shader-based particle system in DirectX 11. Using the sample code, and changing it to my liking, I am able to draw a single quad (which is essentially one particle constantly recreating itself). However, I noticed a problem which was similar to one I once had: the rendered shape is distorted. Here is a video showcasing what is happening: http://youtu.be/6NY_hxjMfwY Now, I used to have this issue when using several effects together, when I realised that I needed to explicitly set the geometry shader to null for the other effects. I solved that problem, as you can see in the video, as the rest of the scene is drawing properly. Note that some sides are being culled somehow, although I turned off culling in my main render state. The texturing is fine too; the texture draws with appropriate proportions relative to the quad. I really don't see what I could be doing wrong here... what would cause the geometry shader to behave in such a way? Again, I will post any piece of code you request.
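    For reference, the "set the geometry shader to null" fix mentioned above is a one-liner on the D3D11 device context, and a no-cull rasterizer state looks like this. A minimal sketch, assuming an existing ID3D11Device/ID3D11DeviceContext pair named device and context:

        // Unbind the geometry shader before passes that don't use one
        context->GSSetShader(nullptr, nullptr, 0);

        // Rasterizer state with culling disabled
        D3D11_RASTERIZER_DESC rd = {};
        rd.FillMode = D3D11_FILL_SOLID;
        rd.CullMode = D3D11_CULL_NONE;

        ID3D11RasterizerState* noCull = nullptr;
        device->CreateRasterizerState(&rd, &noCull);
        context->RSSetState(noCull);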

    Read the article

  • Is this kind of design - a class for Operations On Object - correct?

    - by Mithir
    In our system we have many complex operations which involve many validations and DB activities. One of the main pieces of business functionality could have been designed better. In short, there was no separation of layers, and the code would only work for the scenario it was first designed for, and now there are more scenarios (like requests from an API or from other devices). So I had to redesign. I found myself moving all the DB code into objects which act like business-to-DB objects, and I've put all the business logic in an Operator kind of class, which I've implemented like this: First, I created an object which holds all the information needed for the operation; let's call it InformationObject. Then I created an OperatorObject which takes the InformationObject as a parameter and acts on it. The OperatorObject should activate different objects, validate or check for existence or any scenario in which the business logic is compromised, and then perform the operation according to the information in the InformationObject. So my question is: is this kind of implementation correct? PS: this Operator only works on a single business-wise operation.
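    A minimal sketch of the shape being described, in C# (the language is an assumption, as are all the type names; a money transfer is used purely as a stand-in operation):

        // Holds everything the operation needs up front (the "InformationObject")
        public class TransferInformation
        {
            public int SourceAccountId { get; set; }
            public int TargetAccountId { get; set; }
            public decimal Amount { get; set; }
        }

        // Hypothetical business-to-DB object
        public interface IAccountStore
        {
            bool Exists(int accountId);
            void Transfer(int from, int to, decimal amount);
        }

        // Encapsulates the business logic for one operation (the "OperatorObject")
        public class TransferOperator
        {
            private readonly IAccountStore _store;

            public TransferOperator(IAccountStore store) { _store = store; }

            public void Execute(TransferInformation info)
            {
                // validations: existence checks, business rules, etc.
                if (!_store.Exists(info.SourceAccountId))
                    throw new InvalidOperationException("Unknown source account");

                // ...then perform the DB activity through the store
                _store.Transfer(info.SourceAccountId, info.TargetAccountId, info.Amount);
            }
        }

    This is essentially a command/handler split, so it may help to compare it against the Command pattern when weighing the design.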

    Read the article

  • What are my options for sharing music between Windows & Ubuntu on the same network?

    - by jgbelacqua
    We have a few Windows (XP & 7) and Ubuntu machines in the house sharing a wireless connection, and want to share music between them. If possible, I would like to be able to serve music from both Windows and Ubuntu (but it doesn't have to be at the same time). I don't know much about sharing folders or streaming, but I'm guessing both would be options (that is, using a local client to access a shared song, or a local client to access a shared stream). I want to be able to share the music between the systems as simply as possible. Bonus points (but not requirements) for:

    - cross-platform -- same application on both Windows and Ubuntu?
    - available on startup (via daemon or autostart or whatnot)
    - open source

    More info:

    - All systems have dynamic addresses (DHCP) supplied from the ISP-supplied wireless router.
    - There are several gigabytes of music on one Windows XP box and one Ubuntu 10.10 box.
    - The music is not well-sorted (I'm thinking this might have an impact on UI usability).
    - Only has to be available internally (private address space behind the wireless router).
    - Bandwidth is not a problem.
    - We don't have (legitimate) admin access to the wireless router.

    Read the article

  • SD Card Reader not working in Ubuntu 12.04

    - by tripkane
    I have read many other posts on this issue and believe that Ubuntu 12.04 is not even recognizing my SD card reader as just that:

        Computer Model: Metabox (Australian builder of Clevo laptops) / Clevo P150EM
        OS: Ubuntu 12.04 (64 bit)
        CPU: Intel(R) Core(TM) i7-3720QM CPU @ 2.60GHz
        HD: 120GB Intel 550/520MB/s SSD

    According to the people who built my computer, the specs of the SD card reader are as follows:

        Manufacturer: Realtek Semiconductor Corp.
        Location: PCI bus 3
        Hardware ID: PCI\Ven_10EC&DEV_5289&SUBSYS_51051558
        Physical device object name: \Device\NTPNP_PCI0015

    Here are the relevant outputs of the following commands run from the terminal. sudo lshw:

        *-generic UNCLAIMED
             description: Unassigned class
             product: Realtek Semiconductor Co., Ltd.
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:03:00.0
             version: 01
             width: 32 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix vpd bus_master cap_list
             configuration: latency=0
             resources: memory:f6a00000-f6a0ffff

    sudo lspci -v -nn:

        03:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. Device [10ec:5289] (rev 01)
            Subsystem: CLEVO/KAPOK Computer Device [1558:5105]
            Flags: bus master, fast devsel, latency 0, IRQ 4
            Memory at f6a00000 (32-bit, non-prefetchable) [size=64K]
            Capabilities: [40] Power Management version 3
            Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
            Capabilities: [70] Express Endpoint, MSI 00
            Capabilities: [b0] MSI-X: Enable- Count=1 Masked-
            Capabilities: [d0] Vital Product Data
            Capabilities: [100] Advanced Error Reporting
            Capabilities: [140] Virtual Channel
            Capabilities: [160] Device Serial Number 00-00-00-00-00-00-00-00

    Do the "UNCLAIMED"/"Unassigned" details in these outputs mean that Ubuntu doesn't know that the SD card reader is one, and what to do with it? And if so, how should I go about fixing it?? Cheers ;)

    Read the article

  • Windows Azure ASP.NET MVC 2 Role with Silverlight

    - by GeekAgilistMercenary
    I was working through some scenarios recently with Azure and Silverlight, and I immediately decided a quick walkthrough for setting up a Silverlight application running in an ASP.NET MVC 2 application would be a cool project. For this walkthrough I have Visual Studio 2010, Silverlight 4, and the Azure SDK all installed. If you need to download any of those, go get 'em now.

    Launch Visual Studio 2010 and start a new project. Click on the section for cloud templates as shown below. After you name the project, the dialog for what type of Windows Azure Cloud Service Role will display. I selected ASP.NET MVC 2 Web Role, which adds the MvcWebRole1 project to the cloud service solution. Since I selected the ASP.NET MVC 2 project type, it immediately prompts for a unit test project. Because I just want to get everything running first, I will probably be unit testing the Silverlight and just using the MVC project as a host for the Silverlight for now, and because I would prefer to just add the unit test project later, I am going to select no here.

    Once you've created the ASP.NET MVC 2 project to host the Silverlight, create another new project. Select the Silverlight section under the Installed Templates in the Add New Project dialog, then select Silverlight Application. The next dialog that comes up will inquire about using the existing ASP.NET MVC application I just created; I do want it to use that, so I leave it checked. In the options section, however, I do not want RIA Web Services checked, I do not want a test page added to the project, and I do want Silverlight debugging enabled, so I leave that checked. Once those options are appropriately set, just click on OK and the Silverlight project will be added to the overall solution.

    The next steps are to get the Silverlight object appropriately embedded in the web page. First open up the Site.Master file in the ASP.NET MVC 2 project, located under Views/Shared/. After you open the file, review the content of the <head></head> section. In that section add another <asp:ContentPlaceHolder> tag as shown in the code snippet below.

        <head runat="server">
            <title>
                <asp:ContentPlaceHolder ID="TitleContent" runat="server" />
            </title>
            <link href="../../Content/Site.css" rel="stylesheet" type="text/css" />
            <asp:ContentPlaceHolder ID="HeaderContent" runat="server" />
        </head>

    I usually put it toward the bottom of the head section; it just seems the <title></title> should be at the top, and I like to keep it that way.

    Now open up the Index.aspx page under the ASP.NET MVC 2 project, located in the Views/Home/ directory. When you open up that file, add an <asp:Content></asp:Content> tag as shown in the next snippet.

        <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
            Home Page
        </asp:Content>

        <asp:Content ID="headerContent" ContentPlaceHolderID="HeaderContent" runat="server">
        </asp:Content>

        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <h2><%= Html.Encode(ViewData["Message"]) %></h2>
            <p>
                To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.
            </p>
        </asp:Content>

    In that center tag, I am now going to add what is needed to appropriately embed the Silverlight object into the page. The first thing I need is a reference to the Silverlight.js file.

        <script type="text/javascript" src="Silverlight.js"></script>

    After that comes a bit of nitty-gritty JavaScript. I create another script tag with the error handler function (and for those in the know, this is exactly like the generated code that is dumped into the *.html page generated with any Silverlight project if you select to "add a test page that references the application"). The complete JavaScript is below.

        function onSilverlightError(sender, args) {
            var appSource = "";
            if (sender != null && sender != 0) {
                appSource = sender.getHost().Source;
            }

            var errorType = args.ErrorType;
            var iErrorCode = args.ErrorCode;

            if (errorType == "ImageError" || errorType == "MediaError") {
                return;
            }

            var errMsg = "Unhandled Error in Silverlight Application " + appSource + "\n";

            errMsg += "Code: " + iErrorCode + " \n";
            errMsg += "Category: " + errorType + " \n";
            errMsg += "Message: " + args.ErrorMessage + " \n";

            if (errorType == "ParserError") {
                errMsg += "File: " + args.xamlFile + " \n";
                errMsg += "Line: " + args.lineNumber + " \n";
                errMsg += "Position: " + args.charPosition + " \n";
            } else if (errorType == "RuntimeError") {
                if (args.lineNumber != 0) {
                    errMsg += "Line: " + args.lineNumber + " \n";
                    errMsg += "Position: " + args.charPosition + " \n";
                }
                errMsg += "MethodName: " + args.methodName + " \n";
            }

            throw new Error(errMsg);
        }

    I literally, since it seems to work fine, just use what is populated in the automatically generated page. After getting the appropriate JavaScript into place, I put the actual Silverlight object embed code into the HTML itself. Just so I know the positioning, and for final verification when running the application, I insert the embed code just below the Index.aspx page message, as shown below.

        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <h2><%= Html.Encode(ViewData["Message"]) %></h2>
            <p>
                To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.
            </p>
            <div id="silverlightControlHost">
                <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%">
                    <param name="source" value="ClientBin/CloudySilverlight.xap" />
                    <param name="onError" value="onSilverlightError" />
                    <param name="background" value="white" />
                    <param name="minRuntimeVersion" value="4.0.50401.0" />
                    <param name="autoUpgrade" value="true" />
                    <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50401.0" style="text-decoration: none">
                        <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style: none" />
                    </a>
                </object>
                <iframe id="_sl_historyFrame" style="visibility: hidden; height: 0px; width: 0px; border: 0px"></iframe>
            </div>
        </asp:Content>

    I then open up the Silverlight project's MainPage.xaml. Just to make it visibly obvious that the Silverlight application is running in the page, I added a button as shown below.

        <UserControl x:Class="CloudySilverlight.MainPage"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            mc:Ignorable="d"
            d:DesignHeight="300" d:DesignWidth="400">

            <Grid x:Name="LayoutRoot" Background="White">
                <Button Content="Button" Height="23" HorizontalAlignment="Left" Margin="48,40,0,0" Name="button1" VerticalAlignment="Top" Width="75" Click="button1_Click" />
            </Grid>
        </UserControl>

    Just for kicks, I added a message box that pops up, just to show executing functionality as well.

        private void button1_Click(object sender, RoutedEventArgs e)
        {
            MessageBox.Show("It runs in the cloud!");
        }

    I then executed the ASP.NET MVC 2 application and could see the Silverlight application in the page. With a quick click of the button, I got a message box. Success!

    The next step is getting the ASP.NET MVC 2 project and Silverlight published to the cloud. As of Visual Studio 2010, Silverlight 4, and the latest Azure SDK, this is actually a ridiculously easy process. Navigate to the Azure Cloud Services web site. Once that is open, go back into Visual Studio, right-click on the cloud project, and select Publish. This publishes two files into a directory. Copy that directory path so you can easily paste it into the Azure Cloud Services web site. You'll have to click on the application role in the cloud (I will have another blog entry soon about where, how, and best practices in the cloud).

    In the text boxes shown, select the application package file and the configuration file and place them in the appropriate text boxes. This is the part where it comes in handy to have copied the directory path of the file location: when you click on browse you can just paste that in, then hit enter, and the two files will be listed so you can select the appropriate one. Once that is done, name the service deployment, then click on Publish. After a minute or so you will see the following screen.

    Now click on Run. Once the MvcWebRole1 goes green (the little light symbol to the left of the status), click on the Web Site URL. Be patient during this process too; it could take a minute or two. The Silverlight application should again come up just like when you ran it on your local machine. Once staging is up and running, click on the circular icon with two arrows to move staging to production. Once you are done, make sure the green light is again go for the production deployment, then click on the Web Site URL to verify the site is working. At this point I had a successful development, staging, and production deployment.

    Thanks for reading, hope this was helpful. I have more Windows Azure and other cloud related material coming, so stay tuned. Original Entry

    Read the article

  • Checking if your SIMPLE databases need a log backup

    - by Fatherjack
    Hopefully you have read the blog by William Durkin explaining why your SIMPLE databases need a log backup in some cases. There is a SQL Server bug that means in some cases databases are marked as being in SIMPLE recovery but have a log wait type that shows they are not properly configured. Please read his blog for the full explanation and a great description of how to reproduce the issue. As part of our work to recover our affected databases (William happens to be my boss), I wrote this small PowerShell script to quickly check our servers for databases that need the attention William details:

        # Assumes the SQL Server SMO assemblies are already loaded
        # (e.g. via the sqlps module), as in the original environment.
        cls
        $Servers = "Server01", "Server02", "etc", "etc"
        foreach ($ServerName in $Servers) {
            Write-Host "************" $ServerName "****************"
            $Server = New-Object Microsoft.SqlServer.Management.Smo.Server $ServerName
            foreach ($db in $Server.Databases) {
                $db | Where-Object { $_.RecoveryModel -eq "Simple" -and $_.LogReuseWaitStatus -ne "Nothing" } |
                    Select-Object Name, LogReuseWaitStatus
            }
        }

    If you get any results from this query then you should consult William's blog for the details on what action you should take. This script can give out false positives in some circumstances, depending on how busy your databases are. Hopefully this will let you check your servers quickly, and if you find any problems you can reference William's blog to understand what you need to do.

    Read the article

  • SharePoint Web Part Constructor Fires Twice When Adding it to the Page (and has a different security context)

    - by Damon
    We had some exciting times debugging an interesting issue with SharePoint 2007 Web Parts. We had some code in staging that had been running just fine for weeks and had not been touched or changed in about the same amount of time. However, when we tried to move the web part into a different staging environment, the part started throwing a security exception when we tried to add it to a page. After a bit of debugging, we determined that the web part was throwing the exception while trying to access the SPGroups property on the SharePoint site. This was pretty strange, because we were logged in as an admin and the code was working perfectly fine before. During the debugging process, however, we found out that the web part constructor was being fired twice. On one request, the security context did not seem to have everything it needed in order to run. On the other request, the security context was populated with the context of the user making the request (like it normally is). Moving the security code outside of the constructor seems to have fixed the issue. Why the discrepancy between the two staging environments? Turns out we deployed the part originally, then deployed an update with the security code. Since the part was never "added" to the page after the code updates were made (we just deployed a new assembly to make the updates), we never saw the problem. It seems as though the constructor fires twice when you are adding the web part to the page, and when you run the web part from the web part gallery. My only thought on why this would occur is that SharePoint is instantiating an instance to get some information from it, which is odd because you would think that could be done via reflection without requiring a new object. Anyway, the workaround is to just not put anything security related inside the constructor, or to do a good job accounting for the possibility of the security context not being present if you are adding the item to the page. Technorati Tags: SharePoint,.NET,Microsoft,ASP.NET
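    A minimal sketch of the workaround described above; the class name is illustrative and this is not the actual code from the post, just the shape of moving security-dependent work out of the constructor:

        using System;
        using System.Web.UI.WebControls.WebParts;
        using Microsoft.SharePoint;

        public class MyWebPart : WebPart
        {
            public MyWebPart()
            {
                // Keep the constructor free of security-dependent code:
                // it can run without a full user context while the part
                // is being added to the page or shown from the gallery.
            }

            protected override void OnInit(EventArgs e)
            {
                base.OnInit(e);

                // By now the request's security context is established,
                // so context-based lookups are safe here instead.
                SPWeb web = SPContext.Current.Web;
                SPGroupCollection groups = web.Groups; // the SPGroups-style access from the post
            }
        }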

    Read the article

  • Firefox 4 crashing in VNC

    - by MacThePenguin
    On my desktop PC I've been using Kubuntu 10.04 for about a year. Recently I updated Firefox to version 4 using aptitude and the mozilla-team/firefox-stable repository. Since then, I can't run it when I'm logged in through a VNC session: Firefox crashes immediately. When I try to run it from the console I get this error:

        ###!!! ABORT: X_ShmPutImage: BadShmSeg (invalid shared segment parameter); 3 requests ago: file /build/buildd/firefox-4.0.1+build1+nobinonly/build-tree/mozilla/toolkit/xre/nsX11ErrorHandler.cpp, line 203
        ###!!! ABORT: X_ShmPutImage: BadShmSeg (invalid shared segment parameter); 3 requests ago: file /build/buildd/firefox-4.0.1+build1+nobinonly/build-tree/mozilla/toolkit/xre/nsX11ErrorHandler.cpp, line 203

    Firefox works fine when I run it directly on the PC. Firefox 3.x also worked fine from a VNC session. I tried turning off hardware acceleration in the Firefox preferences, but that doesn't fix the problem. firefox --sync, firefox -safe-mode and firefox -ProfileManager also crash the same way. Any idea how to troubleshoot this? Thanks.

    Edit: additional info. I run VNC (RealVNC 4.1.1) from xinetd; this is the config I use:

        service Xvnc
        {
            type = UNLISTED
            disable = no
            socket_type = stream
            protocol = tcp
            wait = yes
            user = root
            server = /usr/bin/Xvnc4
            server_args = -inetd :1 -desktop vnc5901 -query localhost -geometry 1160x675 -depth 16 -once -DisconnectClients=0 -NeverShared passwordFile=/path/to/vnc/password -render
            port = 5901
        }

    Read the article

  • ArchBeat Link-o-Rama for December 5, 2012

    - by Bob Rhubart
    - On the Cultural-Linguistic Turn | Richard Veryard
      "When an architect chooses to label something as a 'silo' or 'legacy,' or uses words like 'integrated' and 'standardized,' these may not always be objectively verifiable categories but subjective judgements, around which the architect may then weave an appropriate story." -- Richard Veryard

    - Advanced Oracle SOA Suite presentations from Open World 2012 | Juergen Kress
      Oracle SOA and BPM Partner Community blogger Juergen Kress shares a list of 13 SOA presentations delivered or moderated by Oracle SOA Product Management at OOW12 in San Francisco.

    - Coherence 101, Beware of cache listeners | Alexey Ragozin
      Alexey Ragozin's technical post will help you avoid trouble when working with the cache events facility in Oracle Coherence.

    - 3 Key Cloud Insights for 2013 | CTO Blog
      Capgemini CTO blogger Ron Tolido highlights three "standout" insights from a recent Capgemini report on the business cloud.

    - Access Control Lists for Roles | Kyle Hatlestad
      Oracle Fusion Middleware A-Team member Kyle Hatlestad shares background info and instructions for activating access control lists for roles in Oracle WebCenter UCM 11g PS5.

    Thought for the Day

    "If it ain't broke, fix it anyway. You must invest at least 20% of your maintenance budget in refreshing your architecture to prevent good software from becoming spaghetti code." -- Larry Bernstein

    Source: SoftwareQuotes.com

    Read the article
