Search Results

Search found 2898 results on 116 pages for 'solid'.


  • How to start an LXDE session automatically after tightvncserver starts, so that I can see the desktop when connecting to the host via a VNC client?

    - by Oleksandr Dudchenko
    I have a system equipped with an Intel Celeron 1.1 GHz (socket 370) processor and 384 MB of RAM on an Intel d815egew motherboard, which supports the wake-on-LAN function. I want to use this PC to share an Internet connection with the local network; it is also the DHCP+DNS server as well as the router/gateway. Based on the above I decided to install Lubuntu, as it is a lightweight system. I installed Lubuntu 10.04.4 LTS from the alternate ISO. The system has no auto-login. It boots and has acceptable performance.

    The host PC has four onboard network adapters:

      - eth0 – Ethernet controller used for local network connections; has the static address 10.0.0.1
      - eth1 – Ethernet controller, not used and not configured so far; I plan to connect a printer here later on
      - eth2 – Ethernet controller used to connect to the Internet, which we plan to share with the local network
      - wlan0 – wireless controller; it acts as an access point for the local network and has the address 10.0.0.2

    We want to control our gateway remotely, so we need to be able to power it on remotely. To allow this I did the following. I created a new file:

      $ cd /etc/init.d/
      $ sudo vim wakeonlanconfig

    wrote the following lines into it, then saved and closed it:

      #!/bin/bash
      ethtool -s eth0 wol g
      ethtool -s eth2 wol g
      exit

    made the file executable:

      $ sudo chmod a+x wakeonlanconfig

    and included it in the autostart sequence during boot:

      $ sudo update-rc.d -f wakeonlanconfig defaults

    After a reboot we are able to power the system on remotely. Next, we need to be able to connect to the host via SSH and VNC, so I installed the following packages:

      $ sudo apt-get update
      $ sudo apt-get install openssh-server tightvncserver

    added the SSH daemon to the autostart sequence during boot:

      $ sudo update-rc.d -f ssh defaults

    and powered off the host PC:

      $ sudo halt

    Then I went to a remote location, sent the magic packet, and powered the host up. The system started, and I connected to the host via PuTTY from a remote system running Windows, logged in, and ran the command to start the VNC server:

      $ tightvncserver -geometry 800x600 -depth 16 :2

    The VNC server started successfully, and I got a message like the following:

      New 'X' desktop is gateway:2
      Starting applications specified in /home/dolv/.vnc/xstartup
      Log file is /home/dolv/.vnc/gateway:2.log

    Using the UltraVNC Viewer program under Windows, I connected to the host's VNC server and entered the password, and... saw only a mouse cursor in the form of a cross on a grey 800x600 background; no desktop. Here is my .vnc/xstartup file:

      #!/bin/sh
      xrdb $HOME/.Xresources
      xsetroot -solid grey
      #x-terminal-emulator -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
      #x-window-manager &
      # Fix to make GNOME work
      export XKL_XMODMAP_DISABLE=1
      /etc/X11/Xsession

    The question: what do I have to change, and where, to make an LXDE session start automatically after tightvncserver starts?
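    (A hedged sketch of the usual fix, assuming a stock Lubuntu 10.04 install where the startlxde launcher is present: replace the /etc/X11/Xsession line in ~/.vnc/xstartup so the VNC display runs an LXDE session instead of the default X session. Something like:)

      #!/bin/sh
      # ~/.vnc/xstartup - minimal sketch, not verified on this exact setup
      xrdb $HOME/.Xresources
      xsetroot -solid grey
      export XKL_XMODMAP_DISABLE=1
      # Start a full LXDE session on this VNC display
      # instead of handing off to /etc/X11/Xsession
      exec startlxde

    (After editing, kill the running server with "tightvncserver -kill :2" and start it again so the new xstartup takes effect.)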


  • Windows Desktop Virtualization Gets Easier

    - by andrewbrust
    This past Thursday, Microsoft announced that Windows (7) Virtual PC (WVPC) and its XP Mode feature would no longer require hardware-assisted virtualization (HAV). That means any PC running Windows 7 Pro, or higher, can now run this software. And that's a great thing because, as I noted in a post almost five months ago, determining whether a given PC you might be planning to buy actually offers HAV can be extremely difficult. That meant even dedicated, sophisticated PC users, with a budget for new hardware, might be blocked from using this technology. And that was just plain silly.

    One of the features offered by WVPC, and utilized heavily by XP Mode, is the concept of virtual applications: apps within a guest VM that can actually run within the host's desktop environment. I find this feature so powerful that my February Redmond Review column entertained the notion of a future version of Windows that runs all applications in this manner.

    The elimination of the HAV requirement for XP Mode and WVPC was just one of many virtualization-related announcements Microsoft made on Thursday. And, interestingly, most of the others were also desktop-related, rather than server-related. This is a welcome change from the multi-year period in which Microsoft enhanced its server virtualization lineup (in Hyper-V) and let the desktop platform fester. Microsoft now seems to understand that desktop virtualization is in high demand and strengthens the Windows franchise. As I explained in the column, even cloud computing can have a desktop spin if desktop virtualization is part of the equation.

    One company that knows this well is Citrix, and a closer alliance between Microsoft and Citrix was one of the many announcements from Thursday. In fact, there's a whole Web site dedicated to the alliance at http://www.citrixandmicrosoft.com/.

    I'd love to see virtual applications and entire virtual desktops offered as Azure-branded services. This could allow me to run, for example, the full Office client on a variety of desktops I might use, and for large organizations it could easily reduce the expense, burden, and duration of the deployment cycle for new versions of Office. Business Intelligence providers, including my own firm, twentysix New York, would find great relief in enabling their customers to run the newest version of Excel, with the latest BI capabilities, instead of having to wait the requisite two to three years it takes for many Fortune 500 customers to upgrade.

    Microsoft should do more, and faster. WVPC still does not support 64-bit guest images, even on 64-bit hosts. That needs to be fixed. File access from the guest to the host needs to be improved (right now, it's done through Terminal Services/Remote Desktop file sharing, and it's slow), and VM load times need to be significantly reduced before virtualized apps can become the norm. (I suppose the advance of solid-state drive technology will help there.)

    I do think these improvements will come, because Microsoft is focused on the virtual desktop now. And that's a smart focus to have.


  • Enterprise 2.0 Conference recap

    - by kellsey.ruppel
    We had a great week in Boston attending the Enterprise 2.0 Conference. We learned a lot from industry thought leaders and had a chance to speak with a lot of different folks about social and collaboration technologies and trends.

    Of all the conferences we attend, this one definitely has a different "feel". It seems like the attendees are younger, they dress hipper, and there is much more liveliness all around. A few of the sessions addressed this, as the "millennials", or Generation Y, have been using Web 2.0 tools such as Facebook and Twitter for many years now, and as they enter the workforce they expect similar tools to be part of how they accomplish their job tasks. It's important to note that it's not just millennials who expect these technologies; workers young and old alike benefit from social and collaboration tools.

    I've highlighted some of the takeaways I had, as well as a reaction from John Brunswick, who helped us in staffing the booth:

      - Giving your employees choices is empowering, but if there is no course of action or plan, it's useless. There is no such thing as collaboration without a goal.
      - In a few years, social will become a feature in the "platform", a component of collaboration. Social will become part of the norm; just as email is expected when you start a job at a company, social will be too.
      - 1 in 3 of your employees is using tools your company doesn't sanction (how scary is this?!).
      - 25,000 pieces of content are created every second.
      - Context is king. Social tools help us navigate and manage the complexities we face with information overload.
      - We need to design products for the way people work.
      - Consumerization of the enterprise: bringing social tools like Facebook to the organization.

    From John Brunswick: "The conference had solid attendance, standing as a testament to organizations making a concerted effort to understand what social tools exist to support their businesses. Many vendors were narrowly focused, and people were pleasantly surprised at the breadth of capability provided by Oracle WebCenter. People seemed to feel that it just made sense that social technology provides the most benefit when presented in the context of key business data."

    Did you attend the conference? What were some of your key takeaways?


  • Career Advice: finding challenging work in software and web development

    - by dianovich
    Having left my physics degree early, I started out in the realm of web design / front-end web development and was able to get work quite quickly. I moved on to spend a chunk of my time on servers and gained experience with frameworks like WordPress and Drupal, then the likes of CodeIgniter and CakePHP, and became comfortable in Debian-based and RHEL/CentOS environments. I ventured into iOS development and published a couple of native apps to the App Store too! I have started to spend a good deal of my time writing Python and have invested a little time in Django.

    The problem is, I still spend a fair chunk of my time in my job doing more front-end web development: writing markup and CSS for site themes, design-led JavaScript, and small applications for which application architecture and software engineering are relatively unimportant or too time-consuming to invest in. What I want to do is really exercise the systematic/logical portion of my brain and tackle challenging problems on a daily basis. I want to have to care about big-O running times, modularity in software, DRY, performance tuning, and development methodologies. I want to work for a firm whose clients say: "Yes, these things are important to us and we'll pay you to get them right."

    But it is difficult: I have no formal training and am potentially becoming a jack of all trades. Not that being a jack of many trades is necessarily a bad thing, but the scope of work I find myself involved in is far too broad. And there are only so many hours in a day outside of work!

    My question is: where do I go from here? I am starting to work on a few open-source projects and have started to publish content to my blog. But this isn't likely to make it past the recruitment consultants and HR departments of many a firm. And I do not, for example, work in a team that practices agile methodologies, so how do I get work in such a team to gain experience? While I have been responsible for introducing version control and some solid working practices into our current environment, there is only so far I can go in this context.

    What would convince you that I'm worth taking a risk on? What would convince you that I'd catch up with the other guys in your employ in next to no time?


  • Getting away from a customized Magento 1.4 installation - Magento 1.6, OpenCart, or others?

    - by Phil
    I'm dealing with a Magento 1.4.0.0 Community Edition installation with various undocumented changes to the core (mostly integration with an ERP system), an outdated Sweet Tooth Points & Rewards module, and some custom payment providers. It also doubles as a mediocre blogging/CMS system. It has one store for each of 3 different languages, with about 40 product categories for a few hundred products.

    [rant] With no prior experience with any PHP e-commerce systems, I find it very difficult to work with. I attempted to install Magento 1.4.0.0 on my local WAMP dev machine; it installs fine, but the main page and search do not show any products no matter what I do in the backend admin panel. I don't know what's wrong with it, and whatever information I googled is either too old or too new for Magento 1.4. Later I was given FTP access to the testing server, on which neither my manager nor I have permission to install XDebug, as apparently it runs on the same machine as the production server (yikes). Trying to learn how Magento works is torture. I spent a week trying to add some fields to the Onepage Checkout before giving up and going to work on something else. The template system, just like the rest of Magento, is a bloated mishmash of overcomplicated directory structures, weird config XML files, and EAV databases. I went into 6 different models and several content blocks in the backend just to change what the front page looks like. With little helpful and clear documentation (unlike CodeIgniter) and various breaking changes between minor point revisions, which make it hard to find useful information, Magento 1.4 is a developer killer. [/rant]

    The client is planning to redesign the site and has decided it might as well move on from this unsustainable, hacky, upgrade-unfriendly, developer-unfriendly mess. Magento 1.4 is starting to show its age; with Magento 1.7 coming soon, the client is considering upgrading to Magento 1.6 or 1.7 if it has improved since 1.4. The customizations done to the current Magento 1.4 installation will have to be redone, and a new license for the Sweet Tooth Points & Rewards module will have to be bought.

    The client is also open to other e-commerce systems. I've looked at OpenCart, and it seems to be quite developer-friendly with a fairly simple structure. I found some complaints regarding its performance when the shop has thousands of categories or products, but this is not an issue with the current number of products my client has. It seems to be solid ground for easily porting over the rewards system and ERP integration.

    What should the client upgrade to in this case?


  • What type of interview questions should you ask for "legacy" programmers?

    - by Marcus Swope
    We have recently been receiving lots of applicants for our open developer positions from people whom I like to refer to as "legacy" programmers. I don't like the term "old" because it seems a little prejudiced (especially to HR!) and it doesn't accurately reflect what I mean. We are a company that does primarily .NET development using TDD in an Agile environment; we use Git as a source control system; we make heavy use of OSS tools and projects, and we contribute to them as well; and we have a strong bias toward adhering to strong object-oriented principles, SOLID, etc.

    Now, the normal list of questions that we ask doesn't really seem to apply to applicants who are fresh out of school, nor does it seem to apply to these "legacy" programmers. Here is how I (loosely) define a "legacy" programmer:

      - Spent a significant amount of their career working almost exclusively with assembly/machine languages.
      - Primary accomplishments include work done with TANDEM systems.
      - Has extensive experience with technologies like FoxPro and ColdFusion.

    It's not that we somehow think that what we do is "better" than what they do; on the contrary, we respect these types of applicants, and we are scared that we may be missing a good candidate. It is just very difficult to get a good read on someone who is essentially speaking a different language than you. To someone like this, it seems a little strange to ask a question like: "What is the difference between an abstract class and an interface?" I would think they would almost never know the answer, or even what I'm talking about. However, I don't want to eliminate someone who could be a very good candidate in their own right and could eventually learn the stuff that we do. But I also don't want to just ask a bunch of behavioral questions, because I want to know about their technical background as well.

    Am I being too naive? Should "legacy" programmers like this already know about things like TDD, source control strategies, and best practices for object-oriented programming? If not, what questions should we ask to get a good sense of whether they are still able to learn them and keep up in our fast-paced environment?

    EDIT: I'm not concerned with whether applicants who meet these criteria are in general capable or incapable, as I have already stated that I believe they can be 100% capable. I am more interested in figuring out how to evaluate their talents, as I am having a hard time figuring out how to determine whether they are an A+ "legacy" programmer or a D- "legacy" programmer. I've worked with both.
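    (For what it's worth, the abstract-class-versus-interface question mentioned above boils down to something like the following minimal C# sketch; the type names are made up purely for illustration:)

      // An interface declares a contract only: no state and, in the C# of this
      // era, no implementation.
      public interface IExporter
      {
          void Export(string path);
      }

      // An abstract class can mix the contract with shared state and a partial
      // implementation. A class can implement many interfaces but inherit from
      // only one base class.
      public abstract class ExporterBase : IExporter
      {
          protected int RecordsWritten;   // shared state available to subclasses

          public void Export(string path)
          {
              WriteRecords(path);         // common skeleton...
              RecordsWritten++;
          }

          protected abstract void WriteRecords(string path);  // ...one step left abstract
      }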


  • How to implement efficient Fog of War?

    - by Cambrano
    I've asked a question about how to implement Fog of War (FOW) with shaders, and I've got it working. I use the vertex color to define the alpha of each vertex. I guess most of you know what the FOW of Age of Empires was like, but I'll explain it briefly anyway: You have a map. Everything is unexplored (solid black / 100% transparency) at the beginning. When your NPCs / other game units explore the world (mostly by moving around), they unshadow the map. That means everything in a specific radius (view range) around an NPC is visible (0% transparency). Anything that is out of view range but already explored is visible but shadowed (50% transparency).

    So yeah, AoE had relatively huge maps, and its hardware requirements were something around 100 MHz, so it should be relatively easy to implement something to solve this problem - actually.

    Okay. Currently I'm adding planes above my world and setting the color per vertex. Why do I use many planes? Unity has a vertex limit of 65,000 per mesh. Given the size of my tiles and the size of my map, I need more than one plane; in fact, I need a lot of planes. This is obviously a pain for my FPS. So my question is: what are simple (in the sense of performance) techniques to implement a FOW shader?

    Here is some simplified code showing what I'm doing so far:

      // Setup
      for (int x = 0; x < (Map.Dimension/planeSize); x++) {
          for (int z = 0; z < (Map.Dimension/planeSize); z++) {
              CreateMeshAt(x*planeSize, 3, z*planeSize);
          }
      }

      // Explore (called from NPCs when walking, for example)
      for (int x = ((int) from.x - radius); x < from.x + radius; x++) {
          for (int z = ((int) from.z - radius); z < from.z + radius; z++) {
              if (from.Distance(x, 1, z) > radius) continue;
              _transparency[x/tileSize, z/tileSize] = 0.5f;
          }
      }

      // Update
      foreach (GameObject plane in planes) {
          foreach (Vector3 vertex in vertices) {
              Vector3 worldPos = GetWorldPos(vertex);
              vertex.Color = new Color(0, 0, 0, _transparency[worldPos.x/tileSize, worldPos.z/tileSize]);
          }
      }

    My shader currently just takes the transparency of the vertex from the vertex color channel.
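    (One common alternative to per-vertex colors, offered here as a hedged sketch and not the asker's code: keep the fog in a single low-resolution texture stretched over one full-map quad and let that quad's shader sample it, so exploring only touches pixels rather than thousands of vertices. Assuming Unity's standard Texture2D API and one pixel per tile, bounds checks omitted:)

      // Minimal sketch: one fog texture for the whole map, one plane to draw it.
      // Bilinear filtering smooths the tile edges for free.
      Texture2D fogTex = new Texture2D(Map.Dimension / tileSize, Map.Dimension / tileSize,
                                       TextureFormat.Alpha8, false);

      void Explore(Vector3 from, int radius)
      {
          int cx = (int)from.x / tileSize, cz = (int)from.z / tileSize, r = radius / tileSize;
          for (int x = cx - r; x <= cx + r; x++)
              for (int z = cz - r; z <= cz + r; z++)
              {
                  if ((x - cx) * (x - cx) + (z - cz) * (z - cz) > r * r) continue;
                  // alpha 0 = fully visible; a separate pass can re-shadow
                  // tiles that fall out of view range back to 0.5
                  fogTex.SetPixel(x, z, new Color(0, 0, 0, 0f));
              }
          fogTex.Apply();   // one small GPU upload per explore step, no mesh rebuilds
      }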


  • Web.NET: A Brief Retrospective

    - by Chris Massey
    It's been several weeks since I had the pleasure of visiting Milan and joining 150 enthusiastic web developers for a day of server-side frameworks and JavaScript. Lucky for me, I keep good notes.

    Overall the day went smoothly, with some solid logistics, very attentive organizers, and an impressively diverse audience drawn by the fact that the event was ambitiously run in English. This was great in that it drew a truly pan-European audience (11 countries were represented on the day, and at least 1 visa had to be procured to get someone there!). It caused trouble because, in some cases, it pushed speakers outside their comfort zone. Thankfully, despite a slightly rocky start, every session I attended was very well presented, and the consensus on the day was that the speakers were excellent. While I felt that a lot of the speakers had more they wanted to cover, the topics were well chosen, every room constantly had a stack of people in it, and all the sessions were pleasingly focused on code & demos.

    For all that the language barriers occasionally made networking a little challenging, organizers Simone & Ugo nailed the logistics. Registration was slick, lunch was plentiful, and session management was great. The very generous Rui was kind enough to showcase a short video about Glimpse in his session, which seemed to go down well (although the audio in the rooms was a little underpowered). Because I think you might need a mid-week chuckle, here are some out-takes.

    And let's not forget the Hackathon. The idea was that, having just learned about a stack of interesting technologies, attendees could spend an evening (fuelled by pizza and some good GitHub beer) hacking something together using them. Unfortunately, after a (great) 10-hour day, and in many cases facing international travel in the morning, many of the attendees headed straight for their hotel rooms. This idea could work so beautifully, and I'm excited to see how it pans out in 2013.

    On top of the slick sessions, getting to finally meet Ugo and Simone in the flesh was a pleasure, as was the serendipitous introduction to the most excellent Rui. They're all fantastic guys who are passionate about the web, and I'm looking forward to finding opportunities to work with them. Simone & Ugo put on a great event, and I'm excited to see what they do next year.


  • WPF Control Toolkits Comparison for LOB Apps

    In preparation for a new WPF project, I've been researching options for WPF control toolkits. While we want a lot of the benefits of WPF, the application is a fairly typical line-of-business (LOB) application. So we're not focused on things like media and animations, but instead on a simple, solid, intuitive, and modern user interface that allows for well-architected separation of the business logic and presentation layers.

    While WPF is mature, it hasn't lived the long life that WinForms has yet, so there is still a lot of room for third-party and community control toolkits to fill the gaps between the controls that ship with the Framework. There are two such gaps I was concerned about. As this is an LOB app, we have needs for presenting lots of data, and not surprisingly much of it is in grid format, with the need for high performance, grouping, inline editing, aggregation, printing, and exporting - things that we've been doing with LOB apps for a long time. In addition, we want a dashboard style for the UI, in which the user can rearrange, shrink, and grow tiles that house the content and functionality.

    From a cost perspective, building these types of well-performing controls from scratch doesn't make sense. So I evaluated what you get from the .NET Framework along with a few different options for control toolkits. I tried to be fairly thorough, but know that this isn't a detailed benchmarking comparison or intense evaluation. It's just meant to be a feature-set comparison to be used when thinking about building an LOB app in WPF. I tried to list important feature differences and notes based on my experience with the trial versions and what I found in documentation, reference materials, and samples. I've also listed the importance of the controls based on how I think they are needed in LOB apps. There are several toolkits available, but given that I don't have unlimited time, I picked just a few. Maybe I'll add more later. The toolkits I compared are:

      - Telerik's RadControls for WPF, since I had heard some good things about Telerik
      - Infragistics NetAdvantage WPF, since both I and the customer have some experience with the vendor's tools
      - The WPF Toolkit on CodePlex, since many of my colleagues have used it
      - The Blacklight CodePlex project, which had WPF support for the Tile View control (with Release 4.3, WPF is not going to be supported in favor of focusing only on Silverlight controls, so I dropped that from the comparison)

    Click here to download the WPF Control Toolkits Comparison. Hopefully this helps someone out there. Feel free to post a comment on your experiences or if you think something I listed is incorrect or missing.


  • Ubuntu 10.10 forgets desktop theme.

    - by Marcelo Cantos
    (I posed this question on superuser.com and haven't received any answers or comments; then I came across this site, so my apologies to anyone who has seen this already.)

    I am running Ubuntu in VirtualBox (on a Windows 7 host). Several times now, the top-level menu bar, the task bar, and seemingly every system dialog have forgotten the out-of-the-box "Ambiance" theme they conformed to when I first installed the system. Window captions still preserve the theme, but pretty much nothing else does. I have searched high and low on Google for assistance with this problem. Everything I've found suggests either running some gconf reset or deleting .gconf*, .gnome*, and other similar directories. I have followed all this advice and nothing works. I still get a boring Windows-95-style gray 3D look and feel.

    On previous occasions, after much messing around I've given up and rebooted the VM instance, and been pleasantly surprised to see the original "Ambiance" theme restored throughout the UI, but invariably it disappears again some time later, usually after a reboot, so I can never figure out what I did that broke it. Here is a sample from Ubuntu's site of what I want it to look like. And here's a screenshot of my system as it currently looks. Also note that my GNOME Terminals normally have a nice purple semi-translucent look, and as can be seen from the screenshot, they are now just a solid matte white.

    This last time (just yesterday), trying numerous combinations of all the usual tricks and rebooting several times hasn't fixed it, so here I am on SU wondering: how do I recover the out-of-the-box theme for my GNOME/Ubuntu desktop, noting that blowing away all config files, as suggested in many places online, fails to achieve this?

    It might help to know that it seems to fail either after I resize the VM instance, forcing the Ubuntu desktop to resize itself, or after I play around with Compiz settings. I haven't been able to figure out which of these it is, and it could be neither. Given the amount of pain I have had to go through to get things back to normal (and given that I am at a loss as to how to do so), it has proven difficult to definitively isolate the cause.
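    (A hedged aside, not a confirmed diagnosis: these symptoms, widgets falling back to raw gray GTK defaults while window borders keep the theme, are classically what a crashed gnome-settings-daemon looks like on the GNOME 2.x stack that Ubuntu 10.10 ships. Before wiping more config directories, it may be worth checking whether the daemon is still running; the binary path below is the usual one for that release but may vary:)

      # Is the settings daemon alive in this session?
      pgrep -l gnome-settings-daemon

      # If nothing is printed, relaunch it; the theme should snap back at once.
      /usr/lib/gnome-settings-daemon/gnome-settings-daemon &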


  • Library to fake intermittent failures according to tester-defined policy?

    - by crosstalk
    I'm looking for a library that I can use to help mock a program component that works only intermittently - usually it works fine, but sometimes it fails. For example, suppose I need to read data from a file, and my program has to avoid crashing or hanging when a read fails due to a disk head crash. I'd like to model that by having a mock data-reader function that returns mock data 90% of the time, but hangs or returns garbage otherwise. Or, if I'm stress-testing my full program, I could turn on debugging code in my real data-reader module to make it return real data 90% of the time and hang otherwise.

    Now, obviously, in this particular example I could just code up my mock manually to test against a random() routine. However, I was looking for a system that allows implementing any failure policy I want, including:

      - Fail randomly 10% of the time
      - Succeed 10 times, fail 4 times, repeat
      - Fail semi-randomly, such that one failure tends to be followed by a burst of more failures
      - Any policy the tester wants to define

    Furthermore, I'd like to be able to change the failure policy at runtime, using either code internal to the program under test or external knobs or switches (though the latter can be implemented with the former). In pig-Java, I'd envision a FailureFaker interface like so:

      interface FailureFaker {
          /** Return true if and only if the mocked operation succeeded.
              Implementors should override this method with versions
              consistent with their failure policy. */
          public boolean attempt();
      }

    Each failure policy would be a class implementing FailureFaker; for example, there would be a PatternFailureFaker that would succeed N times, then fail M times, then repeat, and an AlwaysFailFailureFaker that I'd use temporarily when I need to simulate, say, someone removing the external hard drive my data was on. The policy could then be used (and changed) in my mock object code like so:

      class MyMockComponent {
          FailureFaker faker;

          public void doSomething() {
              if (faker.attempt()) {
                  // ...
              } else {
                  throw new RuntimeException();
              }
          }

          void setFailurePolicy(FailureFaker policy) {
              this.faker = policy;
          }
      }

    Now, this seems like something that would be part of a mocking library, so I wouldn't be surprised if it's been done before. (In fact, I got the idea from Steve Maguire's Writing Solid Code, where he discusses this exact idea on pages 228-231, saying that such facilities were common in Microsoft code of that early-90's era.) However, I'm only familiar with EasyMock and jMockit for Java, and neither AFAIK has this function, or something similar with different syntax.

    Hence, the question: Do such libraries as I've described above exist? If they do, where have you found them useful? If you haven't found them useful, why not?
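    (For illustration, the PatternFailureFaker sketched above is only a few lines in the same pig-Java style; this is a hypothetical implementation, and the field names are mine:)

      /** Succeeds N times, then fails M times, then repeats. */
      class PatternFailureFaker implements FailureFaker {
          private final int successes, failures;
          private int position = 0;

          PatternFailureFaker(int successes, int failures) {
              this.successes = successes;
              this.failures = failures;
          }

          @Override
          public boolean attempt() {
              boolean ok = position < successes;                  // first N calls succeed
              position = (position + 1) % (successes + failures); // then M fail, then wrap
              return ok;
          }
      }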


  • links for 2011-03-17

    - by Bob Rhubart
    - Siba Prasad: Oracle Database on Amazon RDS - Siba Prasad shares an analysis of the pros and cons. (tags: oracle database cloud amazon)
    - LIVE WEBCAST March 24, 2pm PT - Why Switch from Red Hat and SUSE Linux to Oracle Linux? (Oracle's Linux Blog) - Featuring Oracle's Monica Kumar, Sr. Director of Linux, Oracle VM and MySQL, and Avi Miller, Principal Sales Consultant, Linux and Virtualization. (tags: oracle linux)
    - Webcast: IBM SOA vs. Oracle SOA, March 24, 1pm ET / 10am PT - Maneesh Joshi and Bruce Tierney guide you to a solid understanding of the differences between the Oracle and IBM approach to comprehensive SOA. (tags: oracle soa bpm)
    - Finding the Right Solution to Source and Manage Your Contractors (PeopleSoft Apps Strategy) - "Talent has become a primary competitive advantage for most organizations. Contingent labor offers talent on flexible terms; it offers the ability to scale up operations, close skill gaps, and manage risk in the process of delivering services." - Mark Rosenberg (tags: oracle peoplesoft enterprisearchitecture)
    - Oracle Business Intelligence Customers: Have Your Voice Heard in the "2011 Wisdom of the Crowds Business Intelligence Market Survey" (BI & Analytics Pulse) - "The Wisdom of the Crowds survey combines social media, crowd sourcing, and good old fashioned market research to provide vendors and customers alike an unvarnished and insightful snapshot of what's top of mind with business intelligence professionals." (tags: oracle businessintelligence)
    - Martin Bach: Troubleshooting Grid Infrastructure startup - Martin Bach hunts down the problem that caused one of his blades to reboot after an EXT3 journal error. (tags: oracle grid rac)
    - Oracle WebCenter: Social Networking & Collaboration (Oracle Enterprise 2.0 Blog) - Kelley Ruppel with information on "how the new release of Oracle WebCenter provides unprecedented Social Networking and Collaboration." (tags: oracle webcenter enterprise2.0 collaboration)
    - VirtaThon: 100% Virtual Java/Oracle/MySQL Conference! | Bex Huff - "The goal is simple," says Oracle ACE Director Bex Huff. "Because it's all online, the conference is very cheap. Pricing is not yet announced... but it should be around $300. Also, unlike other conferences, every speaker gets paid a small fee depending on the popularity of his or her session." (tags: oracle oracleace java mysql)
    - Griffiths Waite Blog: BPM 11g PS3 - GW's Ian Heathcock shares a link to "a most interesting article on Oracle's recent release discussing the new features and how PS3 adds value to the whole SOA message." (tags: oracle soa)
    - The Buttso Blathers: Tutorial: JSF 2.0 and JPA 2.0 with WebLogic Server using NetBeans - Should you take application architecture advice from a man named Buttso? In this case, yes. (tags: oracle jsf jpa weblogic)
    - Setting-up a High Available Tuned SOA Environment - Middleware Magic (tags: ping.fm)
    - How to Configure Weblogic Messaging Bridge with JBoss - Middleware Magic (tags: ping.fm Weblogic JBoss)
    - Richard Veryard on Architecture: Emergent Architecture (tags: ping.fm entarch emergence)


  • What does it mean to treat data as an asset?

    What does it mean to treat data as an asset? When considering this concept, we must define what data is and how it can be considered an asset.

    Data can easily be defined as a collection of stored truths that are open to interpretation and manipulation. Expanding on this definition, data can be viewed as a set of captured facts, measurements, and ideas used to make decisions. Furthermore, InvestorsWords.com defines an asset as any item of economic value owned by an individual or corporation. Now let's apply this definition of asset to our definition of data and ask the following question: can facts, measurements, and ideas be items of economic value owned by an individual or corporation? The obvious answer is yes; data can be bought and sold like commodities, or analyzed to make smarter business decisions.

    We can look at the economic value of data in one of two ways. First, data can be sold as a commodity that can take the form of goods like eBooks, training, music, movies, and so on. Customers are willing to pay to gain access to this data for their consumption. This directly implies that there is economic value in data as a commodity, because customers see value in obtaining it. Second, data can be used to make smarter business decisions that allow companies to become more profitable and/or reduce their potential risk in regard to how they operate. In the past I have worked at companies where we had to analyze previous sales activities in conjunction with current activities to determine how the company was performing for the quarter. In addition, trends can be identified in existing data, allowing companies to forecast so that they can make strategic business decisions based on sound data.

    Companies that truly value their data are constantly trying to grow and upgrade their data and supporting applications, because it is the lifeblood of a company. If we look at an eBook retailer, for example, imagine if they lost all of their data. They would in essence be forced out of business, because they would have nothing to sell. In turn, if we look at a company that was using data to facilitate better decision-making processes and they lost all of their data, then they could be losing potential revenue and/or increasing the company's losses by making important business decisions virtually in the dark, compared to when those decisions were made on solid data.


  • What are the arguments against parsing the Cthulhu way?

    - by smarmy53
    I have been assigned the task of implementing a domain-specific language for a tool that may become quite important for the company. The language is simple but not trivial; it already allows nested loops, string concatenation, etc., and it is practically certain that other constructs will be added as the project advances.

    I know from experience that writing a lexer/parser by hand - unless the grammar is trivial - is a time-consuming and error-prone process. So I was left with two options: a parser generator à la yacc, or a combinator library like Parsec. The former was good as well, but I picked the latter for various reasons, and implemented the solution in a functional language. The result is pretty spectacular to my eyes: the code is very concise, elegant, and readable/fluent. I concede it may look a bit weird if you have never programmed in anything other than Java/C#, but then this would be true of anything not written in Java/C#.

    At some point, however, I was literally attacked by a co-worker. After a quick glance at my screen he declared that the code is incomprehensible and that I should not reinvent parsing but just use a stack and String.Split like everybody does. He made a lot of noise, and I could not convince him, partially because I was taken by surprise and had no clear explanation ready, and partially because his opinion was immutable (no pun intended). I even offered to explain the language to him, but to no avail.

    I'm positive the discussion is going to resurface in front of management, so I'm preparing some solid arguments. These are the first few reasons that come to my mind to avoid a String.Split-based solution:

      - you need lots of ifs to handle special cases, and things quickly spiral out of control
      - lots of hardcoded array indexes make maintenance painful
      - it is extremely difficult to handle things like a function call as a method argument (e.g. add((add a, b), c))
      - it is very difficult to provide meaningful error messages in case of syntax errors (which are very likely to happen)

    I'm all for simplicity, clarity, and avoiding unnecessarily smart-cryptic stuff, but I also believe it's a mistake to dumb down every part of the codebase so that even a burger flipper can understand it. It's the same argument I hear for not using interfaces, not adopting separation of concerns, copying and pasting code around, etc. A minimum of technical competence and willingness to learn is required to work on a software project, after all. (I won't use this argument, as it will probably sound offensive, and starting a war is not going to help anybody.)

    What are your favorite arguments against parsing the Cthulhu way?* (*Of course, if you can convince me he's right, I'll be perfectly happy as well.)
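    (If a concrete exhibit helps: the nested-call bullet above can be demonstrated in a couple of lines. A split-based "parser" keeps no record of matching parentheses. A hedged C# sketch, C# being implied by the String.Split suggestion:)

      // Naive splitting loses all nesting information:
      var input = "add((add a, b), c)";
      var parts = input.Split('(', ')', ',');
      // parts: ["add", "", "add a", " b", "", " c", ""]
      // Nothing records that "(add a, b)" was itself an argument of the outer
      // add. Recovering that structure means tracking bracket depth by hand,
      // i.e. writing the recursive-descent parser anyway, minus the grammar.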


  • Software monetization that is not evil

    - by t0x1n
    I have a free open-source project with around 800K downloads to date. I've been contacted by some monetization companies from time to time and turned them down, since I didn't want toolbar malware associated with my software. I was wondering, however: is there a non-evil way to monetize software? Here are the options as I know them:

      - Add a donation button. I don't feel comfortable with that, as I really don't need "donations" - I'm paid quite well. Donating users may feel entitled to support, etc. (see the second-to-last bullet).
      - Add ads inside your application. On the web that may be acceptable, but in a desktop program it looks incredibly lame.
      - Charge a small amount for each download. This model works well in the mobile world, but I suspect no one will go for it on the desktop. It doesn't mix well with open source, either, though I suppose I could charge only for the binaries (most users won't go to the hassle of compiling the sources). People may expect support, etc. after having explicitly paid (see next bullet).
      - Make money off a service / community / support associated with the program. This is one route I definitely don't want to take; I don't want any sort of hassle beyond coding. I assure you, the program is top notch (albeit simple), and I'm not aware of any bugs as of yet (there are support forums and blog comments where users may report them). It is also very simple, documented, and discoverable, so I do think I have a case for supplying it "as is".
      - Add affiliate suggestions to your installer. If you use a monetization company, you lose control over what they propose. Unless you can establish some sort of strong trust with the company to supply quality suggestions (I sincerely doubt it), I can't have that. Choosing your own affiliate (e.g. directly suggesting Google Toolbar) is possibly the only viable solution to my mind. The problem is, where do I find a solid affiliate that could actually give value to the user rather than infect his computer with crapware? I thought maybe Babylon (not the toolbar of course - I hate toolbars)?


  • Walmart's Mobile Self-Checkout

    - by David Dorf
    Reuters recently reported that Walmart was testing an iPhone-based self-checkout at a store near its headquarters. Consumers scan items as they're placed in the physical basket, then the virtual basket is transferred to an existing self-checkout station where payment is tendered. A very solid solution, but not exactly original.

    Before we go further, let's look at the possible cost savings for Walmart. According to the article: "Pushing more shoppers to scan their own items and make payments without the help of a cashier could save Wal-Mart millions of dollars, Chief Financial Officer Charles Holley said on March 7. The company spends about $12 million in cashier wages every second at its Walmart U.S. stores."

    Um, yeah. Using back-of-the-napkin math, I calculated Walmart's cashiers are making $157k per hour. A more accurate statement would be saving $12M per year for each second saved on the average transaction time. So if this self-checkout approach saves 2 seconds per transaction on average, Walmart would save $24M per year on labor. Maybe. Sometimes that savings will be used to do other tasks in the store, so it may not directly translate to fewer employees.

    When I saw this approach demonstrated in Sweden, there were a few differences, which may or may not be in Walmart's plans. First, the consumers were identified based on their loyalty card. In order to offset the inevitable shrink, retailers need to save on labor but also increase basket size, typically via in-aisle promotions. As consumers scan items, retailers should target promos, and that's easier to do if you know some shopping history. Last I checked, Walmart had no loyalty program.

    Second, at the self-checkout station consumers were randomly selected for an audit, in which they must re-scan all the items just as you do at a typical self-checkout. If you are found to be stealing, your ability to use the system can be revoked. That's a tough one in the US, especially when the system goes wrong, either by mistake or by lying. At least in my view, the Swedes are a bit more trustworthy than the people of Walmart.

    So while I think the idea of mobile self-checkout has merit, perhaps it's not right for Walmart.


  • Architects, Leadership, and Influence

    - by Bob Rhubart
    Technical expertise is a given for architects. In addition to solid development experience, extensive knowledge of technical trends, tools, standards, and methodologies (not to mention business acumen) provides the foundation for the decisions the architect must make in the effort to get all the pieces to work together. But even superior technical chops can't overcome a lack of leadership.

    Leadership is about influence: the ability to communicate effectively - to sell your ideas and defend your decisions in a manner that affects the decisions of the people around you. Leadership and influence are especially important in situations in which the architect may not have the authority to simply tell people what to do. And even when the architect has that kind of authority, influential leadership can mean the difference between gaining real buy-in and support from colleagues and stakeholders, and settling for their grudging acceptance (or worse). Guess which outcome is likely to produce the best results.

    In a previous post I presented some examples of the kind of criticism that is leveled at architects, a great deal of which can be attributed to a lack of leadership and influence on the part of the targets of that criticism. So it was serendipitous that I recently ran across a post on the Harvard Business Review blog written by Chris Musselwhite and Tammie Plouffe. That post, When Your Influence Is Ineffective, includes this:

    "[I]nfluence becomes ineffective when individuals become so focused on the desired outcome that they fail to fully consider the situation. While the influencer may still gain the short-term desired outcome, he or she can do long-term damage to personal effectiveness and the organization, as it creates an atmosphere of distrust where people stop listening, and the potential for innovation or progress is diminished."

    The need to "see the big picture" is a grossly reductive assessment of the architect's responsibilities - but that doesn't mean it's not true. That big-picture perspective must encompass both the technological elements of the architecture and the people responsible for implementing those technologies in compliance with the prescribed architecture. Technologies may be temperamental, but they don't have personalities or egos, and they are unlikely to carry a grudge - not yet, anyway (Hello, Skynet!). Effective leadership and the ability to influence people can help to ensure that all the pieces fit and that they work together, today and tomorrow.


  • Lubuntu wireless issue with Broadcom chipset

    - by Variant Web Solutions
    I'm a web dev that just started a new venture: buying wiped laptops in bulk and selling them at a low cost to lower-income families that want a laptop that will simply perform, not become virus-ridden, and not require constant maintenance. So naturally I opted for a Linux distro, and after some research, Lubuntu was my top pick. I'm not a stranger to Linux, as all of the servers for my web dev business are Linux, but I am new to L/Ubuntu, and I'm having issues with wifi (Broadcom chipsets) on the 20 or so Dells that I have right now: D800's, D810's, and E5400's. Not sure if you can point me in a solid direction; I've scoured (and implemented) the suggestions on Ask Ubuntu and am still coming up short. On one of the E5400's (though they all seem to suffer the same errors) I got the following:

      dell-latitude-e5400@dell-Latitude-E5400:~$ iwconfig
      lo        no wireless extensions.
      eth0      no wireless extensions.

      dell-latitude-e5400@dell-Latitude-E5400:~$ lspci
      00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07)
      00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
      00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
      00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02)
      00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02)
      00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02)
      00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 02)
      00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02)
      00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 02)
      00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 02)
      00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 02)
      00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 02)
      00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 02)
      00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 02)
      00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 02)
      00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 92)
      00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 02)
      00:1f.2 IDE interface: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 2 port SATA Controller [IDE mode] (rev 02)
      00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02)
      00:1f.5 IDE interface: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 2 port SATA Controller [IDE mode] (rev 02)
      02:01.0 CardBus bridge: Ricoh Co Ltd RL5c476 II (rev ba)
      02:01.1 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller (rev 04)
      02:01.2 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 21)
      09:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5761e Gigabit Ethernet PCIe (rev 10)

      dell-latitude-e5400@dell-Latitude-E5400:~$ rfkill
      Usage: rfkill [options] command
      Options:
          --version   show version (0.5-1ubuntu1 (Ubuntu))
      Commands:
          help
          event
          list [IDENTIFIER]
          block IDENTIFIER
          unblock IDENTIFIER
      where IDENTIFIER is the index no. of an rfkill switch or one of:
          all wifi wlan bluetooth uwb ultrawideband wimax wwan gps fm nfc

      dell-latitude-e5400@dell-Latitude-E5400:~$
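    (Not a definitive answer, but a hedged observation and starting point: the lspci output above shows only the BCM5761e wired NIC and no wireless device at all, so before chasing drivers it is worth confirming the card is present and not disabled by a hardware/BIOS kill switch, and then identifying the exact Broadcom chip, since that determines the package. The commands and package names below are the usual ones on Ubuntu/Lubuntu of that era; adjust for your release:)

      # Show every network device with its PCI IDs; look for a 14e4:xxxx wireless entry
      lspci -nn | grep -i net

      # Check for a hard or soft kill switch blocking the radio
      rfkill list all

      # For most BCM43xx chips, one of these supplies the driver/firmware
      sudo apt-get install firmware-b43-installer      # firmware for the open b43 driver
      sudo apt-get install bcmwl-kernel-source         # Broadcom's proprietary wl driver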


  • WebCenter Customer Spotlight: Guizhou Power Grid Company

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Guizhou Power Grid Company is responsible for power grid planning, construction, management, and power distribution in Guizhou Province, serving 39 million people. Guizhou has 49,823 employees and an annual revenue of over $5 billion. The business objectives were to consolidate information contained in disparate systems into a single knowledge repository and to provide a safe and efficient way for staff and managers to access, query, share, manage, and store business information. Guizhou Power Grid Company saved more than US$693,000 in storage costs, reduced average search times from 180 seconds to 5 seconds, and solved 80% to 90% of technology and maintenance issues by searching the Oracle WebCenter Content management system.

    Company Overview
    A wholly owned subsidiary of China Southern Power Grid Company Limited, Guizhou Power Grid Company is responsible for power grid planning, construction, management, and power distribution in Guizhou Province, serving 39 million people. Guizhou has 49,823 employees and an annual revenue of over $5 billion.

    Business Challenges
    The business objectives were to consolidate information contained in disparate systems, such as the customer relationship management and power grid management systems, into a single knowledge repository and to provide a safe and efficient way for staff and managers to access, query, share, manage, and store business information.

    Solution Deployed
    Guizhou Power Grid Company implemented Oracle WebCenter Content to build a content management system that enabled the secure, integrated management and storage of information such as documents, records, images, Web content, and digital assets. The content management solution was integrated with the power grid, customer service, maintenance, and other business systems, as well as the corporate Web site.

    Business Results
      - Saved more than US$693,000 in storage costs and shortened material distribution time by integrating the knowledge management solution with the power grid, customer service, maintenance, and other business systems, as well as the corporate Web site
      - Enabled staff to search 31,650 documents using catalogs, multidimensional attributes, and knowledge maps, reducing average search times from 180 seconds to 5 seconds and saving approximately 1,539 hours in annual search time
      - Gained comprehensive document management, format transformation, security, and auditing capabilities
      - Enabled users to upload new documents and supervisors to check the accuracy of these documents online, resulting in improved information quality control
      - Solved 80% to 90% of technology and maintenance issues by searching the Oracle content management system for information, ensuring IT staff can respond quickly to users' technical problems
      - Improved security by using role-based access controls to restrict access to confidential documents and information
      - Supported the efficient classification of corporate knowledge by using Oracle's metadata functions to collect, tag, and archive documents, images, Web content, and digital assets

    "We chose Oracle WebCenter Content, as it is an outstanding integrated content management platform. It has allowed us to establish a system to access, query, share, manage, and store our corporate assets. This has laid a solid foundation for Guizhou Power Grid Company to improve management practices." - Luo Sixi, Senior Information Consultant, Guizhou Power Grid Company

    Additional Information: Guizhou Power Grid Company Customer Snapshot | Oracle WebCenter Content


  • The Use-Case Driven Approach to Change Management

    - by Lauren Clark
    In the third entry of the series on OUM and PMI's Pulse of the Profession, we took a look at the continued importance of change management and risk management. The topic of change management and OUM's use-case-driven approach has come up in a few recent conversations, so I thought I would jot down a few thoughts on how the use-case-driven approach aids a project team in managing the project's scope.

    The use-case model is one of several tools in OUM that is used to establish and manage the project's scope. Because a use-case model can be understood by both business and IT project team members, it can serve as a bridge for ongoing collaboration as well as a visual diagram that encapsulates all agreed-upon functionality. This makes it a vital artifact in identifying changes to the project's scope. Here are some of the primary benefits of using the use-case model as part of the effort to establish and manage project scope:

      - The use-case model quickly communicates scope in a straightforward manner. All project stakeholders can have a common foundation for the decisions regarding architecture and design and how they relate to the project's objectives.
      - Once agreed upon, the model can be put under change control, and any updates to the model can then be quickly identified as potentially affecting the project's scope. Changes requested or discovered later in the project can be analyzed objectively for their impact on the project's budget, resources, and schedule.
      - A modular foundation for the design of the software solution can be established in Elaboration. This permits work to be divided up effectively and executed so that the most important and riskiest use cases can be tackled early in the project.
      - The use-case model helps the team make informed decisions about implementation priorities, which allows effective allocation of limited project resources. This is very helpful not only in managing scope, but in doing iterative and incremental planning, which relies heavily on the ability to identify project priorities.

    The bottom line is that the use-case model gives the project team a solid understanding of scope early in the project. Combine this understanding with effective project management and communication, and you have an effective tool for reducing the risk of overruns in budget and/or time due to out-of-control scope changes.

    Now that you've had a chance to read these thoughts on the use-case model and project scope, please let me know your feedback based on your experience.


  • My Automated NuGet Workflow

    - by Wes McClure
    When we develop libraries (whether internal or public), it helps to have a rapid way to make changes and test them in a consuming application.

    Building

    Set up the library with automatic versioning and a nuspec:

      - Set the library assembly version to auto-increment build and revision: in AssemblyInfo, use [assembly: AssemblyVersion("1.0.*")]. This auto-increments build and revision based on the time of the build.
      - Major & minor: major should be changed when you have breaking changes; minor should be changed once you have a solid new release. During development I don't increment these.
      - Create a nuspec and version it with the code: in the nuspec, set the version to <version>$version$</version>. This uses the assembly's version, which is auto-incrementing.

    Make changes to the code, then run the automated build (ruby/rake) with "rake nuget". The nuget task builds the NuGet package and copies it to a local NuGet feed. I use an environment variable to point at the feed so I can change it on a machine level! The nuget command below assumes a nuspec called Library.nuspec is checked in next to the csproj file:

      $projectSolution = 'src\\Library.sln'
      $nugetFeedPath = ENV["NuGetDevFeed"]

      msbuild :build => [:clean] do |msb|
        msb.properties :configuration => :Release
        msb.targets :Build
        msb.solution = $projectSolution
      end

      task :nuget => [:build] do
        sh "nuget pack src\\Library\\Library.csproj /OutputDirectory " + $nugetFeedPath
      end

    Set up the local NuGet feed as a NuGet package source (this is only required once per machine). Then go to the consuming project and update or install the package:

      Update-Package Library
      Install-Package Library

    TLDR: change library code; run "rake nuget"; run "Update-Package Library" in the consuming application; build and test! If you manually execute any of this process, especially copying files, you will find it a burden to develop the library and will find yourself dreading it, and even worse, making changes downstream instead of updating the shared library for everyone's sake.

    Publishing

    Once you have a set of changes that you want to release, consider versioning, and possibly increment the minor version if needed. Pick the package out of your local feed and copy it to a public / shared feed! I have a script to do this where I can drop the package onto a batch file. Replace apikey with your NuGet feed's API key, and take out the confirm(s) if you don't want them:

      @ECHO off
      echo Upload %1?
      set /P anykey="Hit enter to continue "
      nuget push %1 apikey
      set /P anykey="Done "

    Note: it helps to prune all the unnecessary versions from your local feed during testing, once you are done and ready to publish.

    TLDR: consider the version number; run the command to copy to the public feed.

    Read the article

  • How to resolve concurrent ramp collisions in 2d platformer?

    - by Shaun Inman
    A bit about the physics engine: bodies are all rectangles. Bodies are sorted at the beginning of every update loop based on the body-in-motion's horizontal and vertical velocity (to avoid sticky walls/floors). Solid bodies are resolved by testing the body-in-motion's new X with the old Y and adjusting if necessary before testing the new X with the new Y, again adjusting if necessary. Works great. Ramps (rectangles with a flag set indicating bottom-left, bottom-right, etc.) are resolved by calculating the ratio of penetration along the x-axis and setting a new Y accordingly (with some checks to make sure the body-in-motion isn't approaching from the tall or flat side, in which case the ramp is treated as a normal rectangle). This also works great. Side-by-side ramps, e.g. \/ and /\, work fine, but things get jittery and unpredictable when a top-down ramp is directly above a bottom-up ramp, e.g. < or >, or when a bottom-up ramp runs right up to the ceiling or a top-down ramp runs right down to the floor. I've been able to lock it down somewhat by detecting whether the body-in-motion hadFloor when also colliding with a top-down ramp, or hadCeiling when also colliding with a bottom-up ramp, then resolving by calculating the ratio of penetration along the y-axis and setting the new X accordingly (the opposite of the normal behavior). But as soon as the body-in-motion jumps, the hadFloor flag becomes false; the first ramp resolution pushes the body into collision with the second ramp, and collision resolution becomes jittery again for a few frames. I'm sure I'm making this more complicated than it needs to be. Can anyone recommend a good resource that outlines the best way to address this problem? (Please don't recommend I use something like Box2d or Chipmunk. Also, "redesign your levels" isn't an answer; the body-in-motion may at times be riding another body-in-motion, e.g. a platform, that pushes it into a ramp, so I'd like to be able to resolve this properly.) Thanks!
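    To make the described technique concrete, here is a minimal sketch in Python; it is not the asker's engine, and the Body/Ramp layout, the clamp helper, and the had_floor flag are all illustrative assumptions. It shows the x-penetration-ratio resolution for a bottom-up ramp and a hadFloor-style guard for the ceiling-ramp case:

        # Illustrative sketch only. Rectangles store x, y at the top-left
        # corner plus w, h. A "bottom-left" ramp's surface rises from its
        # bottom-left corner to its top-right corner; a "top-down" ramp is
        # a ceiling whose surface descends from left to right.

        def clamp01(t):
            return max(0.0, min(1.0, t))

        class Body:
            def __init__(self, x, y, w, h):
                self.x, self.y, self.w, self.h = x, y, w, h
                self.vy = 0.0
                self.had_floor = False  # true if the body stood on something last frame

        class Ramp:
            def __init__(self, x, y, w, h):
                self.x, self.y, self.w, self.h = x, y, w, h

        def resolve_bottom_left_ramp(body, ramp):
            # Ratio of penetration along the x-axis, measured at the body's
            # leading (right) edge and clamped to the ramp's width.
            t = clamp01((body.x + body.w - ramp.x) / ramp.w)
            # Height of the ramp surface at that ratio.
            surface_y = ramp.y + ramp.h - t * ramp.h
            if body.y + body.h > surface_y:   # feet are below the surface line
                body.y = surface_y - body.h   # set the new Y from the ratio
                body.vy = 0.0
                body.had_floor = True

        def resolve_top_down_ramp(body, ramp):
            # The wedge case: while the body had a floor last frame, resolve
            # horizontally so the ceiling ramp cannot shove the body back
            # down into the floor ramp within the same frame. (The question
            # derives the new X from the y-penetration ratio; a plain
            # horizontal push-out is used here for brevity.)
            if body.had_floor:
                body.x = ramp.x - body.w
                return
            t = clamp01((body.x + body.w - ramp.x) / ramp.w)
            surface_y = ramp.y + t * ramp.h   # ceiling descends left to right
            if body.y < surface_y:            # head poked above the ceiling line
                body.y = surface_y
                body.vy = 0.0

    The guard in resolve_top_down_ramp is the crux: while the flag holds, the ceiling ramp is resolved along the other axis, which is exactly the "opposite of the normal behavior" idea from the question.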

    Read the article

  • Writing or extending existing emacs packages: is it worth or should I move to Netbeans/Eclipse?

    - by Andrea
    I'm finishing my master's degree course in CS and I've almost become addicted to Emacs. I've used it to write in C, LaTeX, Java, JSP, XML, Common Lisp, Ada and other languages no other editor supported, like AMPL. I'd like to improve the packages I've been using the most or create new ones, but, in practice, I find that the implementation of Emacs leaves a lot to be desired. There are a lot of poorly-featured/poorly-maintained packages with either overlapping functionality or obscure incompatibilities, and Elisp just seems to foster the situation by lacking the common features modern Lisps have. In contrast, Eclipse and NetBeans are actively improved, and it does seem they can be effective for non-mainstream languages. I tried Hibachi for Ada in Eclipse and it worked well; there's Cusp for Lisp in Eclipse and LambdaBeans built using NetBeans components. On the other hand, those plugins seem to be less active than their Emacs counterparts; for example, Hibachi was archived last year. What's your opinion on this? Which editor should I write extensions for? EDIT: To answer Larry Coleman (see comment below): I like Emacs as a user because it is efficient both for me and the computer I'm using. It's fast, and the textual interface (i.e. the minibuffer) allows for quick interaction. It's solid, and packages are usually small and easy to manage. If I need to correct or remove something, I usually just have to change a row in my .emacs or an elisp file, or delete a directory. Eclipse plugins rely on a more complicated process that screwed up my Eclipse configuration a couple of times, forcing me to do a clean reinstall. Emacs works as long as I use the basic packages; if I need something more complicated, the situation gets pretty hairy. As a "power user" I think that the best I can hope for is to write a severely crippled version of the extensions I'd actually like to have; in other words, that it's not worth the trouble. I'd like to write extensions for the things I'd like to have automated in Emacs, for example project support with automated tag-table update on file writing. There are a few projects on this that lack integration, documentation, extensibility and so forth. The best one is probably CEDET, to which I believe Greenspun's tenth rule can be applied. EDIT: To comment on Larry Coleman's answer: I'm pretty sure I can pick up elisp programming, but the extensions I have in mind don't exist yet despite their relative simplicity and the effort more knowledgeable people have poured into related projects. This makes me wonder whether it is so because of the way Emacs is developed, i.e. people tend to write their own little extensions without coordination, or because of its implementation, its extension language not being able to keep up with the growing complexity.

    Read the article

  • Design pattern to handle queries using multiple models

    - by coderkane
    I am presented with a dilemma while trying to redesign the class structure of my PHP/MySQL application to make it more elegant and conform to the SOLID principles. The problem goes like this: let us assume there is an abstract class called Person which has certain properties to define a generic person, such as name, age, date of birth, etc. There are two classes, Student and Teacher, that implement this abstract class and add their own unique properties to it. I have designed all three classes to include all the operational logic (details of which are not relevant in the context of the question). Now I need to create views/reports/data grids which contain details from multiple classes; for example, a list of all students doing projects in Chemistry mentored by a teacher whose name is the parameter to the query. This is just one example of a view; there are many different views in the application, each of which uses data from 3-4 tables and takes multiple input parameters to generate. Considering this particular example, I have written the relevant query using JOIN, and the results are as expected and proper. Now here is the dilemma: keeping in mind the single responsibility principle, where should I keep this query? It does not belong to either the Student class or the Teacher class, or any other class currently present. a) Should I create a new class, say a dataView class, design it as an MVC pattern, and keep the query there? What about the other views? How do they fit into this architecture? b) Should I not keep the query in code at all and make it a DB view? c) Am I completely wrong in my approach? If so, what is the right approach? My considerations are as follows: a) it should be easy to add new views later on if a requirement comes up, without having to copy-paste-modify code; b) I would like to make it as loosely coupled as possible so that minor DB structure changes do not break it. I did Google searches on report design and OOP report generators, but all the results seem to focus on the visual design of the report rather than fetching the data. I have already taken care of the visual aspect of the report using MVC with HTML templates. I am sure this is a very fundamental problem with a known solution, but I am somehow not able to find it (maybe I am searching with the wrong keyword). Edit 1: Modified the title to make it more relevant. Edit 2: The accepted answer got me thinking in the right direction and helped me identify my design flaws, which eventually led me to find this question and the solution on Stack Overflow, which gave me the detailed answer to clear up the confusion.
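    For what it's worth, option (a) is a well-known shape: give each family of cross-model reports its own query/repository class, so the JOIN lives in one place and neither Student nor Teacher has to own it. Below is a minimal sketch of that idea, written in Python for brevity rather than the question's PHP, with all table names, columns and classes assumed for illustration:

        # Illustrative sketch: a read-only repository for reports that span
        # several models. Schema and names are assumptions, not the asker's.
        import sqlite3

        class StudentReportRepository:
            """Queries that span Student, Teacher and Project data."""

            def __init__(self, connection):
                self.connection = connection

            def students_mentored_by(self, teacher_name, subject):
                sql = """
                    SELECT s.name, p.title
                    FROM students s
                    JOIN projects p ON p.student_id = s.id
                    JOIN teachers t ON t.id = p.mentor_id
                    WHERE t.name = ? AND p.subject = ?
                """
                return self.connection.execute(sql, (teacher_name, subject)).fetchall()

        # Usage: the report layer depends only on the repository.
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE teachers (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE projects (id INTEGER PRIMARY KEY, title TEXT, subject TEXT,
                                   student_id INTEGER, mentor_id INTEGER);
        """)
        repo = StudentReportRepository(conn)
        rows = repo.students_mentored_by("A. Teacher", "Chemistry")  # [] until rows exist

    Each new report becomes another method (or another small repository), so adding views means adding query code rather than touching the models, and a schema change stays confined to the query layer.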

    Read the article

  • Oracle Social Network Developer Challenge: HarQen Nodal

    - by Kellsey Ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. We wrapped the Oracle Social Network Developer Challenge last week at OpenWorld, and this week, I'll be sharing all the entries. All the teams that entered our challenge did a ton of work and built really interesting integrations with Oracle Social Network, and I want to showcase their hard work and innovative ideas. Today, I give you Nodal from the HarQen (@harqen) team: Kris Gösser (@krisgosser), Jesse Vogt (@jesse_vogt) and Matt Stockton (@mstockton). The guys from HarQen built Nodal to provide a visual way to navigate your connections and conversations in Oracle Social Network and view relationships. Using Nodal, you can:
    - Search through names and profiles in Oracle Social Network.
    - Choose people and view their social graphs in a visually useful way.
    - Expand nodes in the social graph and add that person's social graph to the Nodal view for comparison.
    - Move nodes around and lock them in place for easier viewing, using a physics engine for movement.
    - Adjust the physics engine properties according to your viewing preferences.
    - Select nodes in the social graph and create a conversation directly based on the selection.
    Here are some shots of Nodal. They really don't do the physics engine justice, but maybe the guys at HarQen will post a video of what they did for your viewing pleasure. Nodal's visuals wowed the judges and the audience, and anyone with a decent-sized social network presence understands the need for good network visualization. Tools like Nodal allow you to discover hidden connections in your network, maximize the value of your weak ties, and find mavens, a very important key to getting work done. Thanks to the HarQen team for participating in our challenge. We hope they had a good experience. Look for the details of the other entries this week.

    Read the article

< Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >