Search Results

Search found 13351 results on 535 pages for 'standard edition'.


  • Unity Launcher only runs once - requires lightdm restart before it runs again

    - by Don
    I have an intermittent problem that just started showing up several days ago. I am running 11.10 and all updates are current. I first saw the symptom with a custom version of the "Home" nautilus-home.desktop file I created in ~/.local/share/applications. I added a few static shortcuts to specific folders. What I found was, clicking the icon once would open up my home folder, but after closing that nautilus window, clicking the icon again did nothing (it did not even show the icon backlight animation). However, I could right-click on the same icon and access my shortcuts as many times as I wanted. The symptom persisted until restarting lightdm. Just yesterday I saw the same sort of symptom happen with a custom launcher I created for chromium-browser to open a specific URL (with a few shortcuts to other URLs). Click the icon - it works once. Then never again. Right-click the icon and I can use the shortcuts over and over - no problem. Note - at one point I assumed I might have a problem with my custom .desktop file, so I did a test by removing my custom nautilus-home.desktop. However, even after restarting lightdm and verifying the home icon was the standard one from /opt/share/applications (all my custom shortcuts were gone), I saw the same symptom reappear - it runs once and then not again until restarting lightdm. It seems to be intermittent and seems to move between various launchers. Not sure what to do or even what background data to gather. Attempt to improve the question after the first answer: I tried the following: 1) remove all custom launchers, 2) reboot, 3) add custom launchers back, 4) reboot, 5) attempt to use them... still have the "runs once and never again" symptom with several launchers.

    Read the article

  • Biggest mistake you've ever made

    - by Rogue Coder
    Similar to the question I read on Server Fault: what is the biggest mistake you've ever made in an IT-related position? Some examples from friends: I needed to do some work on a production site so I decided to copy over the live database to the beta site. Pretty standard, but when I went to the beta site it was still pulling out-of-date info. OOPS! I had copied the beta database over to the live site! Thank god for backups. And for me, I created a form for an event that was to be held during a specific time range. Participants would fill out the form for a chance to win, and we would send the event organizers a CSV from the database. I went into the database and found ONLY 1 ENTRY, MINE. Upon investigating, it appears as though I forgot an auto-increment key, and because of the server setup there was no way to recover the lost data. I am aware this question is similar to ones on Stack Overflow, but the ones I found seemed to receive generic answers instead of actual stories :) What is the biggest coding error/mistake ever…

    Read the article

  • Standards Matter: The Battle For Interoperability Continues

    - by michael.rowell
    Great article, although it is a little dated at this point: the InformationWeek article Standards Matter: The Battle for Interoperability Goes On. Summary: If you're guilty of relegating standards support to a "nice to have" feature rather than a requirement, you're part of the problem. If you want products to interoperate, be prepared to walk away if a vendor can't prove compliance. Don't be brushed off with promises of standards support "on the road map." The alternative is vendor lock-in and higher costs, including the cost of maintaining systems that don't work together. Standards bodies are imperfect and must do better. The alternative: splintered networks and broken promises. The point: "The secret sauce to a successful 'working standard' isn't necessarily IETF or another longstanding body," says Jonathan Feldman, director of IT services for the city of Asheville, N.C., and an InformationWeek Analytics contributor. "Rather, an earnest and honest effort by a group that has governance outside of a single corporation's control is what's important." In order to have true interoperability, vendors as well as customers must be actively engaged in the standards process. Vendors must be willing to truly work together and not be protecting an existing product. Customers must also be willing to truly work together, not demanding a solution that meets only their own needs but one that meets the needs of all participants. Ultimately, customers must be willing to reward vendor compliance by requiring compliance in the products and services that they purchase and deploy. Managers that deploy systems without compliance to standards are only hurting themselves. Standards do matter. When developed openly and deployed compliantly, standards deliver interoperability, which provides solid business value.

    Read the article

  • The Fantastic New WebLogic on Oracle Database Appliance 2.9 Release is Here!

    - by JuergenKress
    Last week was a big day in virtualised ODA-land as it saw the launch of WebLogic on ODA 2.9. Admittedly it doesn't sound like a very exciting release but it is one that we at O-box have been looking forward to for quite some time. Let me explain why, then we'll look into the details... The ODA X4-2 has 48 Intel Xeon cores. That is a lot of compute power. Whilst the largest O-box SOA Appliance single environment configuration can in theory use all those cores (currently with 40 vCPU of SOA!), the vast majority of O-box users will want smaller configurations. Prior to 2.9 the Oracle WebLogic implementation only supported one domain per ODA, so the conundrum O-box development faced last year was either: offer customers only one SOA environment on their O-box for now (but have the benefit of a standard, easily supportable WebLogic installation), or build our own WebLogic/OTD OVM templates from scratch. One of our driving goals with O-box is to give the best possible experience and make the appliance as supportable as possible. Therefore we took the gamble that we would stick with Oracle's one-domain WebLogic configuration initially, and just hope that it would deliver multi-domain support for us in a timely manner (note: this is probably not a strategy that business textbooks would recommend!). Anyway, we've been working closely with Oracle Product Management for a few months now and I'm delighted to see 2.9 as the fruits of their labour. This also neatly ties in with several recent requests for O-box to include OSB as well as SOA/BPEL (which we have always wanted to have in separate domains). The diagram in the original post is the neatest way to summarise what the new 2.9 release will allow us to deliver - previously only one 3D box was possible. Read the complete article here. WebLogic Partner Community: for regular information become a member in the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: oBox, WebLogic on ODA, ODA, WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • ASMLib

    - by wcoekaer
    Oracle ASMLib on Linux has been a topic of discussion a number of times since it was released way back in 2004. There is a lot of confusion around it and certainly a lot of misinformation out there for no good reason. Let me try to give a bit of history around Oracle ASMLib. Oracle ASMLib was introduced at the time Oracle released Oracle Database 10g R1. 10gR1 introduced a very cool and important new feature called Oracle ASM (Automatic Storage Management). A very simplistic description would be that this is a very sophisticated volume manager for Oracle data. Give your devices directly to the ASM instance and we manage the storage for you: clustered, highly available, redundant, performant, etc... We recommend using Oracle ASM for all database deployments, single instance or clustered (RAC). The ASM instance manages the storage, and every Oracle server process opens and operates on the storage devices like it would open and operate on regular datafiles or raw devices. So by default, from 10gR1 up to today, we do not interact differently with ASM-managed block devices than we did before with a datafile being mapped to a raw device. All of this is without ASMLib, so ignore that one for now. Standard Oracle on any platform that we support (Linux, Windows, Solaris, AIX, ...) does it the exact same way. You start an ASM instance, it handles storage management, and all the database instances use and open that storage and read/write from/to it. There are no extra pieces of software needed, including on Linux. ASM is fully functional and self-contained without any other components.

    In order for the admin to provide a raw device to ASM or to the database, it has to have persistent device naming. If you booted up a server where a raw disk was named /dev/sdf and you gave it to ASM (or even just created a tablespace without ASM on that device with datafile '/dev/sdf'), and the next time you boot up that device is now /dev/sdg, you end up with an error. Just like you can't just change datafile names, you can't change device filenames without telling the database, or ASM. Persistent device naming on Linux, especially back in those days, was, to say it bluntly, a nightmare. In fact there were a number of issues (dating back to 2004):

    - Linux async IO wasn't pretty
    - persistent device naming, including permissions (devices had to be owned by oracle and the dba group), was very, very difficult to manage
    - system resource usage in terms of open file descriptors

    So given the above, we tried to find a way to make this easier on the admins, in many ways similar to why we started working on OCFS a few years earlier - how can we make life easier for the admins on Linux. A feature of Oracle ASM is the ability for third parties to write an extension using what's called ASMLib. It is possible for any third-party OS or storage vendor to write a library, using a specific Oracle-defined interface, that gets used by the ASM instance and by the database instance when available. This interface offered 2 components:

    - an IO interface - allow any IO to the devices to go through ASMLib
    - device discovery - implement an external way of discovering and labeling devices to provide to ASM and the Oracle database instance

    This is similar to a library that a number of companies have implemented over many years called libODM (Oracle Disk Manager).
    ODM was specified many years before we introduced ASM and allowed third-party vendors to implement their own IO routines so that the database would use this library, if installed, and make use of the library's open/read/write/close routines instead of the standard OS interfaces. PolyServe back in the day used this to optimize their storage solution, and Veritas used (and I believe still uses) this for their filesystem. It basically allowed, in particular, filesystem vendors to write libraries that could optimize access to their storage or filesystem. So ASMLib was not something new; it was basically based on the same model. You have libodm for just database access, you have libasm for ASM/database access. Since this library interface existed, we decided to do a reference implementation on Linux. We wrote an ASMLib for Linux that could be used on any Linux platform, and other vendors could see how this worked and potentially implement their own solution. As I mentioned earlier, ASMLib and ODMLib are libraries for third-party extensions. ASMLib for Linux, since it was a reference implementation, implemented both interfaces: the storage discovery part and the IO part. There are 2 components:

    - Oracle ASMLib - the userspace library with config tools (a shared object and some scripts)
    - oracleasm.ko - a kernel module that implements the asm device for /dev/oracleasm/*

    The userspace library is a binary-only module since it links with and contains Oracle header files, but it is generic; we only have one asm library for the various Linux platforms. This library is opened by Oracle ASM and by Oracle database processes, and it interacts with the OS through the asm device (/dev/asm). It can be installed on Oracle Linux, on SuSE SLES, on Red Hat RHEL... The library itself doesn't actually care much about the OS version; the kernel module and device do.

    The support tools are simple scripts that allow the admin to label devices and scan for disks and devices. This way you can, say, create an ASM disk label foo on what is currently /dev/sdf. So if /dev/sdf disappears and next time it is /dev/sdg, we just scan for the label foo, discover it as /dev/sdg, and life goes on without any worry. Also, when the database needs access to the device, we don't have to worry about file permissions or anything; it will be taken care of. So it's a convenience thing.

    The kernel module oracleasm.ko is a Linux kernel module/device driver. It implements a device /dev/oracleasm/*, and any and all IO goes through ASMLib - /dev/oracleasm. This kernel module is obviously a very specific Oracle-related device driver, but it was released under the GPL v2 so anyone could easily build it for their Linux distribution kernels. Advantages of using ASMLib:

    - a good async IO interface for the database; the entire IO interface is based on an optimal ASYNC model for performance
    - a single file descriptor per Oracle process, not one per device or datafile per process, reducing the overhead of open filehandles
    - device scanning and labeling built in, so you do not have to worry about messing with udev or devlabel, permissions, or the like, which can be very complex and error prone

    Just like with OCFS and OCFS2, each kernel version (major or minor) has to get a new version of the device drivers. We started out building the oracleasm kernel module rpms for many distributions: SLES (in fact in the early days even for this thing called United Linux) and RHEL.
    The driver didn't make sense to push into upstream Linux because it's unique and specific to the Oracle database. As it takes a huge effort in terms of build infrastructure, QA and release management to build kernel modules for every architecture, every Linux distribution and every major and minor version, we worked with the vendors to get them to add this tiny kernel module (a ~60k source code file) to their infrastructure. The folks at SuSE understood this was good for them, their customers and us, and added it to SLES. So every build coming from SuSE for SLES contains the oracleasm.ko module. We weren't as successful with other vendors, so for quite some time we continued to build it for RHEL, and of course, as we introduced Oracle Linux at the end of 2006, also for Oracle Linux. With Oracle Linux it became easy for us because we just added the code to our build system, and as we churned out Oracle Linux kernels, whether for a public release or for customers that needed a one-off fix where they also used ASMLib, we didn't have to do any extra work; it was all nicely integrated.

    With the introduction of Oracle Linux's Unbreakable Enterprise Kernel and our interest in being able to exploit ASMLib more, we started working on a very exciting project called Data Integrity. Oracle (Martin Petersen in particular) worked for many years with the T10 standards committee and storage vendors and implemented Linux kernel support for DIF/DIX, data protection in the Linux kernel - note to those that wonder, yes, it's all in mainline Linux and under the GPL. This basically gave us all the features in the Linux kernel to checksum a data block, send it to the storage adapter, which can then validate that block and checksum in firmware before it sends it over the wire to the storage array, which can then do another checksum, and on to the actual disk, which does a final validation before writing the block to the physical media. So what was missing was the ability for a userspace application (read: Oracle RDBMS) to write a block which then has checksumming and validation all the way down to the disk - application to disk. Because we have ASMLib we had an entry into the Linux kernel, and Martin added support in ASMLib (kernel driver + userspace) for this functionality. Now, this is all based on relatively current Linux kernels; the oracleasm kernel module depends on the main kernel to have support for it so we can make use of it. Thanks to UEK, and to us having the ability to ship a more modern, current version of the Linux kernel, we were able to introduce this feature into ASMLib for Linux from Oracle. This, combined with the fact that we build the asm kernel module when we build every single UEK kernel, allowed us to continue improving ASMLib and providing it to our customers.

    So today, we (Oracle) provide Oracle ASMLib for Oracle Linux, and in particular on the Unbreakable Enterprise Kernel. We did the build/testing/delivery of ASMLib for RHEL until RHEL5, but as of RHEL6 we decided that it was too much effort for us to also maintain all the build and test environments for RHEL; we did not have the ability to use the latest kernel features to introduce the Data Integrity features, and we didn't want to end up with multiple versions of ASMLib as maintained by us. SuSE SLES still builds and ships the oracleasm module and they do all the work, and Red Hat is certainly welcome to do the same. They don't have to rebuild the userspace library; it's really about the kernel module.
    And finally, to re-iterate a few important things:

    - Oracle ASM does not in any way require ASMLib to function completely. ASMLib is a small set of extensions, in particular to make device management easier, but there are no extra features exposed through Oracle ASM with ASMLib enabled or disabled. Often customers confuse ASMLib with ASM. Again, ASM exists on every Oracle-supported OS and on every supported Linux OS - SLES, RHEL, OL - without ASMLib.
    - The Oracle ASMLib userspace library is available from OTN, and the kernel module is shipped along with OL/UEK for every build and by SuSE for SLES for every one of their builds.
    - The ASMLib kernel module was built by us for RHEL4 and RHEL5, but we do not build it for RHEL6, nor for the OL6 RHCK kernel - only for UEK.
    - ASMLib for Linux is/was a reference implementation for any third-party vendor to be able to offer, if they want to, their own version for their own OS or storage.
    - ASMLib as provided by Oracle for Linux continues to be enhanced and evolve, and for the kernel module we use UEK as the base OS kernel.

    Hope this helps.

    Read the article

  • Socket.io v.9 with Actionscript

    - by funseiki
    I'm attempting to develop an online multiplayer game using Node.js for the server and Flash to display the client. I've been reading up a bit and have found quite a few recommendations for the socket.io library. I've also found a GitHub project which exposes code to help facilitate communication between an ActionScript 3.0 client and a server using socket.io. The project I mentioned is a bit dated and doesn't seem to have support for the latest version of socket.io, so I was wondering if leveraging this framework (socket.io, that is) would be the most ideal way to go. I have found a simple project that uses the standard 'net' module for Node.js, but because there are a few options available, I'm a little lost as to which one to go with. I'm currently leaning towards just using the regular 'net' module as it is already familiar to me. Since much of the client is already coded up, I'd really like not to switch over to using the HTML5 canvas just yet (but using socket.io would make a transition in the future more friendly, I think?). Any advice/direction on this matter would be much appreciated, though I do realize that there may be no one right answer. Edit: To be more specific, are there any client-side socket.io frameworks available that allow for communication between an ActionScript 3.0 client and a socket.io server and are robust enough to support current/future versions of socket.io? If not, what are the alternatives?

    Read the article

  • Configuring WS-Security with PeopleSoft Web Services

    - by Dave Bain
    I was speaking with a customer a few days ago about PeopleSoft Web Services. The customer created a web service, but when they went to deploy it they had so many problems configuring WS-Security that they pulled the service. They spent several days trying to get it working but never did, so they've put it on hold until they have time to work through the issues. Having gone through the process of configuring WS-Security myself, I understand the complexity. There is no magic 'easy' button to push. If you are not familiar with all the moving parts like policies, certificates, public and private keys, credential stores, and so on, it can be a daunting task. The PeopleBooks documentation is good but does not offer a step-by-step example to follow. Fear not: for those that want more help, there is a place to go. PeopleSoft released a Mobile Inventory Management application over a year ago. It is a mobile app built with Oracle Application Development Framework (ADF) that accesses PeopleSoft content through standard web services. Part of the installation of this app is configuring WS-Security for the web services used in the application. Appendix A of the PeopleSoft FSCM91 Mobile Inventory Management Installation Guide is called Configuring WS-Security for Mobile Inventory Management. It is a step-by-step guide to configuring WS-Security between a server running Oracle Web Services Manager (OWSM) and PeopleSoft Integration Broker. Your environment might be different, but the steps will be similar, and on the PeopleSoft side, Integration Broker will remain a constant. You can find the installation guide on Oracle Support. Sign in to https://support.us.oracle.com and search for document 1290972.1. Read through Appendix A for more details about how to set up WS-Security with PeopleSoft web services.

    Read the article

  • Finding a way to simplify complex queries on legacy application

    - by glenatron
    I am working with an existing application built on Rails 3.1/MySQL, with much of the work taking place in a JavaScript interface, although the actual platforms are not tremendously relevant here, except in that they give context. The application is powerful, handles a reasonable amount of data and works well. As the number of customers using it and the complexity of the projects they create increase, however, we are starting to run into a few performance problems. As far as I can tell, the source of these problems is that the data represents a tree and it is very hard for ActiveRecord to deterministically know what data it should be retrieving. My model has many relationships like this:

        Project
          has_many Nodes
          has_many GlobalConditions
        Node
          has_one Parent
          has_many Nodes
          has_many WeightingFactors through NodeFactors
          has_many Tags through NodeTags
        GlobalCondition
          has_many Nodes (referenced by Id, rather than replicating the tree)
        WeightingFactor
          has_many Nodes through NodeFactors
        Tag
          has_many Nodes through NodeTags

    The whole system has something in the region of thirty types which optionally hang off one or many nodes in the tree. My question is: what can I do to retrieve and construct this data faster? Having worked a lot with .NET, if I were in a similar situation there I would look at building a stored procedure to pull everything out of the database in one go, but I would prefer to keep my logic in the application, and from what I can tell it would be hard to take the queried data and build ActiveRecord objects from it without losing their integrity, which would cause more problems than it solves. It has also occurred to me that I could bunch the data up and send some of it across asynchronously, which would not improve performance but would improve the user's perception of performance. However, if sections of the data appeared after page load, that could also be quite confusing. I am wondering whether it would be a useful strategy to make everything aware of its parent project, so that one could pull all the records for that project and then build up the relationships later, but given the ubiquity of complex trees in day-to-day programming life I wouldn't be surprised if there were some better design patterns or standard approaches to this type of situation that I am not well versed in.

    Read the article

  • Do programmers need a union? [closed]

    - by James A. Rosen
    In light of the acrid responses to the intellectual property clause discussed in my previous question, I have to ask: why don't we have a programmers' union? There are many issues we face as employees, and we have very little ability to organize and negotiate. Could we band together with the writers', directors', or musicians' guilds, or are our needs unique? Has anyone ever tried to start one? If so, why did it fail? (Or, alternatively, why have I never heard of it, despite its success?) later: Keith has my idea basically right. I would also imagine the union being involved in many other topics, including:

    - legal liability for others' use/misuse of our work, especially unintended uses
    - evaluating the quality of computer science and software engineering higher education programs -- unlike many other engineering disciplines, we are not required to be certified on receiving our Bachelor's degrees
    - evangelism and outreach -- especially to elementary school students
    - certification -- not doing it, but working with companies like ISC(2) and others to make certifications meaningful and useful
    - continuing education -- similar to the previous item
    - conferences -- maintain a go-to list of organizers and other resources our members can use

    I would see it less so as a traditional trade union, with little emphasis on:

    - pay -- we tend to command fairly good salaries
    - outsourcing and free trade -- most of us tend to be pretty free-market oriented
    - working conditions -- we're the only industry where Aeron chairs are considered anything like "standard"

    Read the article

  • Would it be a good idea to work on letting people add arrays of numbers in javascript?

    - by OneThreeSeven
    I am a very mathematically oriented programmer, and I happen to be doing a lot of JavaScript these days. I am really disappointed in the math aspects of JavaScript:

    - the Math object is almost a joke because it has so few methods
    - you can't use ^ for exponentiation
    - the + operator is very limited; you can't add arrays of numbers or do scalar multiplication on arrays

    Now I have written some pretty basic extensions to the Math object and have considered writing a library of advanced Math features. Amazingly, there doesn't seem to be any sort of standard library already out, even for calculus, although there is one for vectors and matrices I was able to find. The notation for working with vectors and matrices is really bad when you can't use the + operator on arrays and you can't do scalar multiplication. For example, here is a hideous expression for subtracting two vectors, A - B:

        Math.vectorAddition(A, Math.scalarMultiplication(-1, B));

    I have been looking for some kind of open-source project to contribute to for a while, and even though my C++ is a bit rusty I would very much like to get into the code for the V8 engine and extend the + operator to work on arrays, get scalar multiplication to work, and possibly get the ^ operator to work for exponentiation. These things would greatly enhance the utility of any mathematical JavaScript framework. I really don't know how to get involved in something like the V8 engine other than to download the code and start working on it. Of course I'm afraid that since V8 is Chrome-specific, without browser cross-compatibility a fundamental change of this type is likely to be rejected for V8. I was hoping someone could either tell me why this is a bad idea, or else give me some pointers about how to proceed at this point to get some kind of approval to add these features. Thanks!

    Read the article

  • Dell XPS 15 L502x and Ubuntu 11.04 - HDMI output

    - by Jones
    Recently I bought my dream notebook, a Dell XPS 15, but since then this dream has become a kind of endless nightmare. I'm almost going crazy trying to make my graphics card driver work properly, but it seems to be just impossible. Yes, I have a 2GB NVIDIA GeForce GT 540M (Optimus) in it! It simply doesn't work. Every time I generate the xorg.conf, Ubuntu hangs while starting up, which forces me to remove this file to be able to start the notebook with the standard graphics settings. Another problem is that the Dell XPS 15 does NOT have a VGA output, but HDMI. So, to be able to use a second monitor I have to configure it through the NVIDIA X Server Settings, which only works if the driver is properly initialized with the xorg.conf. I've also tried to make it work with Bumblebee, but unfortunately it didn't help me much with the HDMI output. Do you guys have any idea how to solve this deadlock? Is there any way for me to use my second monitor?

    Read the article

  • PeopleSoft 8.52 iPad Certification

    - by Dave Bain
    One of the real gems in the PeopleTools 8.52 release is the certification of PeopleSoft applications running in the Safari browser on an iPad. It is nice that PeopleSoft is not constrained by technology like Adobe Flex/Flash, so announcements like "Adobe drops plans for mobile Flash support" do not limit our mobile solution to custom mobile development. Some parts of PeopleSoft applications operate better on iPads than others. One of the best is WorkCenters. WorkCenters were new in PeopleTools 8.51 and we are starting to see more and more adoption of them. WorkCenters are role-based landing pages that eliminate difficult navigation by providing access to most of the links, pages, and reports a user in a role needs. One of the nicest I've seen is the Supply Manager Workspace. Here are some links to screenshots of what a WorkCenter looks like on an iPad:

    - here's the standard PeopleSoft login page
    - the Supply Manager Workspace looks great full screen on an iPad
    - the iPad has a great zoom interface; here's a screenshot of an up-close view of an analytic
    - touch one of the analytics and it drills into the details

    Go ahead and give it a try. WorkCenters and Dashboards are starting to show up across applications. For a quick one to try, navigate to PeopleTools -> Integration Broker -> Integration Network WorkCenter. It's new in PeopleTools 8.52.

    Read the article

  • Where do I start in regards to making a Gnome/Unity Form Application

    - by JMK
    OK, so I am familiar with developing Forms and Console applications on Windows using Visual Studio .NET with C#, but where do I start when it comes to Linux distros like Ubuntu? Is there an equivalent? How would one go about matching what they can do in a Windows environment with .NET and C# in a Linux environment without .NET, coding in something like Java or C/C++? I am aware of Eclipse; does Eclipse have a form designer, or do you have to code the design of any Gnome/Unity forms manually? Can I use Eclipse to write the Linux equivalent of a console application, one that you just double-click on to run? I also know about Mono, but the idea is that I want to learn how to develop software without using anything in the Microsoft stack, and I am not sure where to start. What is the standard language/framework used to develop these types of applications on Linux? As I become more proficient with Visual Studio, C# and .NET, it has struck me that without these Microsoft tools, I am nothing. I am only capable of developing for the Microsoft OS and this scares me. This isn't some anti-Microsoft thing; Microsoft makes some incredible software/hardware/operating systems/IDEs, but it is generally a bad idea to put all of your eggs in one basket, so if I want to learn how to develop Terminal and Gnome/Unity form applications, where in the world do I start? I have used Linux on and off for years, but Windows has been my primary OS. However, I have watched Linux get better and better, and as much as I love Windows 7, I am dubious about Windows 8 (I for one will sorely miss my start menu)! Obviously MS aren't going anywhere anytime soon and I could spend the next couple of decades developing for .NET without any issues, but just because you can get away with something doesn't always mean it's a good idea. Thanks

    Read the article

  • JMX Monitoring of GlassFish Servers

    - by tjquinn
    Did you ever wonder what this message in your GlassFish server.log file means?

        JMXStartupService has started JMXConnector on JMXService URL service:jmx:rmi://192.168.2.102:8686/jndi/rmi://192.168.2.102:8686/jmxrmi

    It means you can monitor any GlassFish server process, remotely or locally, using any standard Java Management Extensions (JMX) client. Examples: jconsole or jvisualvm. Copy the part of the log message that starts with "service:" into the Add JMX Connection dialog of jvisualvm, or into the New Connection dialog of jconsole. (The full string is truncated in the on-screen display, but if you copied from the server.log and pasted into the form it should all be there.) The examples above are for a DAS, and your host will probably be different. The server.log files for other GlassFish servers (instances) will have similar log entries giving the JMX connection string to use for those processes. Look for the host and/or port to be different. Note a few things about security: Here we've assumed you are using the default admin username and password. If you are not, just enter a valid admin username and password for your installation. Once connected, you have normal access to all the JVM statistics and controls. You can use JMX clients that support MBeans to view the GlassFish configuration. When you connect to the DAS, you can also change that configuration, but you can only view the configuration when you connect to an instance. To use a JMX client on one system to connect to a GlassFish server running on another system, you need to enable secure admin if you have not already done so:

        asadmin change-admin-password     (respond to the prompts)
        asadmin enable-secure-admin
        asadmin restart-domain            (as prompted in the output from enable-secure-admin)

    Read the article

  • Sending changes to a terrain heightmap over UDP

    - by Floomi
    This is a more conceptual, thinking-out-loud question than a technical one. I have a 3D heightmapped terrain as part of a multiplayer RTS that I would like to terraform over a network. The terraforming will be done by units within the gameworld; the player will paint a "target heightmap" that they'd like the current terrain to resemble and units will deform towards that on their own (a la Perimeter). Given my terrain is 257x257 vertices, the naive approach of sending heights when they change will flood the bandwidth very quickly - updating a quarter of the terrain every second will hit ~66kB/s. This is clearly way too much. My next thought was to move to a brush-based system, where you send e.g. the centre of a circle, its radius, and some function defining the influence of the brush from the centre going outwards. But even with reliable UDP the "start" and "stop" messages could still be delayed. I guess I could compare timestamps and compensate for this, although it'd likely mean that clients would deform verts too much on their local simulations and then have to smooth them back to the correct heights. I could also send absolute vert heights in the "start" and "stop" messages to guarantee correct data on the clients. Alternatively I could treat brushes in a similar way to units, and do the standard position + velocity + client-side prediction jazz on them, with the added stipulation that they deform terrain within a certain radius around them. The server could then intermittently do a pass and send (a subset of) recently updated verts to clients as and when there's bandwidth to spare. Any other suggestions, or indications that I'm on the right (or wrong!) track with any of these ideas would be greatly appreciated.
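    For illustration, here is a rough C# sketch (all type and field names are hypothetical, not from any particular engine) of the brush-based idea: the only thing that crosses the wire is a small stroke record, and client and server both deform their local heightmaps from it with the same deterministic falloff.

        // Hypothetical sketch: a compact brush "stroke" message that both client and
        // server apply deterministically, so only brush parameters cross the wire.
        using System;

        public struct BrushStroke
        {
            public ushort CenterX;   // vertex coordinates on the 257x257 grid
            public ushort CenterZ;
            public float Radius;     // influence radius, in vertices
            public float Strength;   // height change per second at the centre
            public float Duration;   // seconds the brush was applied this tick
        }

        public static class Terrain
        {
            const int Size = 257;
            public static float[,] Heights = new float[Size, Size];

            // Apply one stroke with a smooth cosine falloff from centre to edge.
            public static void Apply(BrushStroke s)
            {
                int r = (int)Math.Ceiling(s.Radius);
                for (int z = Math.Max(0, s.CenterZ - r); z <= Math.Min(Size - 1, s.CenterZ + r); z++)
                {
                    for (int x = Math.Max(0, s.CenterX - r); x <= Math.Min(Size - 1, s.CenterX + r); x++)
                    {
                        float dist = (float)Math.Sqrt((x - s.CenterX) * (x - s.CenterX) +
                                                      (z - s.CenterZ) * (z - s.CenterZ));
                        if (dist > s.Radius) continue;
                        float falloff = 0.5f * (1f + (float)Math.Cos(Math.PI * dist / s.Radius));
                        Heights[x, z] += s.Strength * s.Duration * falloff;
                    }
                }
            }
        }

    A record like this is on the order of 16 bytes per stroke, so even a busy stream of strokes stays far below the cost of shipping raw vertex heights, and the server can still send occasional authoritative vertex corrections on top of it.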

    Read the article

  • Testcase runner for parametrized testcases

    - by Razer
    Let me explain my situation. I'm planning a kind of test case runner for executing test cases on external devices, which are microcontroller based. Let's consider the devices: Device 1 and Device 2. There exist a lot of test cases which can be run with one of the devices above, for example Testcase 1 and Testcase 2. The main reason that all the test cases can be run with any device is that the test cases validate some standard, and this software should be extensible for future devices. The test cases themselves must be runnable with changing parameters. For example, Testcase 1 does some timing verification and needs the datarate as an input parameter: 4800, 9600, 19200. Now, hoping you understand the situation, let me explain my design questions. For implementing the test cases I thought about an attribute-based approach, like NUnit does. The more complicated problem is how to define the parametrized test cases, like this:

        Device 1:
          Testcase 1: datarate: 4800, 9600, 19200
          Testcase 2: supply: 1, 2, 3
        Device 2:
          Testcase 1: datarate: 9600, 19200, 38400
          Testcase 2: supply: 3, 4, 5

    How would you design such a framework? I've done a similar design in Python where I had, for every device, an XML file containing the test case definitions like:

        <Testcase="Testcase 1" datarate=4800/>
        <Testcase="Testcase 1" datarate=9600/>
        <Testcase="Testcase 1" datarate=19200/>
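    For illustration, a rough C# sketch (hypothetical names; NUnit's own [TestCase]/[TestCaseSource] attributes cover the simpler, non-device-specific half of this) of an attribute-based runner that pulls its parameter sets from a per-device plan such as the XML above:

        using System;
        using System.Collections.Generic;
        using System.Reflection;

        [AttributeUsage(AttributeTargets.Method)]
        public class DeviceTestCaseAttribute : Attribute
        {
            public string Name { get; private set; }
            public DeviceTestCaseAttribute(string name) { Name = name; }
        }

        // Per-device configuration, e.g. loaded from the XML above:
        // test case name -> parameter sets (one object[] per run).
        public class TestPlan
        {
            public string Device { get; set; }
            public Dictionary<string, List<object[]>> Cases { get; set; }
        }

        public class StandardTests
        {
            [DeviceTestCase("Testcase 1")]
            public void VerifyTiming(int datarate)
            {
                // drive the device at the given datarate and assert on the result
            }

            [DeviceTestCase("Testcase 2")]
            public void VerifySupply(int supply)
            {
                // ...
            }
        }

        public static class Runner
        {
            // Invoke every attributed method once per parameter set defined for the device.
            public static void Run(object fixture, TestPlan plan)
            {
                foreach (MethodInfo method in fixture.GetType().GetMethods())
                {
                    var attr = (DeviceTestCaseAttribute)Attribute.GetCustomAttribute(
                        method, typeof(DeviceTestCaseAttribute));
                    if (attr == null || !plan.Cases.ContainsKey(attr.Name)) continue;

                    foreach (object[] args in plan.Cases[attr.Name])
                        method.Invoke(fixture, args);
                }
            }
        }

    The point of the split is that the test methods only declare which standard test case they implement, while the set of parameter values stays entirely in the per-device plan, so a new device means a new plan file rather than new test code.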

    Read the article

  • Wessty: Live with HTML 5 (2011 Speaker Tour)

    - by David Wesst
    That’s right: Wessty is on tour. Okay, the banner and the tour is a little over the top, but I am really excited about my upcoming speaking engagements to spread the word about HTML 5! I have already kicked off the tour with the Winnipeg Code Camp last weekend with the world premiere of HTML 5 for .NET Pro presentation, and the turn out fantastic. It was the last presentation of the day, but we still had some great questions about the new standard and got to see how HTML 5 can fit into .NET web applications today. In any case, above you can see the confirmed presentations that I will be doing so far in 2011, but there are a few more events that I have heard about that I hope to add to that list. Ultimately, expect that list to be updated over the course of the year as the year is young and there are plenty of conferences coming up! Presentation Resources As the tour continues, I will be posting the slides and the source code for the demonstrations up here on my site. They will be free of charge and give you the chance to review the demos and hopefully take advantage of some of the cool things you see in the presentations. Become part of the Tour If you are considering hosting an event where you think that HTML 5 could use a voice, drop me a line and let me know. I am always looking for opportunities to grow the tour to talk not just about HTML 5, but a variety of topics that relate to user interface and user experience development. This post also appears at http://david.wes.st

    Read the article

  • Designing Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've had the need for pseudo-ETL applications to be implemented that are able to extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:

        static void Main()
        {
            // Load parameters from app.config.
            // Get documents from queue.
            var files = someInterface.GetFiles(someFilterOrRegexPattern);

            foreach (var file in files)
            {
                // Extract metadata from the file.
                // Validate some attributes of the file; add any validation errors to an
                // in-memory structure (e.g. List<ValidationErrors>).
                if (isValid)
                {
                    // Upload using some wrapper for an ORM.
                    someInterface.Upload(meta.Param1, meta.Param2, ...);
                }
                else
                {
                    // Bounce the file.
                }
            }

            // Report any validation errors (via message bus or SMTP or some such).
        }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!
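    For reference, a minimal sketch (interface and class names are hypothetical, not from the actual codebase) of the "Worker with injected interfaces" shape described above:

        using System.Collections.Generic;

        public interface IDocumentQueue { IEnumerable<string> GetFiles(string filter); }
        public interface IContentUploader { void Upload(string path, IDictionary<string, string> metadata); }
        public interface IErrorNotifier { void Report(IEnumerable<string> errors); }

        public class DocumentEtlWorker
        {
            private readonly IDocumentQueue _queue;
            private readonly IContentUploader _uploader;
            private readonly IErrorNotifier _notifier;

            public DocumentEtlWorker(IDocumentQueue queue, IContentUploader uploader, IErrorNotifier notifier)
            {
                _queue = queue;
                _uploader = uploader;
                _notifier = notifier;
            }

            public void Run(string filter)
            {
                var errors = new List<string>();
                foreach (var file in _queue.GetFiles(filter))
                {
                    // extract metadata, validate, then upload or bounce - as in Main() above
                }
                if (errors.Count > 0) _notifier.Report(errors);
            }
        }

    Keeping the console Main() as nothing more than "wire up implementations, call Run()" makes it straightforward to unit-test the worker and to later host the same class in a Windows service or scheduled job instead of a console app.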

    Read the article

  • Do you have a contract between the Product Owner and the Team?

    - by Martin Hinshelwood
    Working in Scrum it is useful to define a Sprint Contract between the Product Owner (PO) and the implementation Team. Doing this helps to improve common understanding of, and sometimes to enforce, the relationship between the PO and the Team. This is simply an agreement between the PO and the Team for one Sprint; it is not really a commercial contract and should be confirmed via an e-mail at the beginning of every Sprint. "The implementation team agrees to do its best to deliver an agreed on set of features (scope) to a defined quality standard by the end of the sprint. (Ideally they deliver what they promised, or even a bit more.) The Product Owner agrees not to change his instructions before the end of the Sprint." - Agile Project Management (http://agilesoftwaredevelopment.com/blog/peterstev/10-agile-contracts#Sprint) Each of the Sprints in a Scrum project can be considered a mini-project that has Time (Sprint Length), Scope (Sprint Backlog), Quality (Definition of Done) and Cost (Team Size * Sprint Length). Only the scope can vary, and this is measured every Sprint. Figure: Good example - the Product Owner should reply to the team and commit to the contract. This Rule has been added to SSW's Rules to Better Scrum with TFS.   Technorati Tags: SSW, Scrum, SSW Rules

    Read the article

  • Restful WebAPI VS Regular Controllers

    - by Rohan Büchner
    I'm doing some R&D on what seems like a very confusing topic. I've also read quite a few of the other SO questions, but I feel my question might be unique enough to warrant asking it. We've never developed an app using pure WebAPI. We're trying to write a SPA-style app, where the back end is fully decoupled from the front end code. Assuming our service does not know anything about who is accessing/consuming it, WebAPI seems like the logical route to serve data, as opposed to using the standard MVC controllers and serving our data via an action result converted to JSON. This to me at least seems like an MC design... which seems odd, and not what MVC was meant for (look mom... no view). What would be considered normal convention in terms of performing action(y) calls? My sense is that my understanding of WebAPI is incorrect. The way I perceive WebAPI is that it's meant to be used in a CRUD sense, but what if I want to do something like "InitialiseMonthEndPayment"? Would I need to create a WebAPI controller called InitialiseMonthEndPaymentController and then perform a POST? Seems a bit weird, as opposed to an MVC controller where I can just add a new action on the MonthEnd controller called InitialisePayment. Or does this require a mindset shift in terms of design? Any further links on this topic would be really useful, as my fear is that we implement something that might be weird and could turn into a coding/maintenance concern later on.
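    For what it's worth, a minimal sketch (hypothetical names, and assuming the optional action-based route template api/{controller}/{action}/{id} has been registered via MapHttpRoute) of how a command-style call like this can live on a plain ApiController rather than its own controller:

        using System.Net;
        using System.Net.Http;
        using System.Web.Http;

        public class MonthEndPaymentRequest
        {
            public int AccountId { get; set; }
            public decimal Amount { get; set; }
        }

        public class MonthEndController : ApiController
        {
            // POST api/monthend/initialisepayment
            [HttpPost]
            public HttpResponseMessage InitialisePayment(MonthEndPaymentRequest request)
            {
                if (request == null)
                    return Request.CreateResponse(HttpStatusCode.BadRequest);

                // ... kick off the month-end payment workflow here (placeholder) ...

                return Request.CreateResponse(HttpStatusCode.Accepted,
                    new { reference = "hypothetical-id" });
            }
        }

    With the default verb-based routing you would instead lean on resource naming (e.g. POST to a "month-end payments" collection), so the choice is less CRUD-versus-commands and more about which routing convention you register.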

    Read the article

  • Class Design for special business rules

    - by Samuel Front
    I'm developing an application that allows people to place custom manufacturing orders. However, while most require similar paperwork, some of them have custom paperwork that only they require. My current class design has a Manufacturer class, of which one of the member variables is an array of RequiredSubmission objects. However, there are two issues that I am somewhat concerned about. First, some manufacturers are willing to accept either a standard form or their own custom form. I'm thinking of storing this in the RequiredSubmission object, with an array of alternate forms that are a valid substitute. I'm not sure that this is ideal, however. The major issue, however, is that some manufacturers have deadline cycles. For example, forms A, B and C have to be delivered by January 1, while payment must be rendered by January 10. If you miss those, you'll have to wait until the next cycle. I'm not exactly sure how I can get this to work with my existing classes - how can I say "this set of dates all belongs to the same cycle, with date A for form A, date B for form B, etc."? I would greatly appreciate any insights on how to best design these classes.
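    One possible shape for this (a rough C# sketch with hypothetical names, not a definitive design) is to let each requirement hold the set of forms that satisfy it, and to make the cycle the object that owns the dates:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class SubmissionForm
        {
            public string Code { get; set; }   // e.g. "A", "B", "Custom-A"
        }

        public class RequiredSubmission
        {
            public RequiredSubmission() { AcceptableForms = new List<SubmissionForm>(); }

            // Any one of these forms satisfies the requirement (standard or custom).
            public List<SubmissionForm> AcceptableForms { get; private set; }

            public bool IsSatisfiedBy(IEnumerable<SubmissionForm> submitted)
            {
                return submitted.Any(s => AcceptableForms.Any(a => a.Code == s.Code));
            }
        }

        public class DeadlineCycle
        {
            public DeadlineCycle() { SubmissionDeadlines = new Dictionary<RequiredSubmission, DateTime>(); }

            public DateTime PaymentDue { get; set; }

            // Due date for each requirement that belongs to this cycle.
            public Dictionary<RequiredSubmission, DateTime> SubmissionDeadlines { get; private set; }
        }

        public class Manufacturer
        {
            public Manufacturer()
            {
                RequiredSubmissions = new List<RequiredSubmission>();
                Cycles = new List<DeadlineCycle>();
            }

            public List<RequiredSubmission> RequiredSubmissions { get; private set; }

            // Empty for manufacturers without deadline cycles.
            public List<DeadlineCycle> Cycles { get; private set; }

            public DeadlineCycle NextCycle(DateTime now)
            {
                return Cycles.OrderBy(c => c.PaymentDue)
                             .FirstOrDefault(c => c.PaymentDue >= now);
            }
        }

    Grouping the per-form dates inside a DeadlineCycle answers the "these dates all belong together" question directly, and manufacturers with no cycles simply have an empty Cycles list.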

    Read the article

  • Friday Tips #5

    - by Chris Kawalek
    Happy Friday, everyone! Following up on yesterday's post about Oracle VM VirtualBox being selected as the best virtualization solution for 2012 by the readers of Linux Journal, our Friday tip is about that very cool piece of software.

    Question: How do I move a VM from one machine to another with Oracle VM VirtualBox?

    Answer by Andy Hall, Product Management Director, Oracle Desktop Virtualization: There are a number of ways to do this, with pros and cons for each. The most reliable approach is to Export and Import virtual machines:

    1. From the VirtualBox manager, simply use the File…Export appliance menu and follow the wizard's lead.
    2. Move the resulting file(s) to the destination machine.
    3. Import the VM into VirtualBox.

    This method will take longer and use more disk space than other methods because the configuration files and virtual hard drives are converted into an industry standard format (.ova or .ovf). But an advantage of this approach is that the creator of the virtual appliance can add a license which the importer will see and click-to-accept at import time. This is especially useful for ISVs looking to deliver pre-built, configured and tested appliances to their customers and prospects. Thanks Andy! Remember, if you have a question for us, use Twitter hashtag #AskOracleVirtualization. We'll see you next week! -Chris

    Read the article

  • How do I install Ubuntu 12.04 PPC on a PowerPC G4 from commands line or fix the graphical mode?

    - by Gerardo Rodríguez
    Good afternoon. I'm new to this Linux world, so I hope someone can help me. I recently got a Mac as a gift, a PowerPC G4, which has 1 GB of RAM but came with no optical drive or hard drive. So I put in a DVD burner and a 40 GB hard drive I got. Then I downloaded an ISO of Ubuntu 12.04 for PowerPC and burned it onto a CD. I'm trying to install from Open Firmware (as I don't have a Mac keyboard, I use a standard one). Finally, the installation CD boots, but in live mode, and after the Ubuntu 12.04 splash screen I get a message that there is a problem with my graphics adapter but that I can continue with minimal graphics, and it drops me to the command line. My question is: how do I install through text mode, or is there any way to fix this problem so that graphical mode runs and I can continue the installation of Ubuntu? And once Ubuntu is installed, will the problem be fixed, or what? I would appreciate your help; as I mentioned before, I know almost nothing about Ubuntu, but I think it will be easier than trying to get a proper Mac OS X. Have a good one.

    Read the article

  • Seeking solution for printing-reporting .NET

    - by Parhs
    I am developing an application that, in extreme cases, prints about 20-25 pages per minute to various thermal printers, in separate threads. Currently the templates for these are XAML XPS documents. All the printers have graphics drivers that support EMF/GDI printing, so GDI-to-EMF conversion is done by the operating system, resulting in slower performance. Sending raw text for printing is another good solution, but it doesn't always work, because some clients have old Chinese thermal printers that nobody supports, so it is impossible to change the codepage/emulation. Also, most computers running my software have low-end Atom CPUs. So I am thinking of returning to GDI/EMF printing and having both text-only reports and EMF reports. Another reason I want EMF is that here receipts are signed by an Electronic Fiscal Memory device. Most of these don't do a good job extracting text from XPS, as they don't follow the standard but rather how Windows converts GDI to XPS. Even in text-only mode some of them don't support all character encodings, and it is impossible to send a paper-cut command after the sign. I know that using a reporting engine would solve the rendering problem, but I don't want to buy one. All I want is to be able to show tabular data and insert an image and replaced text. I know there is StringTemplate, which could handle the template generation, but the problem is I would have to parse the template somehow and render it using GDI commands. Is there any other solution/approach for this? Or is there anything ready-made?
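    For the GDI/EMF route, the standard System.Drawing.Printing path is enough to get paginated text onto a driver that spools it as EMF. Below is a minimal sketch; the printer name, font and margins are placeholders, and a real thermal receipt would likely use a custom PaperSize and zero margins.

        // Minimal GDI printing sketch using System.Drawing.Printing.
        using System.Drawing;
        using System.Drawing.Printing;

        public static class ReceiptPrinter
        {
            public static void Print(string printerName, string[] lines)
            {
                var doc = new PrintDocument();
                doc.PrinterSettings.PrinterName = printerName;

                int index = 0;
                doc.PrintPage += (sender, e) =>
                {
                    using (var font = new Font("Consolas", 9f))
                    {
                        float y = e.MarginBounds.Top;
                        float lineHeight = font.GetHeight(e.Graphics);

                        // Draw as many lines as fit on this page, then ask for another page if needed.
                        while (index < lines.Length && y + lineHeight < e.MarginBounds.Bottom)
                        {
                            e.Graphics.DrawString(lines[index++], font, Brushes.Black,
                                                  e.MarginBounds.Left, y);
                            y += lineHeight;
                        }
                        e.HasMorePages = index < lines.Length;
                    }
                };

                doc.Print();
            }
        }

    The template engine (StringTemplate or otherwise) then only has to produce the lines or a simple layout model; the PrintPage handler is where that model is turned into DrawString/DrawImage calls.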

    Read the article

  • How do I add unmet dependencies for unity-lens-music autogen.sh?

    - by nickform
    I would like to build unity-lens-music on my newly-upgraded Ubuntu 12.10 machine. I followed these instructions from the Unity website to get the code. The README was empty, but I guessed that ./autogen.sh would be a sensible place to start. Unfortunately it exits with the following error:

        checking for LENS_DAEMON... no
        configure: error: Package requirements (glib-2.0 >= 2.27 gobject-2.0 >= 2.27 gio-2.0 >= 2.27 gio-unix-2.0 >= 2.27 dee-1.0 >= 1.0.7 sqlite3 >= 3.7.7 gee-1.0 json-glib-1.0 unity >= 6.90.0 unity-extras >= 6.90.0 tdb >= 1.2.6) were not met:

        No package 'dee-1.0' found
        No package 'sqlite3' found
        No package 'gee-1.0' found
        No package 'json-glib-1.0' found
        No package 'unity' found
        No package 'unity-extras' found
        No package 'tdb' found

    When I attempt to satisfy the dependencies that aren't found using apt-get install, I either find that there is no exact match (e.g. 'dee-1.0', which matches several packages), that I already have the latest version (e.g. sqlite3, unity), or that there is no match at all (e.g. unity-extras and tdb). There is a later suggestion to modify PKG_CONFIG_PATH if I have installed software in a non-standard location but, to my knowledge, I have not. How should I proceed?

    Read the article
