Search Results

Search found 8759 results on 351 pages for 'grind core'.


  • Brightness Crash and Fan issues in 12.04.1

    - by S.A. McIntosh
    I would just like to state beforehand that I am a total novice in using Ubuntu when it comes to the more complex issues, so I thought it would be best to finally come here and ask for help before being re-directed or closed out for a solution. I have already looked high and low on this board, but nothing came up for my particular case, so I might as well take a shot asking for the first time here. This is what I have at the moment:
    - Dell Inspiron 1764 w/ 64-bit Intel i5 Core
    - Dual boot: Windows 7 / Ubuntu 12.04.1 32-bit (from 12.04 install)
    - Unity shell
    - Linux kernel: 3.2.0-32 generic-pae
    ...and this is my fglrxinfo:
        OpenGL vendor string: Advanced Micro Devices, Inc.
        OpenGL renderer string: ATI Mobility Radeon HD 5000 Series
        OpenGL version string: 4.2.11627 Compatibility Profile Context
    The one issue I have with using Ubuntu is brightness. With the driver installed, every time I use the slider in the Brightness and Lock settings or use the keyboard function keys, the screen freezes, goes black and comes up with a page of scrambled colors like this in the video. So I have looked all over this board and the web for a solution. This is what I have done so far to fix this:
    - First solution: Looking around, I found this small fix using the terminal: sudo gedit /etc/rc.local followed by adding this into "rc.local": echo # > /sys/class/backlight/acpi_video0/brightness This rarely works with the graphics driver still installed; I sometimes get lucky, say during a restart, but rebooting only snaps the brightness back to maximum.
    - Second solution: Simply remove the graphics driver while leaving the first fix in place. This solves the issue, but results in the monitor flickering and flashing at startup, which in itself is not a problem to me but is maybe not so good for monitor health. It also causes the fan to speed up throughout the session and renders any program that needs the driver useless.
    - Third solution: This is the most obvious. Just use the brightness control in the AMD Catalyst Control Center software that came with the driver, and I can say that its form of brightness control is HORRIBLE compared to the actual settings.
    Which leads up to where I am now: back to the driver to stop the fan speed-up. It seems the only workaround for the brightness crash is to use the keyboard-controlled brightness at the login screen, NOT the desktop, if I want the desired effect, but it will just snap back to maximum brightness again if I restart. The fan-speed problem is dealt with, but I now run the risk of crashing my computer if I so much as touch the brightness settings. Speaking of which, I found this on Launchpad and it seems that the issue has been going on since June of 2012. Any help, redirect link or reference would be greatly appreciated. Thank you.


  • Getting away from a customized Magento 1.4 installation - Magento 1.6, OpenCart, or others?

    - by Phil
    I'm dealing with a Magento 1.4.0.0 Community Edition installation with various undocumented changes to the core (mostly integration with an ERP system), an outdated Sweet Tooth Points & Rewards module and some custom payment providers. It also doubles as a mediocre blogging/CMS system. It has one store each for 3 different languages, with about 40 product categories for a few hundred products.
    [rant] With no prior experience with any PHP e-commerce systems, I find it very difficult to work with. I attempted to install Magento 1.4.0.0 on my local WAMP dev machine; it installs fine, but the main page and search do not show any products no matter what I do in the backend admin panel. I don't know what's wrong with it, and whatever information I googled is either too old or too new for Magento 1.4. Later I was given FTP access to the testing server, on which neither my manager nor I have permission to install XDebug, as apparently it runs on the same server as the production server (yikes). Trying to learn how Magento works is torture. I spent a week trying to add some fields into the Onepage Checkout before giving up and going to work on something else. The template system, just like the rest of Magento, is a bloated mishmash of overcomplicated directory structures, weird config XML files and EAV databases. I went into 6 different models and several content blocks in the backend just to change what the front page looks like. With little to no helpful, clear documentation (unlike CodeIgniter) and various breaking changes between minor point revisions that make it hard to find useful information, Magento 1.4 is a developer killer. [/rant]
    The client is planning to redesign the site and has decided it might as well move on from this unsustainable, hacky, upgrade-unfriendly, developer-unfriendly mess. Magento 1.4 is starting to show its age; with Magento 1.7 coming soon, the client is considering upgrading to Magento 1.6 or 1.7 if it has improved since 1.4. The customizations done to the current Magento 1.4 installation will have to be redone, and a new license for the Sweet Tooth Points & Rewards module will have to be bought.
    The client is also open to other e-commerce systems. I've looked at OpenCart and it seems to be quite developer-friendly with a fairly simple structure. I found some complaints regarding its performance when the shop has thousands of categories or products, but this is not an issue with the current number of products my client has. It seems to be solid ground for easy customization to bring the rewards system and ERP integration over. What should the client upgrade to in this case?


  • What's Old is New Again

    - by David Dorf
    Last night I told my son he could stream music to his tablet "from the cloud" (in this case, the Amazon Cloud). He paused, then said, "What is the cloud?" I replied, "a bunch of servers connected to the internet." Apparently he had visions of something much more magnificent. Another similar term is "big data." These marketing terms help to quickly convey topics but are oversimplifications that are open to many interpretations. At their core, those terms are shiny packages holding recycled ideas.
    I see many headlines declaring big data changes everything, but it doesn't. Savvy retailers have been dealing with large volumes of data since the electronic cash register was invented. But there have been a few changes to the landscape that make big data a topic of conversation:
    1. Computing power has caught up to storage volumes. It's now possible to more thoroughly analyze the copious volumes of data retailers have been squirreling away. CPUs are faster, solid-state drives are more plentiful, and new ways to store and search data are available. My iPhone has more power than the computer used in the Apollo mission to the moon.
    2. Unstructured data is everywhere. The Web used to be where retailers published product information, but now users are generating the bulk of the content in the form of comments, videos, and "likes." The variety of information available to retailers is huge, and its meaning difficult to discern.
    3. Everything is connected. Looking at a report from my router, there are no fewer than 20 active devices on my home network. We can track the location of mobile phones, tag products with RFID, and set our thermostats (I love my Nest) from a thousand miles away. Not only is there more data, but it's arriving at higher velocity.
    Careful readers will note the three Vs that help define so-called big data: volume, variety, and velocity. We now have more volume, more variety, and more velocity, and different technologies to deal with them. But at the heart, the objectives are still the same: informed decisions, accurate forecasts, improved optimizations. So don't let the term "big data" throw you off the scent. Retailers still need to execute on the basics. But do take a fresh look at the data that's available and the new technologies to process it. The landscape will continue to change, and agile organizations will always be reevaluating their approaches. You can just add some more weapons to the arsenal.


  • ADF Code Guidelines

    - by Chris Muir
    During Oracle OpenWorld 2012 the ADF Product Management team announced a new OTN website, the ADF Architecture Square. While OOW represents a great opportunity to let customers know about new and exciting developments, the problem with making announcements during OOW, however, is that customers are bombarded with so many messages that it's easy to miss something important. So in this blog post I'd like to highlight one of the initial core offerings of the ADF Architecture Square website: a new document entitled ADF Code Guidelines.
    Now the title of this document should hopefully make it obvious what the document contains, but what's the purpose of the document? Why did Oracle create it? Having worked as an ADF consultant before joining Oracle, one thing I noted amongst ADF customers who had successfully deployed production systems was that they all approached software development in a professional and engineered way, and all of these customers had their own guideline documents on ADF best practices, conventions and recommendations. These documents, designed to be consumed by their own staff to ensure ADF applications were "built right", typically sourced their guidelines from the team's own expert learnings and from the huge amount of ADF technical collateral that is publicly available - manuals and whitepapers, presentations and blog posts, some written by Oracle and some written by independent sources.
    Now this is all good and well for the teams that have gone through this effort, gathering all the information and putting it into structured documents; kudos to them. But for new customers who want to break into the ADF space, who have project pressures to deliver ADF solutions without necessarily working on assembling best practices, creating such a document is understandably (regrettably?) a low priority. So in recognition of this hurdle, at Oracle we've devised the ADF Code Guidelines. This document sets out ADF code guidelines, practices and conventions for applications built using ADF Business Components and ADF Faces Rich Client (release 11g and greater). The guidelines are summarized from a number of Oracle documents and other 3rd-party collateral, with the goal of giving developers and development teams a short circuit on producing their own best-practices collateral.
    The document is not a final product, but a living document that will be extended to cover new information as it is discovered or as the ADF framework changes. Readers are encouraged to discuss the guidelines on the ADF EMG and provide constructive feedback to me (Chris Muir) via the ADF EMG Issue Tracker. We hope you'll find the ADF Code Guidelines useful and look forward to providing updates in the near future.
    Image courtesy of paytai / FreeDigitalPhotos.net


  • Using lookahead assertions in regular expressions

    - by Greg Jackson
    I use regular expressions on a daily basis, as my daily work is 90% in Perl (legacy codebase, but that's a different issue). Despite this, I still find lookahead and lookbehind to be terribly confusing and often unreadable. Right now, if I were to get a code review with a lookahead or lookbehind, I would immediately send it back to see if the problem can be solved by using multiple regular expressions or a different approach. The following are the main reasons I tend not to like them:
    - They can be terribly unreadable. Lookahead assertions, for example, start from the beginning of the string no matter where they are placed. That, among other things, can cause some very "interesting" and non-obvious behaviors.
    - It used to be the case that many languages didn't support lookahead/lookbehind (or supported them as "experimental features"). This isn't the case quite as much anymore, but there's still always the question of how well it's supported.
    - Quite frankly, they feel like a dirty hack. Regexps often already are, but they can also be quite elegant, and have gained widespread acceptance.
    - I've gotten by without any need for them at all... sometimes I think that they're extraneous.
    Now, I'll freely admit that especially the last two reasons aren't really good ones, but I felt that I should enumerate what goes through my mind when I see one. I'm more than willing to change my mind about them, but I feel that they violate some of my core tenets of programming, including:
    - Code should be as readable as possible without sacrificing functionality -- this may include doing something in a less efficient but clearer way, as long as the difference is negligible or unimportant to the application as a whole.
    - Code should be maintainable -- if another programmer comes along to fix my code, non-obvious behavior can hide bugs or make functional code appear buggy (see readability).
    - "The right tool for the right job" -- I'm sure you can come up with contrived examples that could use lookahead, but I've never come across something that really needs them in my real-world development work. Is there anything that they're really the best tool for, as opposed to, say, multiple regexps? (Or, alternatively, are they the best tool for most cases they're used for today?)
    My question is this: is it good practice to use lookahead/lookbehind in regular expressions, or are they simply a hack that has found its way into modern production code? I'd be perfectly happy to be convinced that I'm wrong about this, and simple examples are useful for illustration, but by themselves won't be enough to convince me.
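    One commonly cited answer to the "is there anything they're really the best tool for" question is composite validation of a single input, such as a password policy. Here is a minimal, hypothetical sketch in C# (illustrative only, not code from the original post) contrasting the two styles:

        using System;
        using System.Text.RegularExpressions;

        class LookaheadDemo
        {
            static void Main()
            {
                // One pattern: each lookahead asserts a condition at the current
                // position (here, the start) without consuming any characters.
                var strong = new Regex(@"^(?=.*\d)(?=.*[A-Z]).{8,}$");
                Console.WriteLine(strong.IsMatch("Passw0rdX")); // True
                Console.WriteLine(strong.IsMatch("password1")); // False: no uppercase

                // The multiple-regex alternative the post argues for:
                string input = "Passw0rdX";
                bool ok = Regex.IsMatch(input, @"\d")
                       && Regex.IsMatch(input, "[A-Z]")
                       && input.Length >= 8;
                Console.WriteLine(ok); // True
            }
        }

    Both are defensible; the lookahead version arguably wins only where exactly one pattern is required, e.g. a validation attribute or config field that accepts a single regex.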


  • The challenge of giving a positive No

    - by MarkPearl
    I find it ironic that the more I am involved in the software industry, the more apparent it becomes that soft skills are just as important as, if not more important than, the technical abilities of a developer. One of the biggest challenges I have faced in my career is managing client expectations about what one can deliver, and being able to work with multiple clients. If I look at where things commonly go pear-shaped, one area that features a lot is where I should have said "No" to a request, but because of the way the request was made I ended up saying yes. Time and time again this has caused immense pain. Thus, when I saw that Amazon had a book titled "The Power of a Positive No" by William Ury, I had to buy it and read it.
    In William's book he explains an approach to saying No that, while extremely simple, does change the way a No is presented. In essence he talks of a Yes! > No > Yes? pattern:
    1. Yes! -> positively and concretely describing your core interests and values
    2. No -> explicitly linking your no to this Yes!
    3. Yes? -> suggesting another positive outcome or agreement to the other person
    Let me explain how I understood it. If you are working on a really important project and someone asks you to add a quick feature to another project, your Yes! would be to the more important project, which would mean a No to the quick feature, and an option for your Yes? may be an alternative time when you can look at it. An example of an appropriate response would be: "It is really important that I keep the commitment I made to this customer to finish his project on time, so I cannot work on your feature right now, but I am available to help you in a week's time."
    William then goes on to explain the type of behaviour a person may display when the no is received. He illustrates this with a diagram called the curve of acceptance. William points out that if you are aware of the type of behaviour to expect, it empowers you to stay true to your no. Personally I think reading and having an understanding of the "soft" side of things, like saying no, is invaluable to a developer.


  • Asus N550JV audio problem: no sound from notebook's speakers

    - by skywalker
    Ubuntu 13.10. The problem is: the internal speakers don't work. I have no problem when I'm using the headphones. There is no hardware issue, since in Windows 8 everything works perfectly (external subwoofer included). I'm trying to modify /etc/modprobe.d/alsa-base.conf but I can't find the correct model to put into:
        options snd-hda-intel model=
    The file HD-Audio-Models.txt doesn't contain the model for the ALC668. Some info:
    ~$ sudo aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: MID [HDA Intel MID], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: MID [HDA Intel MID], device 7: HDMI 1 [HDMI 1]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: MID [HDA Intel MID], device 8: HDMI 2 [HDMI 2]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 1: PCH [HDA Intel PCH], device 0: ALC668 Analog [ALC668 Analog]
          Subdevices: 0/1
          Subdevice #0: subdevice #0
    ~$ sudo lspci -v | grep -A7 -i "audio"
        00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
          Subsystem: Intel Corporation Device 2010
          Flags: bus master, fast devsel, latency 0, IRQ 52
          Memory at f7a14000 (64-bit, non-prefetchable) [size=16K]
          Capabilities: [50] Power Management version 2
          Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit-
          Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
          Kernel driver in use: snd_hda_intel
        --
        00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 04)
          Subsystem: ASUSTeK Computer Inc. Device 11cd
          Flags: bus master, fast devsel, latency 0, IRQ 53
          Memory at f7a10000 (64-bit, non-prefetchable) [size=16K]
          Capabilities: [50] Power Management version 2
          Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
          Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
          Capabilities: [100] Virtual Channel
    PS info:
    ~$ amixer -c 0
        Simple mixer control 'IEC958',0
          Capabilities: pswitch pswitch-joined
          Playback channels: Mono
          Mono: Playback [on]
        Simple mixer control 'IEC958',1
          Capabilities: pswitch pswitch-joined
          Playback channels: Mono
          Mono: Playback [on]
        Simple mixer control 'IEC958',2
          Capabilities: pswitch pswitch-joined
          Playback channels: Mono
          Mono: Playback [on]
    ~$ pacmd dump-volumes
        Welcome to PulseAudio! Use "help" for usage information.
        Sink 0: reference = 0: 76% 1: 76%, real = 0: 76% 1: 76%, soft = 0: 100% 1: 100%, current_hw = 0: 76% 1: 76%, save = yes
        Input 8: volume = 0: 100% 1: 100%, reference_ratio = 0: 100% 1: 100%, real_ratio = 0: 100% 1: 100%, soft = 0: 100% 1: 100%, volume_factor = 0: 100% 1: 100%, volume_factor_sink = 0: 100% 1: 100%, save = no
        Source 0: reference = 0: 100% 1: 100%, real = 0: 100% 1: 100%, soft = 0: 100% 1: 100%, current_hw = 0: 100% 1: 100%, save = no
        Source 1: reference = 0: 16% 1: 16%, real = 0: 16% 1: 16%, soft = 0: 100% 1: 100%, current_hw = 0: 16% 1: 16%, save = yes


  • Lesser Known NHibernate Session Methods

    - by Ricardo Peres
    The NHibernate ISession, the core of NHibernate usage, has some methods which are quite misunderstood and underused; to name a few: Merge, Persist, Replicate and SaveOrUpdateCopy. Their purposes are:
    - Merge: copies properties from a transient entity to an eventually loaded entity with the same id in the first-level cache; if there is no loaded entity with the same id, one will be loaded and placed in the first-level cache first; if using versioning, the transient entity must have the same version as in the database.
    - Persist: similar to Save or SaveOrUpdate, attaches a possibly new entity to the session, but does not generate an INSERT or UPDATE immediately, and thus the entity does not get a database-generated id; it will only get it at flush time.
    - Replicate: copies an instance from one session to another session, perhaps from a different session factory.
    - SaveOrUpdateCopy: attaches a transient entity to the session and tries to save it.
    Here are some samples of their use:
        ISession session = ...;
        AuthorDetails existingDetails = session.Get<AuthorDetails>(1); // loads an entity and places it in the first level cache
        AuthorDetails detachedDetails = new AuthorDetails { ID = existingDetails.ID, Name = "Changed Name" }; // a detached entity with the same ID as the existing one
        Object mergedDetails = session.Merge(detachedDetails); // merges the Name property from the detached entity into the existing one; the detached entity does not get attached
        session.Flush(); // saves the existingDetails entity, since it is now dirty, due to the change in the Name property

        AuthorDetails details = ...;
        ISession session = ...;
        session.Persist(details); // details.ID is still 0
        session.Flush(); // saves the details entity now and fetches its id

        ISessionFactory factory1 = ...;
        ISessionFactory factory2 = ...;
        ISession session1 = factory1.OpenSession();
        ISession session2 = factory2.OpenSession();
        AuthorDetails existingDetails = session1.Get<AuthorDetails>(1); // loads an entity
        session2.Replicate(existingDetails, ReplicationMode.Overwrite); // saves it into another session, overwriting any possibly existing one with the same id; other options are Ignore, where any existing record with the same id is left untouched, Exception, where an exception is thrown if there is a record with the same id, and LatestVersion, where the latest version wins


  • Bluetooth firmware problem in Ubuntu 13.04

    - by chanzerre
    I have a Dell Inspiron 15R 5520 laptop. Bluetooth is not working at all.
    rfkill list all gives:
        0: hci0: Bluetooth
            Soft blocked: no
            Hard blocked: no
        1: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: no
        2: brcmwl-0: Wireless LAN
            Soft blocked: no
            Hard blocked: no
    dmesg | grep -i bluetooth gives:
        [   13.644428] Bluetooth: Core ver 2.16
        [   13.644445] Bluetooth: HCI device and connection manager initialized
        [   13.644453] Bluetooth: HCI socket layer initialized
        [   13.644455] Bluetooth: L2CAP socket layer initialized
        [   13.644461] Bluetooth: SCO socket layer initialized
        [   15.861363] Bluetooth: hci0 command 0x1003 tx timeout
        [   15.903443] Bluetooth: can't load firmware, may not work correctly
        [   17.332535] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
        [   17.332538] Bluetooth: BNEP filters: protocol multicast
        [   17.332544] Bluetooth: BNEP socket layer initialized
        [   17.393768] Bluetooth: RFCOMM TTY layer initialized
        [   17.393781] Bluetooth: RFCOMM socket layer initialized
        [   17.393783] Bluetooth: RFCOMM ver 1.11
    hciconfig gives:
        hci0: Type: BR/EDR Bus: USB
            BD Address: E0:06:E6:D5:DB:46 ACL MTU: 1021:8 SCO MTU: 64:1
            UP RUNNING PSCAN ISCAN
            RX bytes:687 acl:0 sco:0 events:56 errors:0
            TX bytes:2024 acl:0 sco:0 commands:52 errors:0
    I have visited the site http://wireless.kernel.org/en/users/Drivers/b43 and according to it lspci -vnn -d 14e4: gives:
        08:00.0 Network controller [0280]: Broadcom Corporation BCM43142 802.11b/g/n [14e4:4365] (rev 01)
            Subsystem: Dell Wireless 1704 802.11n + BT 4.0 [1028:0016]
            Flags: bus master, fast devsel, latency 0, IRQ 17
            Memory at c1500000 (64-bit, non-prefetchable) [size=32K]
            Capabilities: <access denied>
            Kernel driver in use: wl
    So I got my PCI ID as 14e4:4365, which it says is not supported. The alternative is wl. What should I do? My Wi-Fi is working normally without any problems, but Bluetooth is not working.
    sudo dpkg -i wireless-bcm43142-dkms_6.20.55.19-1_amd64.deb gives the following error:
        (Reading database ... 208543 files and directories currently installed.)
        Unpacking wireless-bcm43142-dkms (from wireless-bcm43142-dkms_6.20.55.19-1_amd64.deb) ...
        Setting up wireless-bcm43142-dkms (6.20.55.19-1) ...
        Loading new wireless-bcm43142-6.20.55.19 DKMS files...
        Building only for 3.8.0-23-generic
        Building initial module for 3.8.0-23-generic
        Traceback (most recent call last):
          File "/usr/share/apport/package-hooks/dkms_packages.py", line 22, in <module>
            import apport
        ImportError: No module named apport
        Error! Bad return status for module build on kernel: 3.8.0-23-generic (x86_64)
        Consult /var/lib/dkms/wireless-bcm43142/6.20.55.19/build/make.log for more information.


  • Getting Started with Kinect for Windows.

    - by Vishal
    Hello folks,
    Recently I got involved in a project building a demo application for one of our customers with Kinect for Windows. Yes, something similar to what Tom Cruise did in the movie Minority Report: waving arms, moving stuff around, swipes, speech recognition, manipulating computer screens without even touching them. Pretty cool!!! The idea in the movie showed us how technology might look 50 years from that day. (Minority Report movie clip.)
    Well, that 50-year time frame got squeezed, and recently, on Feb 1st 2012, Microsoft released the official Kinect for Windows SDK. That's just 10 years from the movie release. Although the product is in its early stages, with developer creativity and continuously improving hardware, those features shown in the movie are not very far away from becoming a reality. Soon after releasing the SDK, Microsoft announced in March the release of its new Kinect for Windows SDK version 1.5, coming out sometime in May. More history about Kinect.
    Anyway, for a newbie with Kinect, where would you start? Here is what I would suggest you can do:
    - Watch the Kinect for Windows Quick Start Series by Den Fernandez.
    - Download the Kinect for Windows SDK and start playing around with the demos in it. It also comes with some basic Kinect documentation.
    - Coding4Fun Kinect Projects | lots more videos and open source project information.
    - Kinect for Windows session at Techdays NL, demo by Jesus Rodriguez.
    - Book: Beginning Kinect Programming with the Microsoft Kinect SDK. | I did go through a few of the chapters in this book and, based on that, it talks deeply about core Kinect concepts but in a very easy-to-understand way. I would definitely suggest this book for any Kinect developer. I liked the way it explained gesture recognition in Chapter 6.
    - Buy your Kinect device from either Amazon or Newegg. You will get it cheaper than buying it from the Microsoft Store. Personally, I love Newegg.com as I have never had any order-related or shipping issues with them.
    - I always hate developing UI applications but, well, you will need to get your hands dirty with WPF too in order to work with Kinect. So get started with WPF concepts as well (a minimal sensor-startup sketch follows below).
    I will keep adding stuff to the list once I come across it, but so far the above list would definitely get you started building your first Kinect apps. Till then, Happy Kinecting…!!!!!
    Thanks,
    Vishal Mody
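    For a taste of the API before diving into those resources, here is a minimal, hypothetical C# sketch of starting a sensor and listening for skeleton frames with the v1 SDK (it assumes the Microsoft.Kinect assembly that ships with the SDK; illustrative only, not code from the original post):

        using System;
        using System.Linq;
        using Microsoft.Kinect;

        class KinectHello
        {
            static void Main()
            {
                // Take the first connected sensor the SDK can see.
                KinectSensor sensor = KinectSensor.KinectSensors
                    .FirstOrDefault(s => s.Status == KinectStatus.Connected);
                if (sensor == null) return;

                sensor.SkeletonStream.Enable(); // joint tracking, the basis for gestures
                sensor.SkeletonFrameReady += (o, e) =>
                {
                    using (SkeletonFrame frame = e.OpenSkeletonFrame())
                    {
                        if (frame != null)
                            Console.WriteLine("Skeleton frame: {0} slots", frame.SkeletonArrayLength);
                    }
                };

                sensor.Start();
                Console.ReadLine(); // let events pump until Enter is pressed
                sensor.Stop();
            }
        }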


  • WebLogic Server Weekly for March 26th, 2012: WLS 1211 Update, Java 7 Certification, Galleria, WebLogic for DBAs, REST and Enterprise Architecture, Singleton Services

    - by Steve Button
    WebLogic Server 12c Certified with Java 7 for Production Use
    WebLogic Server 12c (12.1.1) has been certified with JDK 7 for development usage since December, and we have now completed JDK 7 certification for use with production systems. In doing so, we have updated the WebLogic Server 12c (12.1.1) distributions, incorporating fixes associated with JDK 7 support as well as some bundled patches that address several issues that have been discovered since the initial release. These updated distributions are available for download from OTN and will be beneficial for all WebLogic Server 12c (12.1.1) users in general. What's New | Release Notes | Download Here!
    Updated Oracle WebLogic Server 12.1.1.0 distribution
    Never one to miss a trick, Markus Eisele was one of the first to notice the WebLogic Server 12c update and post a blog about it: "Sources told me that as of Friday last week you have an updated version of WebLogic Server 12c on OTN." http://blog.eisele.net/2012/03/updated-oracle-weblogic-server-12110.html
    Using WebLogic Server 12c with Java 7 - Video
    To illustrate the use of Java 7 with WebLogic Server 12c, I put together a screencam showing the creation of a domain using Java 7, then building and deploying a simple web application that uses Java 7 syntax to show it working.
    Ireland OUG Presentation: WebLogic for DBAs
    Simon Haslam posted his slides from a presentation he gave in Dublin on 21/3/12 at the OUG Ireland conference. In this presentation, he explains the core concepts and ideas behind WebLogic Server, walks through an installation and offers some tips and common gotchas to avoid. Simon also covers some aspects of installing and using Enterprise Manager 12c. Note: I usually install the JVM and use the generic .jar installer rather than using an installer bundled with a JVM. http://www.slideshare.net/Veriton/weblogic-for-dbas-10h
    Slightly Retro: Jeff West on Enterprise Architecture and REST
    In this week's flashback, we look at Jeff West's blog from early 2011, where he provides some thoughtful opinions on enterprise architecture and innovation, then jumps into his views on REST: "After I progressed in my career and did more team-leading and architecture type roles I was 'educated' on what it meant to have Asynchronous and Long-Running processes as part of your Enterprise Application architecture. If I had a synchronous process then I needed a thread available to service the request and then provide the response." https://blogs.oracle.com/jeffwest/entry/weblogic_integration_wli_web_services_and_soap_and_rest_part_1
    Starting Managed Servers without an Administration Server using Node Manager and WLST
    Blogger weblogic-tips shows how to start a managed server without going through the Administration Server, using the Node Manager and WLST: connect WLST to a Node Manager by entering the nmConnect command. http://www.weblogic-tips.com/2012/02/18/starting-managed-servers-without-an-administration-server-using-node-manager-and-wlst/
    Using WebLogic Server Singleton Services
    WebLogic Server has supported the notion of a Singleton Service for a number of releases, in which WebLogic Server will maintain a single instance of a configured singleton service on one managed server within a cluster. This blog demonstrates how the singleton service can be accessed and used from applications deployed on the cluster. http://buttso.blogspot.com.au/2012/03/weblogic-server-singleton-services.html


  • MPEG-2 playback inconsistent

    - by DustByte
    Many years ago I gave up on Linux because video playback was choppy. Now I'm back, and video playback is still playing up... I have two MPEG files: good.mpg and bad.mpg. Here is some information about the two files, using avprobe (shown in the snapshot). My machine is an Intel Core 2 Duo E8400 @ 3.00GHz x 2, 64-bit. I do not know what graphics card I have. I run Ubuntu 12.04.
    So far I have had no problems with YouTube and playback of various video files, including playback of the file good.mpg, included in the avprobe snapshot above. However, the file bad.mpg gives me a headache! The file bad.mpg was produced by a respectable "old-video-tapes-to-DVD" company. I converted over 10 Video-8 tapes to MPEG through them, and today I collected my hard drive containing the MPEG files. Unfortunately I have problems watching them! Here are some details:
    - Using Totem Movie Player 3.0.1, it works well for several seconds, then it gets choppy and the playback is not at all smooth. The player also easily freezes for a while when trying to jump to another position in the file. Most strangely though, the total time is shown as 0:42 (42 seconds) instead of the true 00:39:11.
    - The VLC media player is doing a better job. It shows the correct total length, but as soon as I jump in the video to a new position, it stalls. Playback also stalls after 30 seconds if I press play and leave it.
    - Using Handbrake and choosing bad.mpg as the source, there is only one title to choose, and it is 6 min 53 seconds. I would have expected the full 39 minutes of the video to show.
    - Lastly, putting the file bad.mpg in Dropbox and viewing it on my iPad with the Dropbox app seems fine (disregard the lack of easy jumping forward due to real-time encoding when streaming it).
    My question is simple: what is going on?! Why do I have problems playing the MPEG-2 files I just paid good money for (the issue with bad.mpg applies to all the files I had encoded)? Is it an issue with my particular Linux machine? The graphics card? But why has everything worked fine so far, and why does the good.mpg file not cause any problems?


  • Gradle for NetBeans RCP

    - by Geertjan
    Start with the NetBeans Paint Application and do the following to build it via Gradle (i.e., no Gradle/NetBeans plugin is needed for the following steps), assuming you've set up Gradle. Do everything below in the Files or Favorites window, not in the Projects window.
    In the application directory "Paint Application":
    Create a file named "settings.gradle", with this content:
        include 'ColorChooser', 'Paint'
    Create another file in the same location, named "build.gradle", with this content:
        subprojects {
            apply plugin: "announce"
            apply plugin: "java"
            sourceSets {
                main {
                    java { srcDir 'src' }
                    resources { srcDir 'src' }
                }
            }
        }
    In the module directory "Paint":
    Create a file named "build.gradle", with this content:
        dependencies {
            compile fileTree("$rootDir/build/public-package-jars").matching {
                include '**/*.jar'
            }
        }
        task show << {
            configurations.compile.each { dep -> println "$dep ${dep.isFile()}" }
        }
    Note: The above is a temporary solution. As you can see, the expectation is that the JARs are in the 'build/public-package-jars' folder, which assumes an Ant build has been done prior to the Gradle build. Now run 'gradle classes' in the "Paint Application" folder and everything will compile correctly. So, this is how the Paint Application now looks:
    Preferable to the second 'build.gradle' would be this, which uses the JARs found in the NetBeans Platform:
        netbeansHome = '/home/geertjan/netbeans-dev-201111110600'
        dependencies {
            compile files("$rootDir/ColorChooser/release/modules/ext/ColorChooser.jar")
            def projectXml = new XmlParser().parse("nbproject/project.xml")
            projectXml.configuration.data."module-dependencies".dependency."code-name-base".each {
                if (it.text().equals('org.openide.filesystems')) {
                    def dep = "$netbeansHome/platform/core/" + it.text().replace('.', '-') + '.jar'
                    compile files(dep)
                } else if (it.text().equals('org.openide.util.lookup') || it.text().equals('org.openide.util')) {
                    def dep = "$netbeansHome/platform/lib/" + it.text().replace('.', '-') + '.jar'
                    compile files(dep)
                } else {
                    def dep = "$netbeansHome/platform/modules/" + it.text().replace('.', '-') + '.jar'
                    compile files(dep)
                }
            }
        }
        task show << {
            configurations.compile.each { dep -> println "$dep ${dep.isFile()}" }
        }
    However, when you run 'gradle classes' with the above, you get an error like this:
        geertjan@geertjan:~/NetBeansProjects/PaintApp1/Paint$ gradle classes
        :Paint:compileJava
        [ant:javac] Note: Attempting to workaround javac bug #6512707
        [ant:javac]
        [ant:javac] An annotation processor threw an uncaught exception.
        [ant:javac] Consult the following stack trace for details.
        [ant:javac] java.lang.NullPointerException
        [ant:javac]     at com.sun.tools.javac.util.DefaultFileManager.getFileForOutput(DefaultFileManager.java:1058)
    No idea why the above happens; still trying to figure it out. Once the above works, we can start figuring out how to use the NetBeans Maven repo instead, and then the user of the plugin will be able to select whether to use local JARs or JARs from the NetBeans Maven repo. Many thanks to Hans Dockter, who put the above together with me today via Skype!


  • Getting WLAN on my Laptop to work (Medion MD98300)

    - by Anand Böhmer
    Dear Ubuntu Community,
    I am having difficulties getting the WLAN adapter on my Medion MD98300 laptop to work. The WLAN card seems to be connected through an internal USB interface, and the card itself had shown up as a wireless network while installing Ubuntu. I have tried a few things already, but none of my Google searches have brought me to a working solution... I am quite new to the Linux system and only know a couple of terminal commands so far, so I have probably missed out on a few possible solutions. Maybe you can help me? Thank you very much in advance!
    A few minimal technical details:
    - AMD Turion 64 X2 Dual-Core Mobile Technology TL-50
    - NVIDIA GeForce Go 6150
    - SanDisk 64GB SSD
    - 2GB RAM DDR2 667
    - nForce chipset (I forget the version, but it is deducible from the GPU I guess)
    - WiFi: ZyDAS ZD1211B 802.11g
    Thanks a lot again! :)
    UPDATE: I tried around a little myself and found a guide on the Linux Mint forums that helped! I had already tried to install the Linux backport modules etc. What I finally did was update the Linux firmware and run the following command:
        echo "options acer_wmi wireless=1" | sudo tee /etc/modprobe.d/acer_wmi.conf
    and rebooted. Now I found and could connect to networks, but unfortunately the link quality was very bad, around 40 to 50, even though my router is running at high power and is only 6 meters away! I then switched a few channels, but that did not improve much. Before, under Windows, I had very good link quality and had the entire 16 Mbit/s internet connection at my disposal; now I can only get about 3-5 Mbit/s. Better than nothing, but still pretty bad! The "TX power" is fixed at 20 dBm and iwconfig says that "Power Management" is off... Maybe the power of the module is set too low?...
    UPDATE 2: I figured out that 20 dBm is a normal power output. I even tried to change the power using iwconfig wlan0 txpower INTEGERHERE but, obviously, my "card" does not support more than 20. More would probably be illegal as well, so I won't even use more than 20. I guess that I will have to figure out a way, or maybe just switch cards. Are the mainboard USB connectors on a laptop of the same kind as the standard external ones? If so, I could simply solder a micro wireless-N adapter onto the board :)


  • Additional new material WebLogic Community

    - by JuergenKress
    Oracle Cloud Application Foundation 12c Helps Customers Deliver Next-Generation Applications on a Mission-Critical Cloud Platform - In a recent online event, Oracle and industry speakers introduced Oracle Cloud Application Foundation 12c, including Oracle WebLogic 12.1.2 and Oracle Coherence 12.1.2. Read more.
    Team Spotlight: Mike Lehmann, Vice President of Product Management - Meet the team behind Oracle Fusion Middleware. In this edition, we speak to Mike Lehmann, Oracle's vice president of product management for Oracle Cloud Application Foundation, Oracle WebLogic Server, Oracle Coherence, Java Cloud Services, and Java Platform, Enterprise Edition. Read more.
    New and Free: Learn Oracle Application Development Framework Mobile Online at Your Convenience - Are you ready to go mobile? Check out this new tutorial from Oracle's ADF Academy: Developing Applications with Oracle Application Development Framework Mobile.
    New: Oracle JDeveloper 12c and Oracle Application Development Framework 12c - Announcing Oracle JDeveloper 12c and Oracle Application Development Framework 12c. New capabilities include HTML5, better Maven support, Git support, new Oracle ADF Faces components, improved REST support, Enterprise JavaBeans/Java Persistence API, and the latest support for Oracle WebLogic Server 12.1.2. Get more details and download.
    New: Oracle Enterprise Pack for Eclipse 12c - The best Eclipse-based tools for Oracle WebLogic and Oracle Coherence continue to get better. Check out the latest Oracle WebLogic and Oracle Coherence support, improved Oracle Application Development Framework support, Maven, and more.
    Register: Oracle WebLogic Devcast Series - Join us for the upcoming Oracle WebLogic Devcast webcast.
    Oracle GlassFish Server 3.1.2 and 2.1.1 updates & An Overview of JSON-P & Comprehensive Free Java EE 6 Video Tutorial!
    Java ME Embedded 3.3 and Java ME Software Development Kit (SDK) 3.3 Now Available - Optimized for microcontrollers and other resource-constrained devices, this release reduces "core plumbing" for an app, and includes more information about memory and network usage, critical for low-power apps.
    JDK 8 Early Access Releases now available.
    JDK 8 Early Access Developer Documentation - Get the latest documentation changes to the Java Developer Guides and the Java Tutorials.
    NetBeans IDE 7.4 Beta - This release extends HTML5 features to Java EE and PHP application development, introduces new support for hybrid HTML5 development on Android and iOS platforms, and previews support for JDK 8.
    WebLogic Partner Community - For regular information, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.


  • How can I fix "E: Internal Error, No file name for libc6"

    - by SMAOUH
    Hello all. Please, I need your help to fix this problem: I have 2 broken packages on my system and I can't reinstall them or perform any other operation (update, upgrade, install & remove apps...). Ubuntu 12.04.3. I have not found any solutions; please help me.
    smaouh@Linux:~$ sudo apt-get install -f
        [sudo] password for smaouh:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages were automatically installed and are no longer required:
          libopenal1 libpam-winbind libao-common gnome-exe-thumbnailer libqca2-plugin-ossl gir1.2-champlain-0.12 libmagickcore4 libmagickwand4
          libmagickcore4-extra libcapi20-3 python-unidecode libopenal-data liblqr-1-0 gir1.2-gtkchamplain-0.12 unixodbc wine-gecko2.21
          libchamplain-0.12-0 python-glade2 imagemagick-common libosmesa6 oss-compat gimp-help-common esound-common gimp-help-en libmpg123-0
          ttf-mscorefonts-installer imagemagick winbind libodbc1 fonts-droid fonts-unfonts-core libchamplain-gtk-0.12-0 libclutter-gtk-1.0-0
          gir1.2-gtkclutter-1.0
        Use 'apt-get autoremove' to remove them.
        0 upgraded, 0 newly installed, 0 to remove and 386 not upgraded.
        4 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        dpkg: error processing libc6 (--configure):
          libc6:amd64 2.15-0ubuntu10.5 cannot be configured because libc6:i386 is in a different version (2.15-0ubuntu10.4)
        dpkg: dependency problems prevent configuration of libc-dev-bin:
          libc-dev-bin depends on libc6 (>> 2.15); however:
            Package libc6 is not configured yet.
          libc-dev-bin depends on libc6 (<< 2.16); however:
            Package libc6 is not configured yet.
        dpkg: error processing libc-dev-bin (--configure):
          dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libc6-dev:
          libc6-dev depends on libc6 (= 2.15-0ubuntu10.5); however:
            Package libc6 is not configured yet.
          libc6-dev depends on libc-dev-bin (= 2.15-0ubuntu10.5); however:
            Package libc-dev-bin is not configured yet.
        dpkg: error processing libc6-dev (--configure):
          dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libc6-i386:
          libc6-i386 depends on libc6 (= 2.15-0ubuntu10.5); however:
            Package libc6 is not configured yet.
        dpkg: error processing libc6-i386 (--configure):
          dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
          libc6
          libc-dev-bin
          libc6-dev
          libc6-i386
        E: Sub-process /usr/bin/dpkg returned an error code (1)
    smaouh@Linux:~$


  • Developing for 2005 using VS2008!

    - by Vincent Grondin
    I joined a fairly large project recently and it has a particularity… Once finished, everything has to be delivered to the client under VS2005 using VB.Net, and it can target either framework 2.0 or 3.0… A long time ago the decision was taken to use VS2008 and to target framework 3.0, but people knew they would need to establish a few rules to ensure that each dev would use VS2008 as if it were VS2005… Why is that so? Well, simply because the compiler in VS2005 is different from the compiler inside VS2008… I thought it might be a good idea to note the things that you cannot use in VS2008 if you plan on going back to VS2005. Who knows, this might save someone the headache of going over all their code to fix errors…
    - Do not use LinQ keywords (from, in, select, orderby…).
    - Do not use LinQ standard operators under the form of extension methods.
    - Do not use type inference (in VB.Net you can switch it OFF in each project's properties). This means you cannot use XML literals.
    - Do not use nullable types under the following declarative form: Dim myInt As Integer? — but using Dim myInt As Nullable(Of Integer) is perfectly fine.
    - Do not test nullable types with "Is Nothing"; use myInt.HasValue instead.
    - Do not use lambda expressions (there are no lambda statements in VB9), so you cannot use the keyword "Function".
    - Pay attention not to use relaxed delegates, because this one is easy to miss in VS2008.
    - Do not use object initializers.
    - Do not use the "ternary If operator"… not the IIf method, but this one: If(condition, truepart, falsepart).
    As a side note, I talked about not using LinQ keywords or the extension methods, but this doesn't mean you can't use LinQ in this scenario. LinQ is perfectly accessible from inside VS2005. All you need to do is reference System.Core, use namespace System.Linq and use the class "Enumerable" as a helper class… This is one of the many classes containing various methods that VS2008 sees as extensions. The trick is you can use them too! Simply remember that the first parameter of the method is the object you want to query on, and then pass in the other parameters needed (see the sketch below)…
    That's pretty much all I see, but I could have missed a few… If you know other things that are specific to the VS2008 compiler and which do not work under VS2005, feel free to leave a comment and I'll modify my list accordingly (and notify our team here…)! Happy coding all!
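    To make the "Enumerable as a helper class" trick concrete, here is a minimal, hypothetical sketch — shown in C# 2.0-style syntax for brevity, though the call shape is the same in VB.Net (the names below are illustrative, not taken from the original post):

        using System;
        using System.Collections.Generic;
        using System.Linq; // requires a reference to System.Core

        class EnumerableAsHelper
        {
            static void Main()
            {
                List<int> numbers = new List<int>(new int[] { 1, 2, 3, 4, 5 });

                // No extension-method syntax, no lambdas: call Enumerable.Where as a
                // plain static method, passing the source sequence as the first
                // argument and an anonymous-method delegate as the predicate.
                IEnumerable<int> big = Enumerable.Where<int>(
                    numbers, delegate(int n) { return n > 2; });

                foreach (int n in big)
                    Console.WriteLine(n); // prints 3, 4, 5
            }
        }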


  • GPU hangs when switching graphics cards

    - by Lie Ryan
    I have a laptop (Dell Inspiron N4110) with switchable graphics.
    $ lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: ATI Technologies Inc NI Whistler [AMD Radeon HD 6600M Series] (rev ff)
    Normally, my laptop starts with both graphics cards enabled, which causes the laptop to turn very hot and the fan to become very noisy. I have been using a small script to disable the Radeon card. For some time I was quite happy with this arrangement. However, I have been having some issues with the Intel card (IGD): it often hangs randomly when running OpenGL apps, and so I want to give the Radeon card (DIS) another chance.
    I have never been able to switch to the Radeon card, but recently I found out that if I do a "delayed switching" (DDIS):
        # echo "DDIS" > /sys/kernel/debug/vgaswitcheroo/switch
        root@lieryan-dell-ubuntu:/sys/kernel/debug/vgaswitcheroo# cat switch
        0:IGD:+:Pwr:0000:00:02.0
        1:DIS: :Pwr:0000:01:00.0
    and then log off (i.e. to restart X), the screen switches to a pseudo-tty and then gets stuck there, frozen. In this situation the mouse and keyboard stop working, so I can't switch to another ptty. I tried ssh-ing in from another computer to salvage logs (dmesg at that point) and whatnot; I found out that when it freezes, the active graphics card is the AMD card:
        -- this is from ssh --
        # cat switch
        0:IGD: :Off:0000:00:02.0
        1:DIS:+:Pwr:0000:01:00.0
    but the GPU is apparently hung. Looking at dmesg gives:
        ...
        [ 1411.649974] vga_switcheroo: client 0 refused switch
        [ 1411.649985] vga_switcheroo: setting delayed switch to client 1
        [ 1423.911759] vga_switcheroo: processing delayed switch to 1
        [ 1424.006564] fbcon: Remapping primary device, fb1, to tty 1-63
        [ 1424.006799] i915: switched off
        [ 1424.840351] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id
        [ 1425.718088] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id
        [ 1426.622377] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id
        [ 1427.355683] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id
        [ 1428.193549] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id
        ...
    The "invalid framebuffer id" error is repeated many times over. I was able to successfully recover by switching back to the Intel card and restarting X from ssh, indicating that only the Radeon card has problems switching.
    System info:
        $ uname -a
        Linux lieryan-dell-ubuntu 3.0.0-14-generic #23-Ubuntu SMP Mon Nov 21 20:28:43 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description: Ubuntu 11.10
        Release: 11.10
        Codename: oneiric
    The laptop also does not have an option in the BIOS to select the graphics card, and the proprietary driver, fglrx, has never worked either: when I installed it through Jockey ("Additional Drivers"), glxinfo showed that rendering was still being done by Mesa, the /sys/kernel/debug/vgaswitcheroo directory went missing, and the driver crashes with a traceback if I use xorg.conf to tell X to use fglrx. Does anyone have any idea whether it is possible to use this AMD card with either the radeon or the fglrx driver? Logs: dmesg


  • Token based Authentication and Claims for Restful Services

    - by Your DisplayName here!
    WIF as it exists today is optimized for web applications (passive/WS-Federation) and SOAP-based services (active/WS-Trust). While there is limited support for WCF WebServiceHost-based services (for standard credential types like Windows and Basic), there is no ready-to-use plumbing for RESTful services that do authentication based on tokens. This is not an oversight by the WIF team; rather, the REST services security world is currently changing rapidly - and that's by design. There are a number of intermediate solutions, emerging protocols and token types, as well as some already deprecated ones, so it didn't make sense to bake all that into the core feature set of WIF. But after all, the F in WIF stands for Foundation. So just like the WIF APIs integrate tokens and claims into other hosts, this is also (easily) possible with RESTful services. Here's how.
    HTTP Services and Authentication
    Unlike SOAP services, in the REST world there is no (over)specified security framework like WS-Security. Instead, standard HTTP means are used to transmit credentials, and SSL is used to secure the transport and data in transit. In most cases the HTTP Authorization header is used to transmit the security token (this can be as simple as a username/password, up to issued tokens of some sort). The Authorization header consists of the actual credential (consider this opaque from a transport perspective) as well as a scheme. The scheme is some string that gives the service a hint what type of credential was used (e.g. Basic for basic authentication credentials). HTTP also includes a way to advertise the right credential type back to the client; for this, the WWW-Authenticate response header is used.
    So for token-based authentication, the service simply needs to read the incoming Authorization header, extract the token, then parse and validate it. After the token has been validated, you also typically want some sort of client identity representation based on the incoming token. This holds regardless of the technology the actual service was built with. In ASP.NET (MVC) you could use an HttpModule or an ActionFilter. In (today's) WCF, you would use the ServiceAuthorizationManager infrastructure. The nice thing about using WCF's native extensibility points is that you get self-hosting for free.
    This is where WIF comes into play. WIF has ready-to-use infrastructure built in that just needs to be plugged into the corresponding hosting environment:
    - Representation of identity based on claims. This is a very natural way of translating a security token (and again I mean this in the widest sense - it could also be a username/password) into something our applications can work with.
    - Infrastructure to convert tokens into claims (called security token handlers)
    - Claims transformation
    - Claims-based authorization
    So much for the theory. In the next post I will show you how to implement that for WCF - including full source code and samples. (Wanna learn more about federation, WIF, claims, tokens etc.? Click here.)
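    In the meantime, here is a minimal, hypothetical sketch of what that WCF plumbing can look like — the class and helper names (SimpleTokenAuthorizationManager, ValidateToken) are illustrative assumptions, not the post's forthcoming code, and the later System.Security.Claims types are used for brevity:

        using System;
        using System.Net;
        using System.Security.Claims;
        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.Threading;

        // Hypothetical sketch: pull a bearer token out of the Authorization
        // header, validate it, and surface the caller as a claims principal.
        class SimpleTokenAuthorizationManager : ServiceAuthorizationManager
        {
            protected override bool CheckAccessCore(OperationContext operationContext)
            {
                var request = (HttpRequestMessageProperty)operationContext
                    .RequestContext.RequestMessage.Properties[HttpRequestMessageProperty.Name];

                string header = request.Headers[HttpRequestHeader.Authorization];
                if (string.IsNullOrEmpty(header) || !header.StartsWith("Bearer "))
                    return false; // no token, no access

                string token = header.Substring("Bearer ".Length);

                // ValidateToken is an assumed placeholder standing in for a real
                // security token handler; it would parse/validate and emit claims.
                ClaimsIdentity identity = ValidateToken(token);
                if (identity == null) return false;

                // Make the identity visible to the service (real plumbing would use
                // principalPermissionMode="Custom" and the "Principal" message property).
                Thread.CurrentPrincipal = new ClaimsPrincipal(identity);
                return true;
            }

            private static ClaimsIdentity ValidateToken(string token)
            {
                // Demo-only validation: accept one fixed token.
                if (token != "letmein") return null;
                return new ClaimsIdentity(
                    new[] { new Claim(ClaimTypes.Name, "demo-user") }, "Bearer");
            }
        }

    You would register such a manager via the serviceAuthorization behavior element (its serviceAuthorizationManagerType attribute), which works the same way for self-hosted services.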


  • BIP 10.1.3.4.x June 2010 Update Available

    - by Tim Dexter
    A new patchset for 10.1.3.4.0 and 10.1.3.4.1 is available on Metalink. Some notes:
    - The patch number is 9791839.
    - This patchset includes 28 new bug fixes since the last patchset release on March 31.
    - This is a cumulative update that includes all the fixes and enhancements from previous updates. The patch supersedes the other two updates.
    - Install instructions are in the readme inside the patch.
    There is also a new BIP client patch available, 9821068. No new template-building features to my knowledge, but there is an update to the Template Viewer to allow you to test and debug your shiny new Excel templates.
    Server
    - 8529759 XMLP_TEMPLATE_DESIGNER CANNOT SAVE / UPLOAD TEMPLATE
    - 8566455 BI PUBLISHER SCHEDULER DOES NOT START WITH JNDI DATA SOURCE
    - 9295667 RESPONSE OF GETSCHEDULEDREPORTINFO RETURNS STATUS AS 'UNKNOWN' INSTEAD OF 'SCHED
    - 9542413 UNABLE TO CREATE A NEW TEMPLATE FROM UI
    - 9546137 EXCEL ANALYZER TEMPLATE FAILS FOR A STRUCTURED XML WHEN IT IS UPLOADED
    - 9556338 SIEBEL - BIP PARAMETERS SORT ORDER
    - 9560562 BI PUBLISHER CACHE DIRECTORY FILLING UP AND POINTING TO INVALID DIRECTORY
    - 9646599 USER ROLE DEFINED AS PRIMARYGROUP IN ACTIVEDIRECTORY GROUP ARE NOT RECOGNIZED
    - 9664768 ER: NEED TO BIND USER ATTRIBUTE VALUES DEFINED IN ACTIVEDIRECTORY IN DATA QUERY
    - 9665075 BI PUBLISHER AFTER 9546699 NOTIFICATIONS FOR REPORTS FAIL
    - 9669973 ER: NEED TO SUPPORT PRE-PROCESSING XML WITH XSL FOR EXCEL TEMPLATE
    - 9704401 ER: NEED TO SUPPORT DEFAULT GROUP FOR ALL USERS IN LDAP/AD SECURITY
    - 9711899 SEARCH PARAMETER IS NOT VISIBLE WHEN SCHEDULE A REPORT
    - 9753736 SOME ROLES FROM ACTIVEDIRECTORY ARE NOT LISTED IN ADMIN ROLE-FOLDER MAPPING
    - 9771354 MULTIPLE PARAMETERS IN 10.1.3.4.1 DATA TEMPLATE ACT DIFFERENTLY FROM 10.1.3.
    - 9772982 "REFRESH OTHER PARAMETERS ON CHANGE" DOESN'T WORK PROPERLY
    Core
    - 8599646 ER: EXTRA SPACE ADDED BELOW IMAGE IN A TABLE CELL OF TEMPLATE IN FIREFOX
    - 9377593 SOME ROWS HEIGHT IN HTML/EXCEL OUTPUT ARE TOO BIG IN BI PUBLISHER
    - 9487030 NAVIGATION TREE REPEATING TWICE IN PDF DOCUMENT CREATED BY BI PUBLISHER
    - 9509432 PERFORMANCE ISSUE WHEN USING PDF TEMPLATE
    - 9534424 PS: DOCUMENT-REPEAT-FULLPATH-ELEMENTNAME SHOULDNT USE DOT "." AS PATH SEPARATOR
    - 9553360 FORMPROCESSOR CANNOT PARSE SOME PDF TEMPLATES
    - 9554959 TEXT IN AUTOSHAPE IS NOT PROPERLY CUT OFF FOR LINE WRAPPING
    - 9569417 AFTER APPLYING PATCH 9509432 PDF TEMPLATES WITH DBDRV PRODUCE NO OUTPUT
    - 9571670 ER: EXCEL TEMPLATE TO SUPPORT XSLT LOGIC AND XSL CUSTOM EXTENSIONS
    - 9589809 XSL:CALL-TEMPLATE IS MISSING IN GENERATED XSL FILE
    - 9605920 BOOKMARK TESTCASE FAILED DUE TO ER9283933
    - 9689634 PRINT FLOW CHART USING ACROSS 3 DOWN 0 GIVES EXTRA BLANK PAGES
    You might have noticed some fixes and enhancements to the Excel templates, so I can get back on those now. There is a part two to the Mapviewer BIP mashup coming... I just need another 4 hours in the day to squeeze it in.


  • Designing a completely new database/GUI solution for my company

    - by user1277304
    I'm no expert when it comes to everything Visual Studio 2010 and utilizing SQL Server 2008. I'm sure some of my personal projects, built for personal use, would get laughed off the face of the planet, but SQL CE has been the solution I was looking for for those home types of projects. And they work, flawlessly. Now I feel it's time to step up to the big league. I want to develop a complete, unified, module-based solution for the company I'm working for. We're still using stuff from the 80s, for goodness' sake! I use Excel and query the ancient database on my own because I can't stand the GUI. Nothing against people of age, but the IDE our programmers are using is from the stone age, and they use APL of all things with it. I've yet to see a radio button control anywhere in the GUI where it would make sense.
    Anyway, I want to do this right from the ground up. I'm by no means a newbie when it comes to programming in .NET 2010; however, I want the entire solution to be professionally done. I want version control, test projects, project flow, SQL Server 2008 integration and all the bells and whistles that come with that. I know for a fact that if we had something like that running, not only would development costs and time be slashed fourfold, but the possibilities for expansion and performance would skyrocket. (Between the GUI and our DB engine, it can only use ONE CORE! ONE! It's 2012, for goodness' sake!)
    Our business is growing and our current ancient solution just can't keep up, and I'd hate to see our business go down in flames because our programmer is stuck in the 80s and refuses to use anything current. So I ask you guys, the experts and know-it-alls: where do I start? Are there any gems of good books out there in the haystack of all the "for dummies" type of deals? I already have several people backing me in this endeavor, and while it may seem brash to just usurp the current programmers, I'm doing this for the company as a whole.

    Read the article

  • C++ Accelerated Massive Parallelism

    - by Daniel Moth
    At AMD's Fusion conference, Herb Sutter announced in his keynote session a technology that our team has been working on that we call C++ Accelerated Massive Parallelism (C++ AMP), and during the keynote I showed a brief demo of an app built with our technology. After the keynote, I went deeper into the technology in my breakout session. If you read both those abstracts, you'll get some information about what C++ AMP is, though they are not too explicit since we published them before the technology was announced. You can find the official online announcement at Soma's blog post. Here, I just wanted to capture the key points about C++ AMP that can serve as an introduction and an FAQ. So, in no particular order, C++ AMP:

    - lowers the barrier to entry for heterogeneous hardware programmability and brings performance to the mainstream, without sacrificing developer productivity or solution portability.
    - is designed not only to help you address today's massively parallel hardware (i.e. GPUs and APUs), but also future-proofs your code investments with a forward-looking design.
    - is part of Visual C++. You don't need to use a different compiler or learn different syntax.
    - is modern C++. Not C or some other derivative.
    - is integrated and supported fully in Visual Studio vNext. Editing, building, debugging, profiling and all the other goodness of Visual Studio work well with C++ AMP.
    - provides an STL-like library as part of the existing concurrency namespace, delivered in the new amp.h header file.
    - makes it extremely easy to work with large multi-dimensional data on heterogeneous hardware, in a manner that exposes parallelization.
    - introduces only one core C++ language extension.
    - builds on DirectX (and DirectCompute in particular), which offers a great hardware abstraction layer that is ubiquitous and reliable. The architecture is such that this point can be thought of as an implementation detail that does not surface to the API layer.

    Stay tuned on my blog for more over the coming months, where I will switch from just talking about C++ AMP to showing you how to use the API with code examples… Comments about this post welcome at the original blog.
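
    To make the "STL-like library plus one language extension" points concrete, here is a minimal sketch of the kind of code the amp.h library enables. It is an illustration based on the publicly documented API (array_view, parallel_for_each, and the restrict(amp) modifier), not a sample from the session, and the exact signatures may have shifted between the announcement and the shipped product:

        #include <amp.h>
        #include <vector>
        using namespace concurrency;

        // Element-wise vector addition, offloaded to an accelerator such as a GPU.
        void add_arrays(const std::vector<int>& a, const std::vector<int>& b, std::vector<int>& sum)
        {
            // array_view wraps existing host data for use on the accelerator.
            array_view<const int, 1> av(static_cast<int>(a.size()), a);
            array_view<const int, 1> bv(static_cast<int>(b.size()), b);
            array_view<int, 1> sv(static_cast<int>(sum.size()), sum);
            sv.discard_data(); // the initial contents of 'sum' need not be copied over

            // restrict(amp) is the single core language extension: it marks the
            // lambda as compilable for (and restricted to) accelerator hardware.
            parallel_for_each(sv.extent, [=](index<1> idx) restrict(amp)
            {
                sv[idx] = av[idx] + bv[idx];
            });

            sv.synchronize(); // copy results back into 'sum' on the host
        }

    Everything except restrict(amp) is ordinary modern C++ – lambdas, templates, and an STL-style container view – which is the productivity argument in a nutshell.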

    Read the article

  • CAM v2.0 ships – all new foundation version

    - by drrwebber
    The latest release of the CAM editor toolset is now available on Sourceforge.net – search for NIEM. In this all-new version, support from Oracle has enabled a transformation of the editor's underlying Java framework, resulting in a 3x performance improvement and 50% better memory utilization. The results of nearly six months of improvements are catalogued in the release notes: http://sourceforge.net/projects/camprocessor/files/CAM%20Editor/Releases/2.0/CAM_Editor_2-0_Release_Notes.pdf/download

    However, here I'd like to talk about the strategic vision and highlight specific new go-to features that make a difference for exchange schema designers, with a focus on the NIEM community. So why is this a foundation version? Basically, the new drag-and-drop designer tool allows you to tailor your own dictionary collection of components and then simply select and position those into your resulting exchange structure. This is true global reuse, enabled from a canonical domain dictionary collection. So instead of grappling with XSD schema syntax or UML model nuances, this is straightforward, direct WYSIWYG visual engineering using familiar sets of business components. The toolkit then writes the complex XSD schema for you, along with test samples, documentation, XMI/UML models, mind maps and more.

    So how do you get a set of business components? The toolkit allows you to harvest these from existing schema collections or enterprise data models or, as in the case of NIEM, existing domain dictionary collections. I've been using this for the latest IEEE/OASIS/NIST initiative on a Common Data Format (CDF) for elections management systems. You can download those from OASIS and see how this can transform how you build actual business exchanges – improving quality, consistency and usability – and dramatically enabling automated generation of artifacts you only dreamed of before, such as a model of your entire major exchange collection's components. http://www.oasis-open.org/committees/documents.php?wg_abbrev=election

    So what we have here is a foundation version – setting the scene and the basis for changing how people generate and manage information exchanges. A foundation built using the OASIS CAM standard, combined with aspects of the NIEM Naming and Design Rules, the UN/CEFACT Core Components specifications, and emerging work on the OASIS CIQ name and address and ANSI/ISO code list schemas. We still have a raft of work to do to integrate this into SOA best practices and extend the dictionary capabilities to assist true community development – answering questions such as:

    - How good is my canonical component collection?
    - How much reuse is really occurring?
    - What inconsistencies and extensions are there in the dictionary components?

    Expect us to begin tackling these areas now that the foundation is in place. The immediate need is to develop training and self-start materials, so we will be focusing there for the next couple of months, especially leading up to the IJIS industry event in July in New Jersey and the NIEM NTE event in August in Philadelphia. http://sourceforge.net/projects/camprocessor

    Read the article

  • Five Best Practices for Going Mobile

    - by kellsey.ruppel
    76% of IT decision makers indicate mobile trends will have a high to extremely high impact on their organization. Has your organization gone mobile? Looking for some ideas on how to get started? John Brunswick shares his best practices for going mobile. Mobile technology has gone from nice-to-have to a cornerstone of user engagement. Mobile access enables social networking, decision support, purchasing, content consumption, and location-based searching, extending experiences beyond what is available in traditional desktop computing. Organizations rushing to ensure their brand's mobile availability may have taken a tactical approach to implementation, but strategically approaching mobile can enable greater returns on a similar investment and subsequent mobile projects. Here are some strategic considerations for delivering products, services, and information to mobile constituents.

    - Who, Why, and What? Ask yourself these key questions: who are you attempting to engage through the channel, and why are they engaging you through this channel? What experience will satisfy their needs? What outcome will support your core business? Will you be informing and/or transacting with this person?
    - Mobile Behavior. Mobile users generally engage for a very specific purpose. Ensure that access to information, services, and products is streamlined. Arriving on a mobile site through search only to be asked to search again frustrates users.
    - Mobile Is Broad. After establishing the audience and goal, review the technology requirements to support them. Do you need a mobile website, a native mobile application, or both? Do you need to support multiple devices? Know the difference between native mobile and mobile Web.
    - Social Strategy. Users are more likely to trust reviews from peers than marketing information from a vendor. If you are selling products or services, be sure to make social integration part of your strategy.
    - Content Management. Consider a shared content platform strategy for Web and mobile projects. Fresh, consistent content is important for high-quality experiences.

    Read more from John Brunswick. We'll also be talking mobile strategies and how you can transform your portal experience and optimize online engagement – making your portals more interactive and more engaging across multiple channels – in a webcast tomorrow. We hope you'll join us!

    Read the article

  • Implementing the Reactive Manifesto with Azure and AWS

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/31/implementing-the-reactive-manifesto-with-azure-and-aws.aspx

    My latest Pluralsight course, Implementing the Reactive Manifesto with Azure and AWS, has just been published! I'd planned to do a course on dual-running a messaging-based solution in Azure and AWS for super-high availability and scale, and the Reactive Manifesto encapsulates exactly what I wanted to do. A "reactive" application describes an architecture which is inherently resilient and scalable, being event-driven at the core and using asynchronous communication between components. In the course, I compare that architecture to a classic n-tier approach, and go on to build out an app which exhibits all the reactive traits: responsive, event-driven, scalable and resilient. I use a suite of technologies which are enablers for all those traits:

    - ASP.NET SignalR for presentation, with server push notifications to the user
    - Messaging in the middle layer for asynchronous communication between presentation and compute: Azure Service Bus Queues and Topics, AWS Simple Queue Service, and AWS Simple Notification Service
    - MongoDB at the storage layer for easy HA and scale, with minimal locking under load

    Starting with a couple of console apps to demonstrate message sending, I build the solution up over 7 modules, deploying to Azure and AWS and running the app across both clouds concurrently for the whole stack - web servers, messaging infrastructure, message handlers and database servers. I demonstrate failover by killing off bits of infrastructure, and show how a reactive app deployed across two clouds can survive machine failure, data centre failure and even whole cloud failure. The course finishes by configuring auto-scaling in AWS and Azure for the compute and presentation layers, and running a load test with blitz.io. The test pushes masses of load into the app, which is deployed across four data centres in Azure and AWS, and the infrastructure scales up seamlessly to meet the load. The blitz report is pretty impressive: a 99.9% success rate for hits to the website, with the potential to serve over 36,000,000 hits per day - all from a few hours' build time and a fairly limited set of auto-scale configurations. When the load stops, the infrastructure scales back down again to a minimal set of servers for high availability, so the app doesn't cost much to host unless it's getting a lot of traffic. This is my third course for Pluralsight, with Nginx and PHP Fundamentals and Caching in the .NET Stack: Inside-Out released earlier this year. Now that it's out, I'm starting on the fourth one, which is focused on C#, and should be out by the end of the year.
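
    The middle-layer messaging is what makes the whole thing reactive: the presentation tier publishes an event and returns immediately, and a handler consumes it whenever it has capacity, so neither side blocks on the other. The course implements this with Azure Service Bus and AWS SQS/SNS from .NET; purely to illustrate the shape of the pattern, here is a tiny in-process stand-in written in standard C++ (my own simplification, not code from the course):

        #include <condition_variable>
        #include <iostream>
        #include <mutex>
        #include <queue>
        #include <string>
        #include <thread>

        // A toy in-process queue standing in for Azure Service Bus or AWS SQS.
        class MessageQueue {
        public:
            void send(std::string msg) {
                {
                    std::lock_guard<std::mutex> lock(mutex_);
                    messages_.push(std::move(msg));
                }
                ready_.notify_one(); // wake a waiting consumer
            }

            std::string receive() {
                std::unique_lock<std::mutex> lock(mutex_);
                ready_.wait(lock, [this] { return !messages_.empty(); });
                std::string msg = std::move(messages_.front());
                messages_.pop();
                return msg;
            }

        private:
            std::queue<std::string> messages_;
            std::mutex mutex_;
            std::condition_variable ready_;
        };

        int main() {
            MessageQueue queue;

            // Compute layer: a message handler that reacts to events as they arrive.
            std::thread handler([&queue] {
                for (int i = 0; i < 3; ++i) {
                    std::cout << "handled: " << queue.receive() << "\n";
                }
            });

            // Presentation layer: fire events and carry on without waiting for the
            // handler - this decoupling is what keeps the front end responsive.
            queue.send("order-placed:1001");
            queue.send("order-placed:1002");
            queue.send("order-placed:1003");

            handler.join();
            return 0;
        }

    Swap the in-process queue for a durable cloud queue and the same shape gives you the resilience and scale traits too, because producers and consumers can fail, restart, and scale independently.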

    Read the article
