Search Results

Search found 6488 results on 260 pages for 'global thermonuclear war'.


  • A temporary disagreement

    - by Tony Davis
    Last month, Phil Factor caused a furore amongst some MVPs with an article that attempted to offer simple advice to developers regarding the use of table variables, versus local and global temporary tables, in their code. Phil makes clear that table variables come with some fairly major limitations (no distribution statistics, no parallel query plans for queries that modify table variables) but goes on to suggest that for reasonably small-scale strategic uses, and with a bit of due care and testing, table variables are a "good thing". Not everyone shares his opinion; in fact, I imagine he was rather aghast to learn that there were those who felt his article was akin to pulling the pin out of a grenade and tossing it into the database; table variables should be avoided in almost all cases, according to their advice, in favour of temp tables. In other words, a fairly major feature of SQL Server should be more-or-less 'off limits' to developers.

    The problem with temp tables is that, because they are scoped either to the procedure or to the connection, it is easy to allow them to hang around for too long, eating up precious memory and bulking up the shared tempdb database. Unless they are explicitly dropped, global temporary tables, and local temporary tables created within a connection rather than within a stored procedure, will persist until the connection is closed or, with connection pooling, until the connection is reused. It's also quite common with ASP.NET applications to have connection leaks, as Bill Vaughn explains in his chapter in the "SQL Server Deep Dives" book, meaning that the web page exits without closing the connection object, perhaps due to an error condition. The connection will then hang around in the heap for what might be hours before it is picked up by the garbage collector. Table variables are much safer in this regard, since they are batch-scoped and so are cleaned up automatically once the batch is complete, which also means that they are intuitive for developers to use because they conform to scoping rules that are closer to those in procedural code. On the surface, then, they are an ideal way to deal with issues related to tempdb memory hogging.

    So why did Phil qualify his recommendation to use table variables? This is another of those cases where, like scalar UDFs and multi-statement table-valued UDFs, developers can sometimes get into trouble with a relatively benign-looking feature, due to the way it's been implemented in SQL Server. Once again the biggest problem is how they are handled internally by the SQL Server query optimizer, which can make very poor choices for JOIN orders and so on, in the absence of statistics, especially when joining to tables with highly skewed data. The resulting execution plans can be horrible, as will be the resulting performance. If the JOIN is to a large table, that will hurt. Ideally, Microsoft would simply fix this issue so that developers can't get burned in this way; table variables have been around since SQL Server 2000, so Microsoft has had a bit of time to get it right. As I commented in regard to UDFs, when developers discover issues like this with such standard features, the database becomes an alien planet to them, where death lurks around each corner, and they continue to avoid these "killer" features years after the problems have eventually been resolved.

    In the meantime, what is the right approach? Is it to say "hammers can kill, don't ever use hammers", or is it to try to explain, as Phil's article and follow-up blog post have tried to do, what the feature was intended for, why care must be applied in its use, and so enable developers to make properly informed decisions, without requiring them to delve deep into the inner workings of SQL Server? Cheers, Tony.

    Read the article

  • Merge sort versus quick sort performance

    - by Giorgio
    I have implemented merge sort and quick sort using C (GCC 4.4.3 on Ubuntu 10.04 running on a 4 GB RAM laptop with an Intel DUO CPU at 2GHz) and I wanted to compare the performance of the two algorithms. The prototypes of the sorting functions are:

        void merge_sort(const char **lines, int start, int end);
        void quick_sort(const char **lines, int start, int end);

    i.e. both take an array of pointers to strings and sort the elements with index i : start <= i <= end. I have produced some files containing random strings with an average length of 4.5 characters. The test files range from 100 lines to 10000000 lines. I was a bit surprised by the results because, even though I know that merge sort is O(n log(n)) while quick sort is O(n^2) in the worst case, I have often read that on average quick sort should be as fast as merge sort. However, my results are the following. Up to 10000 strings, both algorithms perform equally well; for 10000 strings, both require about 0.007 seconds. For 100000 strings, merge sort is slightly faster with 0.095 s against 0.121 s. For 1000000 strings merge sort takes 1.287 s against 5.233 s of quick sort. For 5000000 strings merge sort takes 7.582 s against 118.240 s of quick sort. For 10000000 strings merge sort takes 16.305 s against 1202.918 s of quick sort.

    So my question is: are my results as expected, meaning that quick sort is comparable in speed to merge sort for small inputs but, as the size of the input data grows, the fact that its worst-case complexity is quadratic will become evident?

    Here is a sketch of what I did. In the merge sort implementation, the partitioning consists of calling merge sort recursively, i.e.

        merge_sort(lines, start, (start + end) / 2);
        merge_sort(lines, 1 + (start + end) / 2, end);

    Merging of the two sorted sub-arrays is performed by reading the data from the array lines and writing it to a global temporary array of pointers (this global array is allocated only once). After each merge the pointers are copied back to the original array. So the strings are stored once, but I need twice as much memory for the pointers. For quick sort, the partition function chooses the last element of the array to sort as the pivot and scans the previous elements in one loop. After it has produced a partition of the type

        start ... {elements <= pivot} ... pivotIndex ... {elements > pivot} ... end

    it calls itself recursively:

        quick_sort(lines, start, pivotIndex - 1);
        quick_sort(lines, pivotIndex + 1, end);

    Note that this quick sort implementation sorts the array in-place and does not require additional memory, therefore it is more memory efficient than the merge sort implementation. So my question is: is there a better way to implement quick sort that is worthwhile trying out? If I improve the quick sort implementation and perform more tests on different data sets (computing the average of the running times on different data sets) can I expect better performance of quick sort with respect to merge sort?

    EDIT: Thank you for your answers. My implementation is in-place and is based on the pseudo-code I have found on Wikipedia in the section "In-place version": function partition(array, 'left', 'right', 'pivotIndex'), where I choose the last element in the range to be sorted as the pivot, i.e. pivotIndex := right. I have checked the code over and over again and it seems correct to me. In order to rule out the case that I am using the wrong implementation I have uploaded the source code to GitHub (in case you would like to take a look at it). Your answers seem to suggest that I am using the wrong test data. I will look into it and try out different test data sets. I will report as soon as I have some results.
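    One plausible culprit, given the test data, is duplicate keys: a file of 10000000 random strings averaging 4.5 characters necessarily contains many repeats, and a partition of the form {elements <= pivot} / {elements > pivot} sends every copy of the pivot to one side, driving quick sort toward its quadratic worst case. Below is a minimal sketch of three-way (Dutch national flag) partitioning, which groups keys equal to the pivot; it is written in Python for brevity (the question's code is C, and the names and test harness here are illustrative, not the poster's):

        import random

        def quick_sort_3way(lines, start, end):
            # Three-way partition: [ < pivot | == pivot | > pivot ].
            # Keys equal to the pivot are excluded from both recursive calls,
            # so heavy duplication no longer causes quadratic behaviour.
            if start >= end:
                return
            pivot = lines[random.randint(start, end)]  # random pivot also guards against pre-sorted input
            lt, i, gt = start, start, end
            while i <= gt:
                if lines[i] < pivot:
                    lines[lt], lines[i] = lines[i], lines[lt]
                    lt += 1
                    i += 1
                elif lines[i] > pivot:
                    lines[i], lines[gt] = lines[gt], lines[i]
                    gt -= 1
                else:
                    i += 1
            quick_sort_3way(lines, start, lt - 1)  # elements strictly below the pivot
            quick_sort_3way(lines, gt + 1, end)    # elements strictly above the pivot

        # Duplicate-heavy input, analogous to short random strings:
        data = [random.choice(["ab", "cd", "ef", "gh"]) for _ in range(100000)]
        quick_sort_3way(data, 0, len(data) - 1)
        assert data == sorted(data)

    The same transformation applied to the C partition function (plus a non-fixed pivot) would be the natural first experiment before re-running the benchmarks.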

    Read the article

  • Converting from mp4 to Xvid avi using avconv?

    - by Ricardo Gladwell
    I normally use avidemux to convert mp4s to Xvid AVI for my Philips Streamium SLM5500. Normally I select MPEG-4 ASP (Xvid) at Two Pass with an average bitrate of 1500 kb/s for video and AC3 (lav) audio, and it converts correctly. However, I'm trying to use avconv so I can automate the process with a script, but when I do this the video stutters and stops playing part way through. I have a suspicion it's something to do with a faulty audio conversion. The commands I'm using are as follows:

        avconv -y -i video.mp4 -pass 1 -vtag xvid -c:a ac3 -b:a 128k -b:v 1500k -f avi /dev/null
        avconv -y -i video.mp4 -pass 2 -vtag xvid -c:a ac3 -b:a 128k -b:v 1500k -f avi video.avi

    There is a bewildering array of arguments for avconv. Is there something I'm doing wrong? Is there a way I can script avidemux from a headless server? Please see the command line output:

    $ avconv -y -i video.mp4 -pass 1 -vtag xvid -an -b:v 1500k -f avi /dev/null
    avconv version 0.8.5-6:0.8.5-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers built on Jan 24 2013 14:49:20 with gcc 4.7.2
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4': Metadata: major_brand : isom minor_version : 1 compatible_brands: isomavc1 creation_time : 2013-02-04 13:53:38 Duration: 00:44:09.16, start: 0.000000, bitrate: 669 kb/s
    Stream #0.0(und): Video: h264 (High), yuv420p, 720x404 [PAR 1:1 DAR 180:101], 538 kb/s, 25 fps, 25 tbr, 100 tbn, 50 tbc Metadata: creation_time : 2013-02-04 13:53:38
    Stream #0.1(und): Audio: ac3, 44100 Hz, stereo, s16, 127 kb/s Metadata: creation_time : 2013-02-04 13:53:42
    [buffer @ 0x7f4c40] w:720 h:404 pixfmt:yuv420p
    Output #0, avi, to '/dev/null': Metadata: major_brand : isom minor_version : 1 compatible_brands: isomavc1 creation_time : 2013-02-04 13:53:38 ISFT : Lavf53.21.1
    Stream #0.0(und): Video: mpeg4, yuv420p, 720x404 [PAR 1:1 DAR 180:101], q=2-31, pass 1, 1500 kb/s, 25 tbn, 25 tbc Metadata: creation_time : 2013-02-04 13:53:38
    Stream mapping: Stream #0:0 -> #0:0 (h264 -> mpeg4)
    Press ctrl-c to stop encoding
    frame=66227 fps=328 q=2.0 Lsize= 0kB time=2649.16 bitrate= 0.0kbits/s video:401602kB audio:0kB global headers:0kB muxing overhead -100.000000%

    $ avconv -y -i video.mp4 -pass 2 -vtag xvid -c:a ac3 -b:a 128k -b:v 1500k -f avi video.avi
    avconv version 0.8.5-6:0.8.5-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers built on Jan 24 2013 14:49:20 with gcc 4.7.2
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4': Metadata: major_brand : isom minor_version : 1 compatible_brands: isomavc1 creation_time : 2013-02-04 13:53:38 Duration: 00:44:09.16, start: 0.000000, bitrate: 669 kb/s
    Stream #0.0(und): Video: h264 (High), yuv420p, 720x404 [PAR 1:1 DAR 180:101], 538 kb/s, 25 fps, 25 tbr, 100 tbn, 50 tbc Metadata: creation_time : 2013-02-04 13:53:38
    Stream #0.1(und): Audio: ac3, 44100 Hz, stereo, s16, 127 kb/s Metadata: creation_time : 2013-02-04 13:53:42
    [buffer @ 0x12b4f00] w:720 h:404 pixfmt:yuv420p
    Incompatible sample format 's16' for codec 'ac3', auto-selecting format 'flt'
    [mpeg4 @ 0x12b3ec0] [lavc rc] Using all of requested bitrate is not necessary for this video with these parameters.
    Output #0, avi, to 'video.avi': Metadata: major_brand : isom minor_version : 1 compatible_brands: isomavc1 creation_time : 2013-02-04 13:53:38 ISFT : Lavf53.21.1
    Stream #0.0(und): Video: mpeg4, yuv420p, 720x404 [PAR 1:1 DAR 180:101], q=2-31, pass 2, 1500 kb/s, 25 tbn, 25 tbc Metadata: creation_time : 2013-02-04 13:53:38
    Stream #0.1(und): Audio: ac3, 44100 Hz, stereo, flt, 128 kb/s Metadata: creation_time : 2013-02-04 13:53:42
    Stream mapping: Stream #0:0 -> #0:0 (h264 -> mpeg4) Stream #0:1 -> #0:1 (ac3 -> ac3)
    Press ctrl-c to stop encoding
    Input stream #0:1 frame changed from rate:44100 fmt:s16 ch:2 to rate:44100 fmt:flt ch:2
    frame=66227 fps=284 q=2.2 Lsize= 458486kB time=2649.13 bitrate=1417.8kbits/s video:413716kB audio:41393kB global headers:0kB muxing overhead 0.741969%
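    Since the goal is automation, the two passes are easy to drive from a script. Here is a minimal Python sketch using subprocess with exactly the flags quoted in the question (untested against the SLM5500, so treat it as a starting point for scripting, not a verified fix for the stuttering):

        import subprocess

        def convert(src, dst, vbitrate="1500k", abitrate="128k"):
            # Two-pass Xvid AVI conversion via avconv, using the flags from the question.
            common = ["-vtag", "xvid", "-c:a", "ac3", "-b:a", abitrate,
                      "-b:v", vbitrate, "-f", "avi"]
            # Pass 1: gathers rate-control statistics; the output itself is discarded.
            subprocess.check_call(["avconv", "-y", "-i", src, "-pass", "1"] + common + ["/dev/null"])
            # Pass 2: the real encode, reusing the pass-1 statistics.
            subprocess.check_call(["avconv", "-y", "-i", src, "-pass", "2"] + common + [dst])

        convert("video.mp4", "video.avi")

    check_call raises an exception if either pass exits non-zero, so a batch script over a directory of mp4s will stop at the first failed conversion rather than silently producing broken files.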

    Read the article

  • Announcing Oracle Knowledge 8.5: Even Superheroes Need Upgrades

    - by Richard Lefebvre
    It’s no secret that we like Iron Man here at Oracle. We've certainly got stuff in common: one of the world’s largest technology companies and one of the world’s strongest technology-driven superheroes. If you've seen the recent Iron Man movies, you might have even noticed some of our servers sitting in Tony Stark’s lab. Heck, our CEO made a cameo appearance in one of the movies. Yeah, we’re fans. Especially as Iron Man is a regular guy with some amazing technology – like us. But like all great things, even superheroes need upgrades, whether it’s their suit, their car or their space station.

    Oracle certainly has its share of advanced technology. For example, Oracle acquired InQuira in 2011 after years of watching the company advance the science of Knowledge Management. And it was some extremely super technology. At that time, Forrester’s Kate Leggett wrote about it in ‘Standalone Knowledge Management Is Dead With Oracle's Announcement To Acquire InQuira’, saying ‘Knowledge, accessible via web self-service or agent UIs, is a critical customer service component for industries fielding repetitive questions about policies, procedures, products, and solutions.’ One short sentence that amounts to a very tall order. Since the acquisition our KM scientists have been hard at work in their labs.

    Today Oracle announced its first major knowledge management release since its acquisition of InQuira: Oracle Knowledge 8.5. We’ve put a massively-upgraded supersuit on our KM solution because we still have bad guys to fight. And we are very proud to say that we went way beyond our original plans. So what, exactly, did we do in Oracle Knowledge 8.5? We did what any high-tech super-scientist would do. We made Oracle Knowledge smarter, stronger and faster.

    First, we gave Oracle Knowledge a stronger heart:
    - Certified on Oracle technologies, including Oracle WebLogic Server, Oracle Business Intelligence, Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud.
    - Huge scaling and performance improvements.

    Then we gave it a better reach:
    - Improved iConnect functionality that delivers contextualized knowledge directly into CRM applications.
    - Better content acquisition support across disparate sources.
    - Enhanced Language Support, including Natural Language search support for 16 languages.
    - Enhanced Keyword Search for 23 authoring languages, as well as enhanced out-of-the-box industry ontologies covering 14 languages.

    And finally we made Oracle Knowledge ridiculously smarter:
    - Improved Natural Language Search and a new Contextual Answer Delivery that understands the true intent of each inquiry to deliver the best possible answers.
    - AnswerFlow for Guided Navigation & Answer Delivery, a new application for guided troubleshooting and answer delivery.
    - Knowledge Analytics standardized on Oracle’s Business Intelligence Enterprise Edition.
    - Knowledge Analytics Dashboards that optimize search and content creation through targeted, actionable insights.
    - A new three-level language model "Global - Language - Locale" that provides an improved search experience for organizations with a global footprint.

    We believe that Oracle Knowledge 8.5 is the most sophisticated KM solution in existence today and we’ve worked very hard to help it fulfill the promise of KM: empowering customers and employees with deep insights wherever they need them. We hope you agree it’s a suit worth wearing.

    We are continuing to invest in Knowledge Management, as it remains especially relevant today with the enterprise push for peer collaboration, crowd-sourced wisdom, agile innovation, social interaction channels, applied real-time analytics, and personalization. In fact, we believe that Knowledge Management is a critical part of the Customer Experience portfolio for success. From empowering employees, to empowering customers, to gaining insights from interactions across all channels, businesses today cannot efficiently scale their efforts, strengthen their customer relationships or achieve their growth goals without a solid Knowledge Management foundation to build from.

    And like every good superhero saga, we’re not even close to being finished. Next we are taking Oracle Knowledge into the Cloud. Yes, we’re thinking what you’re thinking: ROCKET BOOTS! Stay tuned for the next adventure…

    By Nav Chakravarti, Vice-President, Product Management, CRM Knowledge, and previously the CTO of InQuira, a knowledge management company acquired by Oracle in 2011.

    Read the article

  • Inspiring a co-worker to adopt better coding practices?

    - by Aaronaught
    In the Handling my antiquated coworker question, various people discussed strategies for dealing with coworkers who are unwilling to integrate their workflow with the team's. I'd like, if possible, to learn some strategies for "teaching" a coworker who is merely ignorant of modern techniques and tools, and possibly a little apathetic. I've started working with a programmer who until recently has been working in relative isolation, in a different part of the company. He has extensive domain knowledge and most importantly he has demonstrated good problem-solving skills, something which many candidates seem to lack. However, the actual (C#) code I've seen is a throwback to the VB6 days. Procedural structure, Hungarian notation, global variables (abuse of static), no interfaces, no tests, non-use of Generics, throwing System.Exception... you get the idea. This programmer is a fair bit older than I am and, by first impressions at least, doesn't actively seek positive change. I'm not going to say resistant to change, because I think that is largely an issue of how the topic gets broached, and I want to be prepared. Programmers tend to be stubborn people, and going in with guns blazing and instituting rip-it-to-shreds code reviews and strictly-enforced policies is very likely not going to produce the end result that I want. If this were a new hire, a junior programmer, I wouldn't think twice about taking a "mentor" stance, but I'm extremely wary of treating an experienced employee as a clueless newbie (which he's not - he just hasn't kept pace with certain advancements in the field). How might I go about raising this developer's code quality standard the Dale Carnegie way, through gentle persuasion and non-material incentives? What would be the best strategy for effecting subtle, gradual changes, without creating an adversarial situation? Have other people - especially lead developers - been in this type of situation before? Which strategies were successful at stimulating interest and creating a positive group dynamic? Which strategies weren't successful and would be better to avoid? Clarifications: I really feel that several people are answering based on personal feelings without actually reading all of the details of the question. Please note the following, which should have been implied but I am now making explicit: This coworker is only my "senior" by virtue of age. I never said that his title, sphere of influence, or years at the organization exceed mine, and in fact, none of those things are true. He's a LOB programmer who's been absorbed into the main development shop. That's it. I am not a new hire, junior programmer, or other naïve idiot with grand plans to transform the company overnight. I am basically in charge of the software process, but as many who've worked as "leads" will know, responsibilities don't always correlate precisely with the org chart. I'm not asking people how to get my way, come hell or high water. I could do that if I wanted to, with the net result being that this person would become resentful and/or quit. Please try to understand that I am looking for a social, cooperative method of driving change. The mention of "...global variables... no tests... throwing System.Exception" was intended to demonstrate that the problems are not just superficial or aesthetic. Practices that may work for relatively small CRUD apps do not necessarily work for large enterprise apps, and in fact, none of the code so far has actually passed the integration tests. 
Please, try to take the question at face value, accept that I actually know what I'm talking about, and either answer the question that I actually asked or move on. P.S. My sincerest gratitude to those who -did- offer constructive advice rather than arguing with the premise. I'm going to leave this open for a while longer as I'm hoping to hear more in the way of real-world experiences.

    Read the article

  • Ubuntu + Wacom Intuos 4 + MyPaint HELP!

    - by Sativa
    Please keep in mind I'm not that computer savvy, but I will try any suggestion so please help me out! My tablet will stop working if the USB connection is ever broken, or the Ubuntu software is being updated. Sometimes it will stop working for no reason that I can see. The lights will still be on, but it won't be responsive. It doesn't work again until I restart the laptop with the tablet plugged in, which is grating if you have to do it every 25 min. or so... I'm not sure if the issue is with the port, the tablet/cable or the driver, but any suggestions would be very welcome!

    Also, MyPaint is frequently having hiccups. It seems to save fine, but at times it will randomly close down, and when I open files they're often empty. They turn into 0 KB files and only contain a single empty layer. Also very grating, considering I lose days of work for no clear reason and without any heads-up. Again, I'm not sure if the issue is with the port, the tablet/cable or the driver, but any suggestions would be very welcome! The error message reads:

    Traceback (most recent call last):
    File "/usr/share/mypaint/gui/application.py", line 177, at_application_start(*junk=())
        else: self.filehandler.open_file(fn)
        variables: {'fn': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora'), 'self.filehandler.open_file': ('local', <bound method FileHandler.wrapper of <gui.filehandling.FileHandler object at 0x7fdb89063a10>>)}
    File "/usr/share/mypaint/gui/drawwindow.py", line 60, wrapper(self=<gui.filehandling.FileHandler object>, *args=(u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora',), **kwargs={})
        try: func(self, *args, **kwargs) # gtk main loop may be called in here...
        variables: {'self': ('local', <gui.filehandling.FileHandler object at 0x7fdb89063a10>), 'args': ('local', (u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora',)), 'func': ('local', <function open_file at 0x7fdb8b397b90>), 'kwargs': ('local', {})}
    File "/usr/share/mypaint/gui/filehandling.py", line 231, open_file(self=<gui.filehandling.FileHandler object>, filename=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora')
        try: self.doc.model.load(filename, feedback_cb=self.gtk_main_tick) except document.SaveLoadError, e:
        variables: {'self.doc.model.load': ('local', <bound method Document.load of <lib.document.Document instance at 0x7fdb8ab4f8c0>>), 'feedback_cb': (None, []), 'self.gtk_main_tick': ('local', <function gtk_main_tick at 0x7fdb8b397b18>), 'filename': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora')}
    File "/usr/share/mypaint/lib/document.py", line 544, load(self=<lib.document.Document instance>, filename=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora', **kwargs={'feedback_cb': <function gtk_main_tick>})
        try: load(filename, **kwargs) except gobject.GError, e:
        variables: {'load': ('local', <bound method Document.load_ora of <lib.document.Document instance at 0x7fdb8ab4f8c0>>), 'kwargs': ('local', {'feedback_cb': <function gtk_main_tick at 0x7fdb8b397b18>}), 'filename': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora')}
    File "/usr/share/mypaint/lib/document.py", line 772, load_ora(self=<lib.document.Document instance>, filename=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora', feedback_cb=<function gtk_main_tick>)
        tempdir = tempdir.decode(sys.getfilesystemencoding()) z = zipfile.ZipFile(filename) print 'mimetype:', z.read('mimetype').strip()
        variables: {'zipfile.ZipFile': ('global', <class 'zipfile.ZipFile'>), 'z': (None, []), 'filename': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora')}
    File "/usr/lib/python2.7/zipfile.py", line 770, __init__(self=<zipfile.ZipFile object>, file=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora', mode='r', compression=0, allowZip64=False)
        if key == 'r': self._RealGetContents() elif key == 'w':
        variables: {'self._RealGetContents': ('local', <bound method ZipFile._RealGetContents of <zipfile.ZipFile object at 0x7fdb9b952790>>)}
    File "/usr/lib/python2.7/zipfile.py", line 811, _RealGetContents(self=<zipfile.ZipFile object>)
        if not endrec: raise BadZipfile, "File is not a zip file" if self.debug > 1:
        variables: {'BadZipfile': ('global', <class 'zipfile.BadZipfile'>)}
    BadZipfile: File is not a zip file
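    The final BadZipfile error is telling: an OpenRaster (.ora) file is a zip archive, so a 0 KB file truncated before MyPaint finished writing fails in exactly this way. A small, hedged Python 3 sketch (standalone, not part of MyPaint) for checking whether a saved file is still a readable archive before trying to open it:

        import zipfile

        def check_ora(path):
            # An .ora file is a zip container; a truncated save is not a valid zip.
            if not zipfile.is_zipfile(path):
                print(path, "is not a valid zip archive (likely a truncated save)")
                return False
            with zipfile.ZipFile(path) as z:
                print("mimetype:", z.read("mimetype").strip())  # mirrors MyPaint's own check above
                return z.testzip() is None  # None means no corrupt members

        check_ora("/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora")

    Running this over recent saves would at least distinguish "MyPaint wrote a bad file" from "the file is fine but fails to load", which narrows down where the data loss happens.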

    Read the article

  • Tuning Red Gate: #1 of Many

    - by Grant Fritchey
    Everyone runs into performance issues at some point. Same thing goes for Red Gate software. Some of our internal systems were running into some serious bottlenecks. It just so happens that we have this nice little SQL Server monitoring tool. What if I were to, oh, I don't know, use the monitoring tool to identify the bottlenecks, figure out the causes and then apply a fix (where possible) and then start the whole thing all over again? Just a crazy thought. OK, I was asked to.

    This is my first time looking through these servers, so here's how I'd go about using SQL Monitor to get a quick health check, sort of like checking the vitals on a patient. First time opening up our internal SQL Monitor instance and I was greeted with this: Oh my. Maybe I need to get our internal guys to read my blog. Anyway, I know that there are two servers where most of the load is. I'll drill down on the first. I'm selecting the server, not the instance, by clicking on the server name. That opens up the Global Overview page for the server. The information here is much more applicable to the "oh my gosh, I have a problem now" type of monitoring. But, looking at this, I am seeing something immediately. There are four (4) drives on the system. The C:\ has an average read time of 16.9ms, more than double the others. Is that a problem? Not sure, but it's something I'll look at. Its write time is higher too.

    I'll keep drilling down, first, to the unclosed alerts on the server. Now things get interesting. SQL Monitor has a number of different types of alerts, some related to error states, others to service status, and then some related to performance. Guess what I'm seeing a bunch of right here: long-running queries and long job durations. If you check the dates, they're all recent, within the last 24 hours. If they had just been old, uncleared alerts, I wouldn't be that concerned. But with all these, all performance related, and all in the last 24 hours, yeah, I'm concerned. At this point, I could just start responding to the alerts. If I click on one of the long-running query alerts, I'll get all kinds of cool data that can help me determine why the query ran long. But I'm not in a reactive mode here yet. I'm still gathering data, trying to understand how the server works. I have the information that we're generating a lot of performance alerts; let's sock that away for the moment.

    Instead, I'm going to back up and look at the Global Overview for the SQL instance. It shows all the databases on the server and their status. Then it shows a number of basic metrics about the SQL Server instance, again for that "what's happening now" view of things. Then, down at the bottom, there is the Top 10 expensive queries list: This is great stuff. And no, not because I can see the top queries for the last 5 minutes, but because I can adjust that out to 3 days. Now I can see where some serious pain is occurring over the last few days. Databases have been blocked out to protect the guilty.

    That's it for the moment. I have enough knowledge of what's going on in the system that I can start to try to figure out why the system is running slowly. But I want to look a little more at some historical data, to understand better how this server is behaving. More next time.

    Read the article

  • JCP Awards 10 Year Retrospective

    - by Heather VanCura
    As we celebrate 10 years of JCP Program Award recognition in 2012, take a look back at the Retrospective article covering the history of the JCP awards. Most recently, the JCP awards were celebrated at JavaOne Latin America in Brazil, where SouJava was presented the JCP Member of the Year Award for 2012 (won jointly with the London Java Community) for their contributions and launch of the Global Adopt-a-JSR Program.

    This is also a good time to honor the JCP Award Nominees and Winners who have been designated as Star Spec Leads. Spec Leads are key to the Java Community Process (JCP) program. Without them, none of the Java Specification Requests (JSRs) would have begun, much less completed and become implemented in shipping products. Nominations for 2012 Star Spec Leads are now open until 31 December. The Star Spec Lead program recognizes Spec Leads who have repeatedly proven their merit by producing high quality specifications, establishing best practices, and mentoring others. The point of such honor is to endorse the good work that they do, showcase their methods for other Spec Leads to emulate, and motivate other JCP program members and participants to get involved in the JCP program.

    Ed Burns – A Star Spec Lead for 2009, Ed first got involved with the JCP program when he became co-Spec Lead of JSR 127, JavaServer Faces (JSF), a role he has continued through JSF 1.2 and now JSF 2.0, which is JSR 314.
    Linda DeMichiel – Linda has been involved in the JCP program from its very early days. She has been the Spec Lead on at least three JSRs and an EC member for another three. She holds a Ph.D. in Computer Science from Stanford University.
    Gavin King – Nominated as a JCP Outstanding Spec Lead for 2010, for his work with JSR 299. His endorsement said, “He was not only able to work through disputes and objections to the evolving programming model, but he resolved them into solutions that were more technically sound, and which gained support of its pundits.”
    Mike Milikich – Nominated for his work on Java Micro Edition (ME) standards, implementations, tools, and Technology Compatibility Kits (TCKs), Mike was a 2009 Star Spec Lead for JSR 271, Mobile Information Device Profile 3.
    David Nuescheler – Serving as the CTO for Day Software, acquired by Adobe Systems, David has been a key player in the growth of the company’s global content management solution. In 2002, he became Spec Lead for JSR 170, Content Repository for Java Technology API, continuing for the subsequent version, JSR 283.
    Bill Shannon – A well-respected name in the Java community, Bill came to Oracle from Sun as a Distinguished Engineer and is still performing at full speed as Spec Lead for JSR 342, Java EE 7, as an alternate EC member, and hands-on problem solver for the Java community as a whole.
    Jim Van Peursem – Jim holds a PhD in Computer Engineering. He was part of the Motorola team that worked with Sun labs on the Spotless VM that became the KVM. From within Motorola, Jim has been responsible for many aspects of Java technology deployment, from independent Connected Limited Device Configuration (CLDC) and Mobile Information Device Profile (MIDP) implementations, to handset development, to working with the industry in defining many related standards.

    Participation in the JCP Program goes well beyond technical proficiency.
The JCP Awards Program is an attempt to say “Thank You” to all of the JCP members, Expert Group Members, Spec Leads, and EC members who give their time to contribute to the evolution of Java technology.

    Read the article

  • Attachments in Oracle BPM 11g – Create a BPM Process Instance by passing an Attachment

    - by Venugopal Mangipudi
    Problem Statement: On a recent engagement I had a requirement where we needed to create BPM instances using a message start event. The challenge was that the instance needed to be created after polling a file location and attaching the picked-up file (PDF) as an attachment to the instance.

    Proposed Solution: I was contemplating using the process API to accomplish this, but came up with a solution that involves a BPEL process to pick up the file and send a notification to the BPM process by passing the attachment as a payload. The following are brief steps that were used to build the solution.

    BPM process to receive an attachment as part of the payload: The BPM process is a very simple process which has a Message Start event that accepts the attachment as an argument and a Simple User Task that the user can use to view the attachment (as part of the OOTB attachment panel). The input payload is based on AttachmentPayload.xsd. The 3 key elements of the payload are:

        <xsd:element name="filename" type="xsd:string"/>
        <xsd:element name="mimetype" type="xsd:string"/>
        <xsd:element name="content" type="xsd:base64Binary"/>

    A screenshot of the Human task data assignment that needs to be performed to attach the file is provided here. Once the process and the UI project (default generated UI) are deployed to the SOA server, copy the WSDL location of the process service (from EM). This WSDL would be used in the BPEL project to create the instances in the BPM process after a file is polled.

    BPEL process to poll for the file and create instances in the BPM process: For the BPEL process, a File adapter was configured as a Read service (File Streaming option, keeping the Schema as Opaque). Once a location and the file pattern to poll were provided, the Read service partner link was wired to invoke the BPEL process. Also, using the BPM process WSDL, we can create the web service reference and invoke the start operation. Before we do the assignment for the Invoke operation, a global variable should be created to hold the value of the fileName of the file. The mapping to the global variable can be done in the Receive activity properties (jca.file.FileName). So for the assign operation, before we invoke the BPM process service, we can get the content of the file from the receive input variable and the fileName from the jca.file.FileName property. The mimetype needs to be hard-coded to the mime type of the file: application/pdf (I am still researching ways to derive the mime type, as it is not available as part of the jca.file properties). The screenshot of the BPEL process can be found here and the Assign activity can be found here.

    The project source can be found at the following location. A sample PDF file to test the project and a screenshot of the BPM Human task screen after the successful creation of the instance can be found here.

    References: [1] https://blogs.oracle.com/fmwinaction/entry/oracle_bpm_adding_an_attachment
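    As an aside on the open mime-type question: outside the BPEL tooling, the three payload elements are simple to assemble, and the mime type can be guessed from the file extension instead of being hard-coded. A hedged Python sketch (illustrative only; the file name is hypothetical and this is not the BPEL implementation described above):

        import base64
        import mimetypes
        import os

        def build_attachment_payload(path):
            # Sketch of the AttachmentPayload.xsd fields: filename, mimetype, content.
            # mimetypes.guess_type derives the type from the extension,
            # e.g. 'application/pdf' for a .pdf file.
            mimetype, _ = mimetypes.guess_type(path)
            with open(path, "rb") as f:
                content = base64.b64encode(f.read()).decode("ascii")  # xsd:base64Binary
            return {
                "filename": os.path.basename(path),
                "mimetype": mimetype or "application/octet-stream",
                "content": content,
            }

        payload = build_attachment_payload("sample.pdf")  # hypothetical file
        print(payload["filename"], payload["mimetype"], len(payload["content"]))

    The same extension-to-type lookup idea could presumably be applied in the BPEL assign (for example via an XPath or Java callout), though that is beyond what the article demonstrates.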

    Read the article

  • trying to use mod_proxy with httpd and tomcat

    - by techsjs2012
    I've been trying to use mod_proxy with httpd and Tomcat. I have Scientific Linux running on VirtualBox, with httpd and Tomcat 6 on it, and I made two nodes of Tomcat 6. I followed this guide like 10 times and still can't get the 2nd node of Tomcat working: http://www.richardnichols.net/2010/08/5-minute-guide-clustering-apache-tomcat/

    Here are the lines from my httpd.conf file:

    <Proxy balancer://testcluster stickysession=JSESSIONID>
      BalancerMember ajp://127.0.0.1:8009 min=10 max=100 route=node1 loadfactor=1
      BalancerMember ajp://127.0.0.1:8109 min=10 max=100 route=node2 loadfactor=1
    </Proxy>
    ProxyPass /examples balancer://testcluster/examples
    <Location /balancer-manager>
      SetHandler balancer-manager
      AuthType Basic
      AuthName "Balancer Manager"
      AuthUserFile "/etc/httpd/conf/.htpasswd"
      Require valid-user
    </Location>

    Now here is my server.xml from node1:

    <?xml version='1.0' encoding='utf-8'?>
    <!-- Licensed to the Apache Software Foundation (ASF) under one or more
         contributor license agreements. See the NOTICE file distributed with
         this work for additional information regarding copyright ownership.
         The ASF licenses this file to You under the Apache License, Version 2.0
         (the "License"); you may not use this file except in compliance with
         the License. You may obtain a copy of the License at
         http://www.apache.org/licenses/LICENSE-2.0
         Unless required by applicable law or agreed to in writing, software
         distributed under the License is distributed on an "AS IS" BASIS,
         WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
         See the License for the specific language governing permissions and
         limitations under the License. -->
    <!-- Note: A "Server" is not itself a "Container", so you may not define
         subcomponents such as "Valves" at this level.
         Documentation at /docs/config/server.html -->
    <Server port="8005" shutdown="SHUTDOWN">
      <!--APR library loader. Documentation at /docs/apr.html -->
      <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
      <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html -->
      <Listener className="org.apache.catalina.core.JasperListener" />
      <!-- Prevent memory leaks due to use of particular java/javax APIs-->
      <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
      <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html -->
      <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
      <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />

      <!-- Global JNDI resources. Documentation at /docs/jndi-resources-howto.html -->
      <GlobalNamingResources>
        <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users -->
        <Resource name="UserDatabase" auth="Container"
                  type="org.apache.catalina.UserDatabase"
                  description="User database that can be updated and saved"
                  factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                  pathname="conf/tomcat-users.xml" />
      </GlobalNamingResources>

      <!-- A "Service" is a collection of one or more "Connectors" that share a single
           "Container". Note: A "Service" is not itself a "Container", so you may not
           define subcomponents such as "Valves" at this level.
           Documentation at /docs/config/service.html -->
      <Service name="Catalina">
        <!--The connectors can use a shared executor, you can define one or more named thread pools-->
        <!--
        <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
                  maxThreads="150" minSpareThreads="4"/>
        -->

        <!-- A "Connector" represents an endpoint by which requests are received and
             responses are returned. Documentation at:
             Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
             Java AJP Connector: /docs/config/ajp.html
             APR (HTTP/AJP) Connector: /docs/apr.html
             Define a non-SSL HTTP/1.1 Connector on port 8080
        <Connector port="8080" protocol="HTTP/1.1"
                   connectionTimeout="20000"
                   redirectPort="8443" />
        -->
        <!-- A "Connector" using the shared thread pool-->
        <!--
        <Connector executor="tomcatThreadPool"
                   port="8080" protocol="HTTP/1.1"
                   connectionTimeout="20000"
                   redirectPort="8443" />
        -->
        <!-- Define a SSL HTTP/1.1 Connector on port 8443. This connector uses the JSSE
             configuration; when using APR, the connector should be using the OpenSSL
             style configuration described in the APR documentation -->
        <!--
        <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                   maxThreads="150" scheme="https" secure="true"
                   clientAuth="false" sslProtocol="TLS" />
        -->

        <!-- Define an AJP 1.3 Connector on port 8009 -->
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

        <!-- An Engine represents the entry point (within Catalina) that processes every
             request. The Engine implementation for Tomcat stand alone analyzes the HTTP
             headers included with the request, and passes them on to the appropriate
             Host (virtual host). Documentation at /docs/config/engine.html -->
        <!-- You should set jvmRoute to support load-balancing via AJP ie:
        <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
        -->
        <Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">
          <!--For clustering, please take a look at documentation at:
              /docs/cluster-howto.html (simple how to)
              /docs/config/cluster.html (reference documentation) -->
          <!--
          <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
          -->
          <!-- The request dumper valve dumps useful debugging information about the
               request and response data received and sent by Tomcat.
               Documentation at: /docs/config/valve.html -->
          <!--
          <Valve className="org.apache.catalina.valves.RequestDumperValve"/>
          -->
          <!-- This Realm uses the UserDatabase configured in the global JNDI resources
               under the key "UserDatabase". Any edits that are performed against this
               UserDatabase are immediately available for use by the Realm. -->
          <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
                 resourceName="UserDatabase"/>

          <!-- Define the default virtual host.
               Note: XML Schema validation will not work with Xerces 2.2. -->
          <Host name="localhost" appBase="webapps"
                unpackWARs="true" autoDeploy="true"
                xmlValidation="false" xmlNamespaceAware="false">
            <!-- SingleSignOn valve, share authentication between web applications.
                 Documentation at: /docs/config/valve.html -->
            <!--
            <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
            -->
            <!-- Access log processes all example.
                 Documentation at: /docs/config/valve.html -->
            <!--
            <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                   prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/>
            -->
          </Host>
        </Engine>
      </Service>
    </Server>

    Now here is the server.xml file from node2:

    <?xml version='1.0' encoding='utf-8'?>
    <!-- Licensed to the Apache Software Foundation (ASF) under one or more
         contributor license agreements. See the NOTICE file distributed with
         this work for additional information regarding copyright ownership.
         The ASF licenses this file to You under the Apache License, Version 2.0
         (the "License"); you may not use this file except in compliance with
         the License. You may obtain a copy of the License at
         http://www.apache.org/licenses/LICENSE-2.0
         Unless required by applicable law or agreed to in writing, software
         distributed under the License is distributed on an "AS IS" BASIS,
         WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
         See the License for the specific language governing permissions and
         limitations under the License. -->
    <!-- Note: A "Server" is not itself a "Container", so you may not define
         subcomponents such as "Valves" at this level.
         Documentation at /docs/config/server.html -->
    <Server port="8105" shutdown="SHUTDOWN">
      <!--APR library loader. Documentation at /docs/apr.html -->
      <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
      <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html -->
      <Listener className="org.apache.catalina.core.JasperListener" />
      <!-- Prevent memory leaks due to use of particular java/javax APIs-->
      <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
      <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html -->
      <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
      <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />

      <!-- Global JNDI resources. Documentation at /docs/jndi-resources-howto.html -->
      <GlobalNamingResources>
        <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users -->
        <Resource name="UserDatabase" auth="Container"
                  type="org.apache.catalina.UserDatabase"
                  description="User database that can be updated and saved"
                  factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                  pathname="conf/tomcat-users.xml" />
      </GlobalNamingResources>

      <!-- A "Service" is a collection of one or more "Connectors" that share a single
           "Container". Note: A "Service" is not itself a "Container", so you may not
           define subcomponents such as "Valves" at this level.
           Documentation at /docs/config/service.html -->
      <Service name="Catalina">
        <!--The connectors can use a shared executor, you can define one or more named thread pools-->
        <!--
        <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
                  maxThreads="150" minSpareThreads="4"/>
        -->

        <!-- A "Connector" represents an endpoint by which requests are received and
             responses are returned. Documentation at:
             Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
             Java AJP Connector: /docs/config/ajp.html
             APR (HTTP/AJP) Connector: /docs/apr.html
             Define a non-SSL HTTP/1.1 Connector on port 8080
        <Connector port="8080" protocol="HTTP/1.1"
                   connectionTimeout="20000"
                   redirectPort="8443" />
        -->
        <!-- A "Connector" using the shared thread pool-->
        <!--
        <Connector executor="tomcatThreadPool"
                   port="8080" protocol="HTTP/1.1"
                   connectionTimeout="20000"
                   redirectPort="8443" />
        -->
        <!-- Define a SSL HTTP/1.1 Connector on port 8443. This connector uses the JSSE
             configuration; when using APR, the connector should be using the OpenSSL
             style configuration described in the APR documentation -->
        <!--
        <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                   maxThreads="150" scheme="https" secure="true"
                   clientAuth="false" sslProtocol="TLS" />
        -->

        <!-- Define an AJP 1.3 Connector on port 8009 -->
        <Connector port="8109" protocol="AJP/1.3" redirectPort="8443" />

        <!-- An Engine represents the entry point (within Catalina) that processes every
             request. The Engine implementation for Tomcat stand alone analyzes the HTTP
             headers included with the request, and passes them on to the appropriate
             Host (virtual host). Documentation at /docs/config/engine.html -->
        <!-- You should set jvmRoute to support load-balancing via AJP ie:
        <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
        -->
        <Engine name="Catalina" defaultHost="localhost" jvmRoute="node2">
          <!--For clustering, please take a look at documentation at:
              /docs/cluster-howto.html (simple how to)
              /docs/config/cluster.html (reference documentation) -->
          <!--
          <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
          -->
          <!-- The request dumper valve dumps useful debugging information about the
               request and response data received and sent by Tomcat.
               Documentation at: /docs/config/valve.html -->
          <!--
          <Valve className="org.apache.catalina.valves.RequestDumperValve"/>
          -->
          <!-- This Realm uses the UserDatabase configured in the global JNDI resources
               under the key "UserDatabase". Any edits that are performed against this
               UserDatabase are immediately available for use by the Realm. -->
          <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
                 resourceName="UserDatabase"/>

          <!-- Define the default virtual host.
               Note: XML Schema validation will not work with Xerces 2.2. -->
          <Host name="localhost" appBase="webapps"
                unpackWARs="true" autoDeploy="true"
                xmlValidation="false" xmlNamespaceAware="false">
            <!-- SingleSignOn valve, share authentication between web applications.
                 Documentation at: /docs/config/valve.html -->
            <!--
            <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
            -->
            <!-- Access log processes all example.
                 Documentation at: /docs/config/valve.html -->
            <!--
            <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                   prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/>
            -->
          </Host>
        </Engine>
      </Service>
    </Server>

    I don't know what it is, but I've been trying for days.

    Read the article

  • "Why We Chose Fusion CRM" by Vikas Bhambri, Managing Partner, The Athene Group

    - by Natalia Rachelson
    A guest post by Vikas Bhambri, Managing Partner, The Athene Group

    This year The Athene Group (www.theathenegroup.com) celebrated our tenth anniversary. The company has accomplished a lot in ten years, overcoming a number of hurdles and challenges to have grown organically to a 150+ person global company with offices in the US, UK, and India and customers in the US, Canada, and Europe. Now more than ever, with the current global landscape from an economic and competitive standpoint, it was vital that we make some changes to remain successful for the next ten years. There were two key initiatives that we discussed internally that would enable us to successfully accomplish this – collaboration and the concept of “insight to action”. With our existing Oracle CRM On Demand platform we had components of this, but not the full depth and breadth that we were looking for. When we started to discuss Fusion CRM we immediately saw several next generation tools that would embrace these two objectives.

    For a consulting and development organization, the collaboration required between business development and consulting delivery is as important as the collaboration required during the projects between the project delivery and account management teams. The Activity Streams functionality in Fusion CRM immediately addressed the communication of key discussion topics and exchanges around our clients. Of course, when we saw the Oracle Social Network (which is part of our Fusion CRM roadmap) we were blown away. The combination of OSN and our CRM is going to make us more effective as we discuss and work cohesively on client engagements – ensuring mutual success for both Athene and our clients.

    When we looked at “insight to action” we saw that we had a great platform when folks were at their desks; unfortunately, a lot of our business development and consulting folks are on the road. Fusion Mobile Sales and Fusion Outlook Desktop provide information to our teams when they are on the go, so that they can provide real-time information and react to real-time information provided by their peers. We are in the early stages of our transformative experience with Fusion CRM, but we believe the platform, along with our people and processes, is going to help us achieve our goals in the future.

    Read the article

  • WebCenter Customer Spotlight: Textron Inc.

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter  Solution SummaryTextron Inc. is one of the world's best known multi-industry companies and is a pioneer of the diversified business model. Founded in 1923, it has grown into a network of businesses—including Bell Helicopter, E-Z-GO, Cessna, and Jacobsen—with facilities and a presence in 25 countries, serving a diverse and global customer base. Textron is ranked 236th on the Fortune 500 list of the largest US companies. Textron needed a Web experience management solution to centralize control, minimize costs, and enable more efficient operations. Specifically, the company wanted to take IT out of the picture as much as possible, enabling sales and marketing leads for subsidiaries to make Website updates as they deem appropriate for their business.   Textron worked with Oracle partner Element Solutions to consolidate its Website management systems onto Oracle WebCenter Sites. The implementation enables Textron’s subsidiaries to adjust more quickly to customer demands,  reduced Website management cost & time to update content on a Website while allowing to integrate its Website updates more closely with social media and mobile platforms. Company OverviewTextron Inc. is one of the world's best known multi-industry companies and is a pioneer of the diversified business model. Founded in 1923, it has grown into a network of businesses—including Bell Helicopter, E-Z-GO, Cessna, and Jacobsen—with facilities and a presence in 25 countries, serving a diverse and global customer base. Textron is ranked 236th on the Fortune 500 list of the largest US companies. Business ChallengesWith numerous subsidiaries and more than 50 public Websites, Textron needed a Web experience management solution to centralize control, minimize costs, and enable more efficient operations. Specifically, the company wanted to take IT out of the picture as much as possible, enabling sales and marketing leads for subsidiaries to make Website updates as they deem appropriate for their business.   Solution DeployedTextron worked with Oracle partner Element Solutions to consolidate its Website management systems onto Oracle WebCenter Sites. Specifically, Textron: Used Oracle WebCenter Sites to integrate Web experience management capabilities for all Textron brands, including Bell Helicopter, E-Z-GO, Cessna, and Jacobsen Developed Website templates to enable marketing and communications professionals to easily make updates to their Websites, without having to work with IT Reduced Website management costs, as it costs more for IT to coordinate Website updates as opposed to marketing and communications Enabled IT to concentrate on other activities to enhance overall operations for Textron, such as project workflows Acquired a platform that enables marketing teams to integrate their Websites with social media and mobile platforms, allowing subsidiaries to make updates and contact customers anytime and everywhere—including through tablets and smartphones Reduced the time it takes to update content on a Website, including press releases, by enabling communications professionals to make updates directly Developed more appealing visual designs for Websites to help enhance customer purchase Business ResultsThe implementation enabled Textron’s subsidiaries to adjust more quickly to customer demands and Textron’s IT staff to concentrate on other processes, such as writing code and developing new workflows, enabling them to enhance company processes. 
In addition, Textron can use Oracle WebCenter Sites to integrate its Website updates more closely with social media and mobile platforms, enabling marketing and communications teams to make updates anytime and anywhere. The initiative has enabled Textron to save money by freeing IT up to work on more important tasks, by instituting new e-commerce and mobile initiatives to better engage customers, and by ensuring efficient Website management processes that quickly adjust to customer demands.

“We considered a number of products, but chose Oracle WebCenter Sites because it provides the best user interface. We reviewed customer references and analyst reports, and Oracle WebCenter Sites was consistently at the top of the list,” said Brad Hof, Manager, Advanced Business Solutions and Web Communications, Textron Inc.

Additional Information
• Textron Inc. Customer Snapshot
• Oracle WebCenter Sites

    Read the article

  • PASS: International Travels

    - by Bill Graziano
Nihao! One of the largest changes PASS is going through is the expansion outside the US and Canada. We’ve had international chapters and events in Europe since the early 2000s, but nothing on the scale we’re seeing now. Since January 1st there have been 18 SQL Saturday events outside North America and 19 events in North America. We hope to have three international SQLRally events outside the US in FY13 (budget willing). I don’t know the exact percentage of chapters outside the US, but it’s got to be 50% or higher.

We recently started an effort to remake the Board to better reflect the growing global face of PASS. This involves assigning some Board seats to geographic regions. You can ask questions about this in our feedback forum, participate in a Twitter chat or ask questions directly of Board members. You can also email me if you’d like to ask a question directly. We’re doing this very slowly and deliberately in hopes that a long communication cycle gives us a chance to address all the issues that our members will raise.

After the Summit we passed a budget exception allocating an extra $20,000 for Board members to travel to local events. I think it’s important for Board members to visit new areas and talk to more of our members. I sent out an email asking where people had attended events outside their home city. Here’s the list I got back: Albuquerque, Amsterdam, Boston, Brisbane, Chicago, Colorado Springs, Columbus, Dallas, Houston, Jacksonville, Las Vegas, London, Louisville, Minneapolis, New York City, Orange County, Orlando, Pensacola, Perth, Philadelphia, Phoenix, Redmond, Seattle, Silicon Valley, Sydney, Tampa Bay, Vancouver, Washington DC and Wellington. (Disclaimer: Some of this travel was paid for by employers or Board members themselves, and some of it may have been completed before the Summit. That’s still one heck of a list!)

The last SQL Saturday event this fiscal year is SQL Saturday Shanghai, and that’s one I’m attending. This is our first event in China and is being put on in cooperation with the local Microsoft office. Hopefully this event will be the start of a growing community in China that includes chapters, SQL Saturdays and maybe a SQLRally or two in the future. I’m excited to speak with people who are just starting down this path and to watch this community grow.

I encourage you to visit the PASS Global Growth site and read through the material there. This is the biggest change we’ve made to our governance since I’ve been on the Board. You need to understand how it affects you and how it affects the organization.

And wish me luck on the 15-hour flight to Shanghai on Friday afternoon. Rob Farley flies from Australia to the US for PASS events multiple times per year and I don’t know how he does it so often. I think one of these is going to wipe me out. (And Nihao (knee-how) is Chinese for Hello.)

    Read the article

  • Can't connect to certain HTTPS sites

    - by mind.blank
I've just moved to a new apartment with an internet connection via a router, and I'm finding that I can't connect to quite a few sites that use SSL. For example, trying to connect to PayPal:

curl -v https://paypal.com
* About to connect() to paypal.com port 443 (#0)
*   Trying 66.211.169.3... connected
* successfully set certificate verify locations:
*   CAfile: none
    CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* Unknown SSL protocol error in connection to paypal.com:443
* Closing connection #0
curl: (35) Unknown SSL protocol error in connection to paypal.com:443

curl -v -ssl https://paypal.com gives the same output. For some sites it works:

curl -v https://www.google.com
* About to connect() to www.google.com port 443 (#0)
*   Trying 74.125.235.112... connected
* successfully set certificate verify locations:
*   CAfile: none
    CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using ECDHE-RSA-RC4-SHA
* Server certificate:
*   subject: C=US; ST=California; L=Mountain View; O=Google Inc; CN=www.google.com
*   start date: 2011-10-26 00:00:00 GMT
*   expire date: 2013-09-30 23:59:59 GMT
*   common name: www.google.com (matched)
*   issuer: C=ZA; O=Thawte Consulting (Pty) Ltd.; CN=Thawte SGC CA
*   SSL certificate verify ok.
> GET / HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: www.google.com
> Accept: */*
>
< HTTP/1.1 302 Found
< Location: https://www.google.co.jp/
. . .

I'm using Ubuntu 12.04, with Windows 7 installed as well. These sites work on Windows :(

Not sure if this information helps, but I ran ifconfig and got the following:

eth0  Link encap:Ethernet  HWaddr 1c:c1:de:bc:e2:4f
      inet6 addr: 2408:c3:7fff:991:686b:8d18:81b3:8dd1/64 Scope:Global
      inet6 addr: 2408:c3:7fff:991:1ec1:deff:febc:e24f/64 Scope:Global
      inet6 addr: fe80::1ec1:deff:febc:e24f/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:87075 errors:0 dropped:0 overruns:0 frame:0
      TX packets:54522 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:78167937 (78.1 MB)  TX bytes:10016891 (10.0 MB)
      Interrupt:46 Base address:0x4000

eth1  Link encap:Ethernet  HWaddr ac:81:12:0d:93:80
      inet6 addr: fe80::ae81:12ff:fe0d:9380/64 Scope:Link
      UP BROADCAST MULTICAST  MTU:1500  Metric:1
      RX packets:0 errors:0 dropped:0 overruns:0 frame:498
      TX packets:0 errors:26 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
      Interrupt:17

lo    Link encap:Local Loopback
      inet addr:127.0.0.1  Mask:255.0.0.0
      inet6 addr: ::1/128 Scope:Host
      UP LOOPBACK RUNNING  MTU:16436  Metric:1
      RX packets:630 errors:0 dropped:0 overruns:0 frame:0
      TX packets:630 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:39592 (39.5 KB)  TX bytes:39592 (39.5 KB)

ppp0  Link encap:Point-to-Point Protocol
      inet addr:180.57.228.200  P-t-P:118.23.8.175  Mask:255.255.255.255
      UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1492  Metric:1
      RX packets:39631 errors:0 dropped:0 overruns:0 frame:0
      TX packets:22391 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:3
      RX bytes:43462054 (43.4 MB)  TX bytes:2834628 (2.8 MB)
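
To take curl and OpenSSL versions out of the equation, the failing handshake can be reproduced with a few lines of Python (a minimal sketch, assuming Python 3 and its standard ssl module; the host list just mirrors the two sites tested above):

```python
# Probe the TLS handshake directly, independent of curl/libcurl.
import socket
import ssl

def probe(host: str, port: int = 443) -> None:
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                print(host, "handshake OK, negotiated", tls.version())
    except Exception as exc:
        print(host, "handshake failed:", exc)

for host in ("paypal.com", "www.google.com"):
    probe(host)
```

If this fails for the same hosts that curl fails on, the problem sits below the TLS library; the ppp0 MTU of 1492 shown above is a common culprit for handshakes that die mid-flight.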

    Read the article

  • Friday Tips #6, Part 1

    - by Chris Kawalek
We have a two-parter this week, with this post focusing on desktop virtualization and the next one on server virtualization.

Question: Why would I use the Oracle Secure Global Desktop Secure Gateway?

Answer by Rick Butland, Principal Sales Consultant, Oracle Desktop Virtualization: For the benefit of those who might not be familiar with client connections in Oracle Secure Global Desktop (SGD), let me back up and briefly explain. An SGD client connects to an SGD server using two distinct protocols, which, by default, require two distinct TCP ports. The first is the HTTP protocol, used by the web browser to connect to the SGD webserver on TCP port 80, or, if secure connections are enabled (SSL/TLS), then TCP port 443, commonly identified as the "HTTPS" port, that is, "SSL-encrypted HTTP." The second protocol from the client to the server is the Adaptive Internet Protocol, or AIP, which is used for displaying applications, transferring drive mapping data, print jobs, and so on. By default, AIP uses TCP port 3144, or port 5307 when SSL is enabled. When SGD clients need to access SGD over a firewall, the ports that AIP requires are typically closed, and most administrators are reluctant, to put it mildly, to change their firewall configurations to allow AIP traffic on 3144/5307.

To avoid this problem, SGD introduced "Firewall Forwarding", a technique where, in effect, both HTTP and AIP traffic are multiplexed onto a single well-known TCP port, that is, port 443, the HTTPS port. This is also known as single-port firewall traversal. The technique takes advantage of the fact that, as a well-known service, port 443 is usually open, allowing (encrypted) traffic to pass. At the target SGD server, the two protocols are de-multiplexed and routed appropriately.

The Secure Gateway was developed in response to requirements from customers for SGD to support multi-stage DMZs, and to avoid exposing SGD servers and the information they contain directly to connections from the Internet. The Secure Gateway acts as a reverse proxy in the first tier of the DMZ, accepting, authenticating, and terminating incoming client connections, and then re-encrypting and proxying them, routing them on to SGD servers deeper in the network. The client no longer needs to know the name or IP address of the SGD servers in the network; it connects only to the gateway, which takes care of those internal network details.

The Secure Gateway supports the same single-port firewall capability as Firewall Forwarding, but offers the additional advantage of load-balancing incoming client connections amongst SGD array members, which could be cumbersome without a forward-deployed secure gateway. Load-balancing weights and policies can be monitored and tuned using the "Balancer Manager" application and Apache mod_proxy_balancer directives.

Going forward, our architects recommend the use of the Secure Gateway over Firewall Forwarding for single-port firewall traversal, due to its architectural advantages, greater flexibility, and enhanced features. Finally, it should be noted that the Secure Gateway is not separately priced; any licensed SGD customer may use the Secure Gateway component at no additional cost. For more information, see the "Secure Gateway Administrator's Guide".
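
To make the multiplexing idea concrete, here is an illustrative sketch (in Python, purely for exposition; this is not SGD's actual implementation, and the backend addresses are hypothetical) of how a single listener on port 443 can route two protocols by peeking at each connection's first bytes:

```python
# Toy single-port demultiplexer: HTTP-looking connections go to a web
# backend, everything else to an application-protocol backend.
import socket
import threading

HTTP_PREFIXES = (b"GET ", b"POST", b"HEAD", b"PUT ", b"OPTI", b"DELE")

def pipe(src: socket.socket, dst: socket.socket) -> None:
    # Copy bytes one way until the sender closes.
    try:
        while (data := src.recv(4096)):
            dst.sendall(data)
    finally:
        dst.close()

def handle(client: socket.socket) -> None:
    first = client.recv(4, socket.MSG_PEEK)          # inspect without consuming
    backend = ("127.0.0.1", 8080) if first in HTTP_PREFIXES else ("127.0.0.1", 3144)
    upstream = socket.create_connection(backend)
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

def main() -> None:
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 443))   # binding to 443 needs root; demo only
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()
```

In the real product the client-side traffic is SSL-wrapped before it reaches the gateway, but the routing principle is the same: one open port, protocol-aware dispatch.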

    Read the article

  • Rolling Along: PASS Board Year 2, Q2

    - by Denise McInerney
Eighteen months into my time as a PASS Director I’m especially proud of what the Virtual Chapters have accomplished and want to share that progress with you. I'm also pleased that the organization has invested more resources to support the VCs. In this quarter I got to attend two conferences and meet more members of the SQL community.

Virtual Chapters
In the first six months of 2013 VCs have hosted more than 50 webinars, offering free technical education to over 6200 attendees. This is a great benefit to PASS members; thanks to the VC leaders, volunteers and speakers who contribute their time to produce these events. The Performance VC held their “Summer Performance Palooza”, an event featuring eight back-to-back sessions. Links to the session recordings can be found on the VC's web site. The new webinar platform, GoToWebinar, has been rolled out to all the VCs. This is a more stable, scalable platform and represents an important investment in the future of the VCs. A few new VCs are in the planning stages, including one focused on Security and one for Russian speakers. Visit the Virtual Chapter home page to sign up for the chapters that interest you. Each Virtual Chapter is offering a discount code for PASS Summit 2013. Be sure to ask your VC leader for the code to save $200 on Summit registration.

24 Hours of PASS
The next 24HOP will be on July 31. This Summit Preview edition will feature 24 consecutive webcasts presented by experts who will be speaking at Summit in October. Registration for this free event is open now, and we will be using the GoToWebinar platform for 24HOP as well.

Business Analytics Conference
April marked the first PASS Business Analytics Conference in Chicago. This introduced PASS to another segment of data professionals: the analysts and data scientists who work with the world’s growing collection of data. Overall the inaugural event was a success and gave us a glimpse into this increasingly important space. After Chicago the Board had several serious discussions about the lessons learned from this event and what we should do next. We agreed to apply those lessons and continue to invest in this event; there will be a PASS Business Analytics Conference in 2014. I’m very pleased the next event will be in San Jose, CA, the heart of Silicon Valley, a place where a great deal of investment and innovation in data analytics is taking place.

Global SQL Community
Over the last couple of years PASS has been taking steps to become more relevant to SQL communities in different parts of the world. In May I had the opportunity to attend SQL Bits XI in Nottingham, England. It was enlightening to meet and talk with SQL professionals from around the U.K. as well as many other European countries. The many SQL Bits volunteers put on a great event and were gracious hosts.

Budgets
The Board passed the FY14 budget at the end of June. The budget process can be challenging and requires the Board to make some difficult choices about where to allocate resources. Overall I’m satisfied with the decisions we made and think we are investing in the right activities and programs.

Next Up
The Board is meeting July 18-19 in Kansas City. We will be holding the Executive Committee election for the Exec Co that will take office in 2014. We will also be discussing plans for the next BA conference as well as the next steps for our Global Growth initiative. Applications for the upcoming Board of Directors election open on July 24.
If you are considering running for the Board, you can visit the PASS elections site to learn more about the election process, and I encourage you to reach out to current and past Board members to learn what the role entails. Plans for the next PASS Summit are in full swing. We are working on some fun new ideas to introduce attendees to the many ways to become involved in the SQL community.

    Read the article

  • Single page not appearing in Google Search

    - by Dan
Description
I have a static franchise website which has various sub-pages, each dedicated to an individual franchisee. The only thing even slightly similar between the franchisee pages is the page title, which follows this structure:

<title> Welcome to THE_COMPANY - PRODUCT_DESCRIPTION Services, THE_LOCATION </title>

THE_COMPANY and PRODUCT_DESCRIPTION are the same across all franchisees; however, THE_LOCATION changes depending on where they are located in the UK. Each franchisee page has the following <meta /> tags:

<meta name="DC.creator" content="user"/>
<meta name="DC.format" content="text/html"/>
<meta name="DC.language" content="en"/>
<meta name="DC.date.modified" content="2014-01-23T11:22:31+00:00"/>
<meta name="DC.date.created" content="2014-01-23T11:22:09+00:00"/>
<meta name="DC.type" content="Page"/>
<meta name="DC.distribution" content="Global"/>
<meta name="robots" content="ALL"/>
<meta name="distribution" content="Global"/>

The main content on each franchisee page is completely different.

The Problem
There is one particular franchisee page, located in Area A, which will not appear in Google Search results at all. Yet every single other franchisee is number 1 if you Google search for "THE_COMPANY, THE_LOCATION". And if I do the same search on Bing, Yahoo or DuckDuckGo, the Area A franchisee is the first result on all of them. Has Google for some reason blacklisted one page on the site?

What I Have Tried
• Ensuring the page is referenced in my sitemap.xml file
• 'Fetching as Google Bot' the link www.the_company.co.uk/areaa; when that came back as OK I would submit it to the index
• Resubmitting the sitemap.xml file in Webmaster Tools
• Linking to the Area A page from another page's content (for this I also waited about 3 weeks before checking again, to give Google time to re-index)
• Making a change to the page content and waiting another 2-3 weeks
• Removing the page completely and recreating it with an alternative URL

The closest thing I have found to this issue is this StackOverflow question, but this particular franchisee has existed for almost a year; it used to appear in Google searches but no longer does. I'm guessing the Panda update wasn't too happy with something on the page, but it hasn't affected anything else on the site and I am at a loss for things to try. I would greatly appreciate any information or thoughts as to what could have caused this. Thanks.

Update
In line with Daniel Fukuda's answer below, I have followed some of his steps, but everything seems to check out alright:

HTTP Headers
HTTP/1.1 200 OK
Date => Tue, 25 Feb 2014 16:31:29 GMT
Server => Zope/(2.12.16, python 2.6.6, linux2) ZServer/1.1
Content-Length => 40078
Expires => Sat, 01 Jan 2000 00:00:00 GMT
Content-Type => text/html;charset=utf-8
Content-Language => en
Vary => Accept-Encoding
Connection => close

Robots <meta /> tag:
<meta name="robots" content="ALL"/>
I have updated this <meta /> tag to read content="INDEX" instead now.

robots.txt:
User-agent: *
Disallow:
User-Agent: Googlebot
Disallow: /*sendto_form$
Disallow: /*folder_factories$

Using site:THE_COMPANY.co.uk:
Searching for 'AREA A site:THE_COMPANY.co.uk' does not return the page, but regardless of that, searching just for site:THE_COMPANY.co.uk will not necessarily return every indexed page, or so I understand...

Update
It appears Google likes to drop pages from the index every now and then; despite my steps above, I left the site alone and the page reappeared in the SERPs by itself.
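
For anyone wanting to repeat the checks in the update above in one step, here is a small sketch (assuming Python 3 with the third-party requests and beautifulsoup4 packages; the URL is a placeholder standing in for the Area A page) that prints the indexability signals a crawler would see:

```python
# Fetch a page and report its status, X-Robots-Tag header, and robots meta tag.
import requests
from bs4 import BeautifulSoup

def check_indexability(url: str) -> None:
    resp = requests.get(url, timeout=10)
    print("Status:", resp.status_code)
    print("X-Robots-Tag:", resp.headers.get("X-Robots-Tag", "(not set)"))
    soup = BeautifulSoup(resp.text, "html.parser")
    robots = soup.find("meta", attrs={"name": "robots"})
    print("robots meta:", robots["content"] if robots else "(not set)")

check_indexability("https://www.the_company.co.uk/areaa")
```

A 200 status, no X-Robots-Tag, and a robots meta of ALL or INDEX together mean nothing on the page itself is blocking Google, which matches the conclusion above.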

    Read the article

  • Solaris 11.1: Encrypted Immutable Zones on (ZFS) Shared Storage

    - by darrenm
Solaris 11 brought both ZFS encryption and the Immutable Zones feature, and I've talked about the combination in the past. Solaris 11.1 adds a fully supported method of storing zones in their own ZFS pool on shared storage, so let's update things a little and put all three parts together.

When using an iSCSI (or other supported shared storage) target for a zone, we can either let the Zones framework set up the ZFS pool or we can do it manually beforehand and tell the Zones framework to use the one we made earlier. To enable encryption we have to take the second path, so that we can set up the pool with encryption before we start to install the zone on it.

We start by configuring the zone and specifying a rootzpool resource:

# zonecfg -z eizoss
Use 'create' to begin configuring a new zone.
zonecfg:eizoss> create
create: Using system default template 'SYSdefault'
zonecfg:eizoss> set zonepath=/zones/eizoss
zonecfg:eizoss> set file-mac-profile=fixed-configuration
zonecfg:eizoss> add rootzpool
zonecfg:eizoss:rootzpool> add storage \
  iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001
zonecfg:eizoss:rootzpool> end
zonecfg:eizoss> verify
zonecfg:eizoss> commit
zonecfg:eizoss>

Now let's create the pool and specify encryption:

# suriadm map \
  iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001
PROPERTY    VALUE
mapped-dev  /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0
# echo "zfscrypto" > /zones/p
# zpool create -O encryption=on -O keysource=passphrase,file:///zones/p eizoss \
  /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0
# zpool export eizoss

Note that the keysource above is just for this example; realistically you should probably use an Oracle Key Manager or some other better key storage, but that isn't the purpose of this example. Note, however, that it does need to be one of file://, https:// or pkcs11:, and not prompt, for the key location. Also note that we exported the newly created pool. The name we used here doesn't actually matter, because it will get set properly on import anyway.

So let's go ahead and do our install:

# zoneadm -z eizoss install -x force-zpool-import
Configured zone storage resource(s) from:
    iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001
Imported zone zpool: eizoss_rpool
Progress being logged to /var/log/zones/zoneadm.20121029T115231Z.eizoss.install
    Image: Preparing at /zones/eizoss/root.
    AI Manifest: /tmp/manifest.xml.ujaq54
    SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: eizoss
    Installation: Starting ...
    Creating IPS image
    Startup linked: 1/1 done
    Installing packages from:
        solaris
            origin: http://pkg.us.oracle.com/solaris/release/
    Please review the licenses for the following packages post-install:
        consolidation/osnet/osnet-incorporation (automatically accepted, not displayed)
    Package licenses may be viewed using the command:
        pkg info --license <pkg_fmri>
    DOWNLOAD      PKGS     FILES        XFER (MB)    SPEED
    Completed     187/187  33575/33575  227.0/227.0  384k/s
    PHASE         ITEMS
    Installing new actions 47449/47449
    Updating package state database   Done
    Updating image state              Done
    Creating fast lookup database     Done
    Installation: Succeeded
    Note: Man pages can be obtained by installing pkg:/system/manual
    done.
    Done: Installation completed in 929.606 seconds.
    Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process.
    Log saved in non-global zone as /zones/eizoss/root/var/log/zones/zoneadm.20121029T115231Z.eizoss.install

That was really all we had to do; when the install is done, boot it up as normal. The zone administrator has no direct access to the ZFS wrapping keys used for the encrypted pool the zone is stored on. Due to how inheritance works in ZFS, he can still create new encrypted datasets that use those wrapping keys (without them ever being inside a process in the zone), or he can create encrypted datasets inside the zone that use keys of his own choosing. The output below shows the two cases: rpool is inheriting the key material from the global zone (note we can see the value of the keysource property, but we don't use it inside the zone, nor does that path need to be, or is, accessible inside the zone), whereas rpool/export/home/bob has set keysource locally.

# zfs get encryption,keysource rpool rpool/export/home/bob
NAME                   PROPERTY    VALUE                       SOURCE
rpool                  encryption  on                          inherited from $globalzone
rpool                  keysource   passphrase,file:///zones/p  inherited from $globalzone
rpool/export/home/bob  encryption  on                          local
rpool/export/home/bob  keysource   passphrase,prompt           local

    Read the article

  • Openmatics Revolutionizes Fleet Management with Standards-Based Vehicle Telematics Platform

    - by Michael Snow
Openmatics s.r.o. was founded in 2010 as a subsidiary of ZF Friedrichshafen AG, a global player in driveline and chassis technology.

Oracle Customer: Openmatics s.r.o.
Location: Pilsen, Czech Republic
Industry: Automotive
Employees: 70

Its goal was to develop and operate a flexible, open telematics platform for automotive applications, independent of vehicle and component suppliers—recognizing that the fragmented telematics market was not meeting today’s fleet management needs. Openmatics provides a rich product portfolio, and customers can extend the platform, as required, to meet their needs. Partners and third parties can develop their own applications using the Openmatics software development kit and can sell them via the Openmatics app shop.

ZF Friedrichshafen AG is a global player in driveline and chassis technology. With 121 production companies and 650 service partners in 26 countries, ZF is among the top 10 largest automotive suppliers worldwide. Founded in 1915 to develop and produce transmissions for airships and vehicles, the group’s product offerings now include transmissions and steering systems as well as chassis components and complete axle systems and modules.

A word from Openmatics s.r.o.
“Oracle WebCenter Portal, together with the underlying Oracle Application Development Framework, provided the fundamental infrastructure for the Openmatics platform. Fleet managers can now reduce fuel consumption and operating costs, and more efficiently manage vehicle usage, maintenance, and safety. The standards-based platform allows third-party suppliers to deploy their own vehicle telematics services as Openmatics apps and creates a de facto standard for the automotive industry, independent from a single manufacturer or service provider.” – Gero Strobel, Head of Development, Openmatics s.r.o.
Challenges
• Create an industry standard for vehicle telematics by establishing a customizable platform that enables access to telematics information, such as current and past fuel consumption, through a web browser, to better meet automotive market and customer needs
• Reduce fleet-management costs by eliminating the need to invest in isolated telematics hardware and software solutions per vehicle brand and vehicle component manufacturer
• Establish an open platform where third-party providers—such as original equipment manufacturers (OEMs), insurers, fleet operators, and individual developers—can deploy their own vehicle telematics services
• Allow users to purchase targeted telematics services as single apps to reduce costs and ensure rapid growth of telematics services available on the platform
• Enable users to configure their telematics apps with ease to make sure the platform meets individual fleet management requirements, such as analyzing past and current fuel consumption of a truck fleet

Solutions
• Deployed Oracle WebCenter Portal as a foundation for Openmatics, a standards-based automotive telematics platform that provides next-generation fleet management with unified digital communication from and to vehicles on the move
• Used Oracle Application Development Framework as the development framework for Oracle WebCenter Portal’s components and services, providing developers with ready-to-use software development kits with application programming interfaces, design templates, and visual tools that accelerated time to market
• Used Oracle Enterprise Pack for Eclipse to simplify telematics application development in Java
• Enabled fleet monitoring by recording vehicle data—such as fuel consumption information—through onboard units, delivering the information to Oracle Database, and making it accessible through a customizable app portfolio on any web browser
• Stored vehicle telematics data—sent as encrypted information—in Oracle Database, ensuring data integrity and immediate availability for the platform’s telematics applications
• Enabled a wide range of telematics services suppliers, from vehicle component manufacturers to fleet application developers, to offer vehicle telematics services on the Openmatics platform, ensuring platform independence from OEMs
• Provided Openmatics customers with the means to individually select the automotive telematics services that are relevant to their business requirements, eliminating the need to pay for superfluous information and reducing fleet management costs

Oracle Products & Services
• Oracle Application Development Framework
• Oracle WebCenter Portal
• Oracle SOA Suite
• Oracle Enterprise Pack for Eclipse
• Oracle Database
• Oracle Consulting

    Read the article

  • JavaOne Session Report: “50 Tips in 50 Minutes for GlassFish Fans”

    - by Janice J. Heiss
At JavaOne 2012 on Monday, Oracle engineer Chris Kasso and technology evangelist Arun Gupta presented a head-spinning session (CON4701) in which they offered 50 tips for GlassFish fans. Kasso and Gupta alternated back and forth, each presenting 10 tips at a time. An audience of about (appropriately) 50 attentive and appreciative developers was on hand in what has to be one of the most information-packed sessions ever at JavaOne!

Aside: I experienced one of the quiet joys of JavaOne when, just before the session began, I spotted Java Champion and JavaOne Rock Star Adam Bien sitting nearby – Adam is someone I have been fortunate to know for many years.

GlassFish is a freely available, commercially supported Java EE reference implementation. The session prioritized quantity of tips over depth of information and offered tips intended for both seasoned and new users, meant to increase the range of functional options available to GlassFish users. The focus was on lesser-known dimensions of GlassFish. Attendees were encouraged to pursue tips that contained new information for them. All 50 tips can be accessed here. Below are several examples of more elaborate tips and a final practical tip on how to get in touch with these folks.

Tip #1: Using the login Command
* To execute a remote command with asadmin you must provide the admin's user name and password.
* The login command allows you to store the login credentials to be reused in subsequent commands.
* You can be logged into multiple servers (distinguished by host and port).
Example:
    % asadmin --host ouch login
    Enter admin user name [default: admin]>
    Enter admin password>
    Login information relevant to admin user name [admin]
    for host [ouch] and admin port [4848] stored at
    [/Users/ckasso/.asadminpass] successfully.
    Make sure that this file remains protected.
    Information stored in this file will be used by
    asadmin commands to manage the associated domain.
    Command login executed successfully.
    % asadmin --host ouch list-clusters
    c1 not running
    Command list-clusters executed successfully.

Tip #4: Using the AS_DEBUG Env Variable
* Environment variable to control client-side debug output.
* Exposes:
  – command processing info
  – the URL used to access the command: http://localhost:4848/__asadmin/uptime
  – the raw response from the server
Example:
    % export AS_DEBUG=true
    % asadmin uptime
    CLASSPATH= ./../glassfish/modules/admin-cli.jar
    Commands: [uptime]
    asadmin extension directory: /work/gf-3.1.2/glassfish3/glassfish/lib/asadm
    ------- RAW RESPONSE ---------
    Signature-Version: 1.0
    message: Up 7 mins 10 secs
    milliseconds_value: 430194
    keys: milliseconds
    milliseconds_name: milliseconds
    use-main-children-attribute: false
    exit-code: SUCCESS
    ------- RAW RESPONSE ---------

Tip #11: Using Password Aliases
* Some resources require a password to access (e.g. DB, JMS, etc.).
* The resource connector is defined in the domain.xml.
Example: Suppose the DB resource you wish to access requires an entry like this in the domain.xml:
    <property name="password" value="secretp@ssword"/>
But company policies do not allow you to store the password in the clear.
* Use password aliases to avoid storing the password in the domain.xml.
* Create a password alias:
    % asadmin create-password-alias DB_pw_alias
    Enter the alias password>
    Enter the alias password again>
    Command create-password-alias executed successfully.
* The password is stored in the domain's encrypted keystore.
* Now update the password value in the domain.xml:
    <property name="password" value="${ALIAS=DB_pw_alias}"/>

Tip #21: How to Start GlassFish as a Service
* Configuring a server to automatically start at boot can be tedious, and each platform does it differently.
* The create-service command makes this easy:
  – Windows: creates a Windows service
  – Linux: /etc/init.d script
  – Solaris: Service Management Facility (SMF) service
* You must execute create-service with admin privileges.
* It can be used for the DAS or instances.
* Try it first with the --dry-run option.
* There is an (unsupported) _delete-service.
Example:
    # asadmin create-service domain1
    The Service was created successfully. Here are the details:
    Name of the service: application/GlassFish/domain1
    Type of the service: Domain
    Configuration location of the service: /work/gf-3.1.2.2/glassfish3/glassfish/domains
    Manifest file location on the system: /var/svc/manifest/application/GlassFish/domain1_work_gf-3.1.2.2_glassfish3_glassfish_domains/Domain-service-smf.xml.
    You have created the service but you need to start it yourself. Here are the most typical Solaris commands of interest:
    * /usr/bin/svcs -a | grep domain1   // status
    * /usr/sbin/svcadm enable domain1   // start
    * /usr/sbin/svcadm disable domain1  // stop
    * /usr/sbin/svccfg delete domain1   // uninstall

Tip #34: Posting a Command via REST
* Use wget/curl to execute commands on the DAS (a Python equivalent of this call is sketched at the end of this entry).
Example: Deploying an application
    % curl -s -S \
        -H 'Accept: application/json' -X POST \
        -H 'X-Requested-By: anyvalue' \
        -F id=@/path/to/application.war \
        -F force=true http://localhost:4848/management/domain/applications/application
* Use @ before a file name to tell curl to send the file's contents.
* The force option tells GlassFish to force the deployment in case the application is already deployed.

Tip #46: Upgrading to a Newer Version
* Upgrade applications and configuration from an earlier version.
* Upgrade Tool: side-by-side upgrade
  – GUI: asupgrade
  – CLI: asupgrade --c
  – What happens? The older source domain is copied to the target domain directory, then asadmin start-domain --upgrade runs.
* Update Tool and pkg: in-place upgrade
  – GUI: updatetool, install all Available Updates
  – CLI: pkg image-update
  – Then upgrade the domain: asadmin start-domain --upgrade

Tip #50: How to reach us?
* GlassFish Forum: http://www.java.net/forums/glassfish/glassfish
* [email protected]
* @glassfish
* facebook.com/glassfish
* youtube.com/GlassFishVideos
* blogs.oracle.com/theaquarium

Arun Gupta acknowledged that their method of presentation was experimental and actively solicited feedback about the session. The best way to reach them is on the GlassFish user forum. In addition, check out Gupta’s new book, Java EE 6 Pocket Guide.
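
As promised under Tip #34, the same REST deployment can be driven from any HTTP client, not just curl. A sketch in Python (assuming the third-party requests package is installed; the paths and form fields mirror the curl example above):

```python
# Deploy a WAR to the DAS through the GlassFish REST admin API,
# mirroring: curl -F id=@app.war -F force=true http://localhost:4848/...
import requests

def deploy(war_path: str) -> None:
    with open(war_path, "rb") as war:
        resp = requests.post(
            "http://localhost:4848/management/domain/applications/application",
            headers={"Accept": "application/json", "X-Requested-By": "anyvalue"},
            files={"id": war},          # like curl's -F id=@/path/to/application.war
            data={"force": "true"},     # redeploy if the application already exists
        )
    print(resp.status_code, resp.text[:200])

deploy("/path/to/application.war")
```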

    Read the article

  • XNA WP7 Texture memory and ContentManager

    - by jlongstreet
I'm trying to get my WP7 XNA game's memory under control and under the 90MB limit for submission. One target I identified was UI textures, especially fullscreen ones, that would never get unloaded. So I moved UI texture loads to their own ContentManager so I can unload them. However, this doesn't seem to affect the value of Microsoft.Phone.Info.DeviceExtendedProperties.GetValue("ApplicationCurrentMemoryUsage"), so it doesn't look like the memory is actually being released.

Example: splash screens. In Game.LoadContent():

Application.Instance.SetContentManager("UI"); // set the current content manager
for (int i = 0; i < mSplashTextures.Length; ++i)
{
    // load the splash textures
    mSplashTextures[i] = Application.Instance.Content.Load<Texture2D>(msSplashTextureNames[i]);
}
// set the content manager back to the global one
Application.Instance.SetContentManager("Global");

When the initial load is done and the title screen starts, I do:

Application.Instance.GetContentManager("UI").Unload();

The three textures take about 6.5 MB of memory. Before unloading the UI ContentManager, I see that ApplicationCurrentMemoryUsage is at 34.29 MB. After unloading the ContentManager (and doing a full GC.Collect()), it's still at 34.29 MB. But after that, I load another fullscreen texture (for the title screen background) and memory usage still doesn't change. Could it be keeping the memory for these textures allocated and reusing it?

Edit: very simple test:

protected override void LoadContent()
{
    // Create a new SpriteBatch, which can be used to draw textures.
    spriteBatch = new SpriteBatch(GraphicsDevice);
    PrintMemUsage("Before texture load: ");
    // TODO: use this.Content to load your game content here
    red = this.Content.Load<Texture2D>("Untitled");
    PrintMemUsage("After texture load: ");
}

private void PrintMemUsage(string tag)
{
    Debug.WriteLine(tag + Microsoft.Phone.Info.DeviceExtendedProperties.GetValue("ApplicationCurrentMemoryUsage"));
}

protected override void Update(GameTime gameTime)
{
    // Allows the game to exit
    if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
        this.Exit();

    // TODO: Add your update logic here
    if (count++ == 100)
    {
        GC.Collect();
        GC.WaitForPendingFinalizers();
        PrintMemUsage("Before CM unload: ");
        this.Content.Dispose();
        GC.Collect();
        GC.WaitForPendingFinalizers();
        PrintMemUsage("After CM unload: ");
        red = null;
        spriteBatch.Dispose();
        GC.Collect();
        GC.WaitForPendingFinalizers();
        PrintMemUsage("After SpriteBatch Dispose(): ");
    }
    base.Update(gameTime);
}

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);

    // TODO: Add your drawing code here
    if (red != null)
    {
        spriteBatch.Begin();
        spriteBatch.Draw(red, new Vector2(0, 0), Color.White);
        spriteBatch.End();
    }
    base.Draw(gameTime);
}

This prints something like (it changes every time):

Before texture load: 7532544
After texture load: 10727424
Before CM unload: 9875456
After CM unload: 9953280
After SpriteBatch Dispose(): 9953280

    Read the article

  • Challenges in Corporate Reporting - New Independent Research

    - by ndwyouell
Earlier this year, Oracle and Accenture sponsored a global study on trends in financial close and reporting. We surveyed 1,123 finance professionals in large organizations in 12 countries around the world during February and March.

Financial Consolidation and Reporting is the most mature aspect of Enterprise Performance Management, with mainstream solutions having been around for over 30 years. But of course over this time there have been many changes and very significant increases in regulation. So just what is the current state of Financial Consolidation and Reporting in our major corporations across the world? We commissioned this independent research to find out. Highlights of the results are:

• Seeking change: Businesses recognize they need to invest in financial reporting to address the challenges they currently face. 47 percent of companies have made substantial investments over the last year in the financial close, filing, and reporting processes.

• Ineffective investments: Despite these investments, spreadsheets (72 percent) and e-mails (68 percent) are still being used daily to track and manage reporting, suggesting that new investments are falling short of expectations.

• Increased costs and uncertainty: The situation is so opaque that managers across the finance function are unable to fully understand the financial impact or cost implications of reporting, with 60 percent of respondents admitting they did not know the total cost of managing and publicizing their financial results.

• Persistent challenges: 68 percent of respondents admitted that they have inadequate visibility into reporting processes, while 84 percent of finance managers surveyed said they find it difficult to control the quality of financial data across the entire reporting process.

• Decreased effectiveness: 71 percent of finance managers feel their effectiveness is limited in some way by data-analysis-related issues, while 39 percent of C-level or VP-level respondents say their effectiveness is impaired by limited visibility.

• Missed deadlines: Due to late changes to the chart of accounts, 15 percent of global businesses have missed statutory filings, putting their companies at risk of financial penalties and potentially impacting share value.

The report makes it clear that investments made to date by these large organizations around the world have been uneven across the close, reporting, and filing processes, which has led to the challenges these organizations currently face in the overall process. Regardless of whether companies are using a variety of solutions or a single solution, the report shows they continue to witness increased costs, ineffectual data management, and missed reporting, which—in extreme circumstances—can impact a company’s corporate image and share value. The good news is that businesses realize these problems persist, and 86 percent of companies are likely to make a significant investment during the next five years to address them.

While they should invest, it is critical that they direct investments correctly to address the key issues this research identified:
• Improving data integrity
• Optimizing processes
• Integrating the extended financial close process

By addressing these issues, and with clear guidance on how to implement the correct business processes, infrastructure, and software solutions, finance teams will find that their reporting processes are much more effective, cost-efficient, and aligned with their performance expectations.

To get a copy of the full report: http://www.oracle.com/webapps/dialogue/ns/dlgwelcome.jsp?p_ext=Y&p_dlg_id=11747758&src=7300117&Act=92

To replay a webcast discussing the findings: http://www.cfo.com/webcast.cfm?webcast=14639438&pcode=ORA061912_ORA

    Read the article

  • Unable to install SQL 2008 on Windows 7

    - by Axel
SQL 2008 install hangs on Windows 7.

The story: Trying to install SQL 2008 on Windows 7 hangs on SqlEngineDBStartconfigAction_install_configrc_Cpu32.

What I tried:
• Uninstall hangs on validation
• Manual uninstall using msiinv.exe and msiexec /x works
• Added SQL service accounts to local admins – no help
• Turned off UAC – no help

Last lines in the setup log:
2010-04-01 16:18:05 SQLEngine: : Checking Engine checkpoint 'GetSqlServerProcessHandle'
2010-04-01 16:18:05 SQLEngine: --SqlServerServiceSCM: Waiting for nt event 'Global\sqlserverRecComplete' to be created
2010-04-01 16:18:07 SQLEngine: --SqlServerServiceSCM: Waiting for nt event 'Global\sqlserverRecComplete' or sql process handle to be signaled
2010-04-01 16:18:07 SQLEngine: : Checking Engine checkpoint 'WaitSqlServerStartEvents'
2010-04-01 16:18:53 Slp: Sco: Attempting to initialize script
2010-04-01 16:18:53 Slp: Sco: Attempting to initialize default connection string
2010-04-01 16:18:53 Slp: Sco: Attempting to set script connection protocol to NotSpecified
2010-04-01 16:18:53 Slp: Sco: Attempting to set script connection protocol to NamedPipes
2010-04-01 16:18:53 SQLEngine: --SqlDatabaseServiceConfig: Connection String: Data Source=\\.\pipe\SQLLocal\MSSQLSERVER;Initial Catalog=master;Integrated Security=True;Pooling=False;Network Library=dbnmpntw;Application Name=SqlSetup
2010-04-01 16:18:53 SQLEngine: : Checking Engine checkpoint 'ServiceConfigConnect'
2010-04-01 16:18:53 SQLEngine: --SqlDatabaseServiceConfig: Connecting to SQL....
2010-04-01 16:18:53 Slp: Sco: Attempting to connect script
2010-04-01 16:18:53 Slp: Connection string: Data Source=\\.\pipe\SQLLocal\MSSQLSERVER;Initial Catalog=master;Integrated Security=True;Pooling=False;Network Library=dbnmpntw;Application Name=SqlSetup

And now comes the fun part: when I open Configuration Manager I can see the service running. I enabled named pipes and TCP/IP and restarted the service. I'm able to connect to the server using an OLE DB connection, but not with the Native Client. And what I find suspicious is the following error in my app log:

.NET Runtime Optimization Service (clr_optimization_v2.0.50727_32) - Failed to compile: C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\Tools\VDT\DataProjects.dll. Error code = 0x8007000b

On MS Connect this is reported as a bug, but MS is unable to reproduce the problem, although when you search the fora I'm clearly not the only one seeing it. Any help is appreciated.

    Read the article

  • Configuring correct port for Oozie (invoking PIG script) in Cloudera Hue

    - by user2985324
I am new to the CDH4 Oozie workflow editor. While trying to invoke a Pig script from the Oozie workflow editor, I am getting the following error:

HadoopAccessorException: E0900: Jobtracker [mymachine:8032] not allowed, not in Oozies whitelist

It looks like Oozie is submitting the job to the YARN port (8032). I want it to submit to port 8021 (the MR jobtracker). Can someone help me identify where to set the job tracker URL or port so that Oozie picks up the correct one (using Hue or Cloudera Manager)? Previously I tried the following, but none of them helped:

1. Modified the /user/hue/oozie/workspaces/../workflow.xml file. However, it gets overwritten when I submit the job from the workflow editor.

2. In Cloudera Manager, under Oozie -- Configuration -- Oozie Server (advanced) -- Oozie Server Configuration Safety Valve for oozie-site.xml, I set the following and restarted the Oozie service:

<property>
  <name>oozie.service.HadoopAccessorService.nameNode.whitelist</name>
  <value>mymachine:8020</value>
</property>
<property>
  <name>oozie.service.HadoopAccessorService.jobTracker.whitelist</name>
  <value>mymachine:8021</value>
</property>

3. Tried to override the 'jobTracker' property while configuring the Pig task. This appears as follows in the workflow file; however, it doesn't take effect (or doesn't override) and Oozie still uses port 8032:

<global>
  <configuration>
    <property>
      <name>jobTracker</name>
      <value>mymachine:8021</value>
    </property>
  </configuration>
</global>

I am using CDH4. Thanks for looking into my question.

    Read the article

< Previous Page | 91 92 93 94 95 96 97 98 99 100 101 102  | Next Page >