Search Results

Search found 6152 results on 247 pages for 'known'.


  • Error Handling Examples (C#)

    “The purpose of reviewing the Error Handling code is to assure that the application fails safely under all possible error conditions, expected and unexpected. No sensitive information is presented to the user when an error occurs.” (OWASP, 2011)

    No Error Handling

    The absence of error handling is still a form of error handling. Based on the code in Figure 1, if an error occurred and was not handled within either the ReadXml or BuildRequest methods, the error would bubble up to the Search method. Since this method does not handle any exceptions, the error will then bubble up the call stack. If this continues and the error is never handled within the application, the environment in which the application is running will notify the user that an error occurred, in a manner that depends on the type of application.

    Figure 1: No Error Handling

    public DataSet Search(string searchTerm, int resultCount)
    {
        DataSet dt = new DataSet();
        dt.ReadXml(BuildRequest(searchTerm, resultCount));
        return dt;
    }

    Generic Error Handling

    One simple way to add error handling is to catch all errors by default. If you examine the code in Figure 2, you will see a try-catch block. On April 6th, 2010, Louis Lazaris clearly described the try-catch statement by defining both the try and catch aspects of the statement. “The try portion is where you would put any code that might throw an error. In other words, all significant code should go in the try section. The catch section will also hold code, but that section is not vital to the running of the application. So, if you removed the try-catch statement altogether, the section of code inside the try part would still be the same, but all the code inside the catch would be removed.” (Lazaris, 2010) He also states that any error that occurs in the try section stops the execution of the try section and redirects execution to the catch section. The catch section receives an object containing information about the error that occurred so that the system can handle the error gracefully. When errors occur, applications commonly log them in some form: an email, a database entry, a web service call, a log file, or just an error message displayed to the user. Depending on the error, an application can sometimes recover, while other errors force the application to close.

    Figure 2: Generic Error Handling

    public DataSet Search(string searchTerm, int resultCount)
    {
        DataSet dt = new DataSet();
        try
        {
            dt.ReadXml(BuildRequest(searchTerm, resultCount));
        }
        catch (Exception ex)
        {
            // Handle all Exceptions
        }
        return dt;
    }

    Error Specific Error Handling

    Like generic error handling, error-specific error handling allows for the catching of specific known errors that may occur. For example, wrapping a try-catch statement around a SOAP web service call would allow the application to handle any error generated by the SOAP web service. Now, suppose the system needed to send a message to the web service provider every time a SOAP error occurred, but did not need to notify them of any other type of error, such as a network timeout. That would be very tedious to accomplish using the generic error handling methodology, which brings us to the use case for error-specific error handling: it allows the try-catch statement to catch various types of errors and handle each differently, depending on the type of error that occurred.
    In Figure 3, the code handles TimeoutException differently from all other errors. This allows for specific error handling for each type of known error, while still allowing for error handling of any unknown error that may occur during the execution of the try-catch statement.

    Figure 3: Error Specific Error Handling

    public DataSet Search(string searchTerm, int resultCount)
    {
        DataSet dt = new DataSet();
        try
        {
            dt.ReadXml(BuildRequest(searchTerm, resultCount));
        }
        catch (TimeoutException ex)
        {
            // Handle TimeoutException only
        }
        catch (Exception)
        {
            // Handle all other Exceptions
        }
        return dt;
    }
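    The logging described earlier fits naturally into these catch blocks. As a rough sketch only (the Logger class here is hypothetical, not part of the original figures), a known error can be logged and swallowed while an unknown error is logged and rethrown:

    public DataSet Search(string searchTerm, int resultCount)
    {
        DataSet dt = new DataSet();
        try
        {
            dt.ReadXml(BuildRequest(searchTerm, resultCount));
        }
        catch (TimeoutException ex)
        {
            // Known, recoverable error: log it and return the empty DataSet.
            Logger.Warn("Search timed out: " + ex.Message);
        }
        catch (Exception ex)
        {
            // Unknown error: log it, then rethrow so it bubbles up the call stack.
            Logger.Error("Unexpected error during search", ex);
            throw;
        }
        return dt;
    }

    Note that the bare throw statement (rather than throw ex) preserves the original stack trace, which keeps the logged information useful.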

    Read the article

  • How to fix Software Center crashes?

    - by shandna towns
    E: Type '<!DOCTYPE' is not known on line 1 in source list /etc/apt/sources.list.d/medibuntu.list
    E: The list of sources could not be read.

    shandanatowns@ubuntu:~$ software-center
    2012-09-25 12:23:35,115 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None'
    2012-09-25 12:23:35,123 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True
    (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:34:20: Not using units is deprecated. Assuming 'px'.
    (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:34:22: Not using units is deprecated. Assuming 'px'.
    (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:56:20: Not using units is deprecated. Assuming 'px'.
    (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:56:22: Not using units is deprecated. Assuming 'px'.
    (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:60:20: Not using units is deprecated. Assuming 'px'.
    (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:60:22: Not using units is deprecated. Assuming 'px'.
    2012-09-25 12:23:35,472 - softwarecenter.backend.reviews - WARNING - Could not get usefulness from server, no username in config file
    2012-09-25 12:23:35,477 - softwarecenter.fixme - WARNING - logs to the root logger: '('/usr/lib/python2.7/dist-packages/gi/importer.py', 51, 'find_module')'
    2012-09-25 12:23:35,477 - root - ERROR - Could not find any typelib for LaunchpadIntegration
    2012-09-25 12:23:35,605 - softwarecenter.ui.gtk3.app - INFO - show_available_packages: search_text is '', app is None.
    2012-09-25 12:23:35,987 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open()
    Traceback (most recent call last):
      File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 257, in open
        self._cache = apt.Cache(progress)
      File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 102, in __init__
        self.open(progress)
      File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 149, in open
        self._list.read_main_list()
    SystemError: E:Type '<!DOCTYPE' is not known on line 1 in source list /etc/apt/sources.list.d/medibuntu.list
    2012-09-25 12:23:37,000 - softwarecenter.db.enquire - ERROR - _get_estimate_nr_apps_and_nr_pkgs failed
    Traceback (most recent call last):
      File "/usr/share/software-center/softwarecenter/db/enquire.py", line 115, in _get_estimate_nr_apps_and_nr_pkgs
        tmp_matches = enquire.get_mset(0, len(self.db), None, xfilter)
      File "/usr/share/software-center/softwarecenter/db/appfilter.py", line 89, in __call__
        if (not pkgname in self.cache and
      File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 277, in __contains__
        return self._cache.__contains__(k)
    AttributeError: 'NoneType' object has no attribute '__contains__'
    Traceback (most recent call last):
      File "/usr/bin/software-center", line 182, in <module>
        app.run(args)
      File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 1385, in run
        self.show_available_packages(args)
      File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 1323, in show_available_packages
        self.view_manager.set_active_view(ViewPages.AVAILABLE)
      File "/usr/share/software-center/softwarecenter/ui/gtk3/session/viewmanager.py", line 151, in set_active_view
        view_widget.init_view()
      File "/usr/share/software-center/softwarecenter/ui/gtk3/panes/availablepane.py", line 173, in init_view
        self.cache, self.db, self.icons, self.apps_filter)
      File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 81, in __init__
        self.build()
      File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 322, in build
        self._build_homepage_view()
      File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 120, in _build_homepage_view
        self._append_whats_new()
      File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 251, in _append_whats_new
        whats_new_cat = self._update_whats_new_content()
      File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 236, in _update_whats_new_content
        docs = whats_new_cat.get_documents(self.db)
      File "/usr/share/software-center/softwarecenter/db/categories.py", line 132, in get_documents
        nonblocking_load=False)
      File "/usr/share/software-center/softwarecenter/db/enquire.py", line 317, in set_query
        self._blocking_perform_search()
      File "/usr/share/software-center/softwarecenter/db/enquire.py", line 212, in _blocking_perform_search
        matches = enquire.get_mset(0, self.limit, None, xfilter)
      File "/usr/share/software-center/softwarecenter/db/appfilter.py", line 89, in __call__
        if (not pkgname in self.cache and
      File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 277, in __contains__
        return self._cache.__contains__(k)
    AttributeError: 'NoneType' object has no attribute '__contains__'

    Read the article

  • #MIX Day 2 Keynote: Put the Phone Down and Listen

    - by andrewbrust
    MIX day 1’s keynote was all about Windows Phone 7 (WP7). MIX day 2’s was a reminder that Microsoft has much more going on than a new mobile platform. Steven Sinofsky, Scott Guthrie, Doug Purdy and others showed us lots of other good things coming from Microsoft, mostly in the developer stack, that we certainly shouldn’t overlook. These included the forthcoming IE9, its new JavaScript compiling engine, and support for HTML 5 that takes full advantage of the local PC’s resources, including the graphics processing unit. The announcements also included important additions to ASP.NET (and one subtraction, in the form of lighter-weight ViewState technology), including almost-obsessive jQuery support. That support is so good that John Resig, creator of the jQuery project, came on stage to tell us so. Then Scott Guthrie told us that Microsoft would be contributing code to the open source jQuery project. This is not your father’s Microsoft, it would seem.

    But to me, the crown jewel in today’s keynote was the set of announcements around the Open Data Protocol (OData). OData is nothing more than the protocol side of “Astoria” (now known as WCF Data Services, and until recently called ADO.NET Data Services) separated out and opened up as a platform-neutral standard. The 2009 Professional Developers Conference (PDC) was Microsoft’s vehicle for first announcing OData, as well as project “Dallas,” an Azure-based cloud platform for publishing commercial OData feeds. And we had already known about “bridges” for Astoria (and thus OData) for PHP and Java. We also knew that PowerPivot, Microsoft’s forthcoming self-service BI plug-in for Excel 2010, will consume OData feeds and then facilitate drill-down analysis of their data. And we recently found out that SQL Reporting Services reports (in the forthcoming SQL Server 2008 R2) and SharePoint 2010 lists will be consumable in OData format as well.

    So what was left to announce? How about OData clients for Palm webOS and Apple iPhone/Objective-C? How about the release to open source of .NET’s OData client? Or the ability to publish any SQL Azure database as an OData service by simply checking a checkbox at deployment? Maybe even a Silverlight tool (code-named “Houston”) to create SQL Azure databases (and then publish them as OData) right in the browser? And what if you could get at Netflix’s entire catalog in OData format? You can - just go to http://odata.netflix.com/Catalog/ and see for yourself. Douglas Purdy, who made these announcements, said “we want OData to work on as many devices and platforms as possible.” After all the cross-platform OData announcements made in about half a year’s time, it’s hard to dispute this.

    When Microsoft plays the data card, and plays it well, watch out, because data programmability is the company’s heritage. I’ll be discussing OData at length in my April Redmond Review column. I wrote that column two weeks ago, and was convinced then that OData was a big deal. Today upped the ante even more. And following the Windows Phone 7 euphoria of yesterday was, I think, smart timing. The phone, if it’s successful, will be successful because it’s a good developer platform play. And developer platforms (as well as their creators) are most successful when they have a good data strategy. OData is very Silverlight-friendly, and that means it’s WP7-friendly too. Phone plus service-oriented data is a one-two punch. A phone platform without data would have been a phone with no signal.
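    One detail worth underscoring is that an OData feed is just AtomPub/XML (or JSON) over plain HTTP, which is what makes all those cross-platform clients possible. As a minimal illustrative sketch (assuming the Netflix feed is still live, and using only basic .NET classes rather than the dedicated OData client library):

    using System;
    using System.Net;

    class ODataPeek
    {
        static void Main()
        {
            // An OData feed is queried with plain HTTP GETs; $top=1 limits the result.
            using (var client = new WebClient())
            {
                string atom = client.DownloadString("http://odata.netflix.com/Catalog/Titles?$top=1");
                Console.WriteLine(atom); // prints one catalog entry as Atom XML
            }
        }
    }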

    Read the article

  • Postfix log.... spam attempt?

    - by luri
    I have some weird entries in my mail.log. What I'd like to ask is whether Postfix is correctly rejecting (according to the main.cf attached below) what seem to be relay attempts, presumably for spamming, or whether I can enhance its security somehow.

    Feb 2 11:53:25 MYSERVER postfix/smtpd[9094]: connect from catv-80-99-46-143.catv.broadband.hu[80.99.46.143]
    Feb 2 11:53:25 MYSERVER postfix/smtpd[9094]: warning: non-SMTP command from catv-80-99-46-143.catv.broadband.hu[80.99.46.143]: GET / HTTP/1.1
    Feb 2 11:53:25 MYSERVER postfix/smtpd[9094]: disconnect from catv-80-99-46-143.catv.broadband.hu[80.99.46.143]
    Feb 2 11:56:45 MYSERVER postfix/anvil[9097]: statistics: max connection rate 1/60s for (smtp:80.99.46.143) at Feb 2 11:53:25
    Feb 2 11:56:45 MYSERVER postfix/anvil[9097]: statistics: max connection count 1 for (smtp:80.99.46.143) at Feb 2 11:53:25
    Feb 2 11:56:45 MYSERVER postfix/anvil[9097]: statistics: max cache size 1 at Feb 2 11:53:25
    Feb 2 12:09:19 MYSERVER postfix/smtpd[9302]: connect from vs148181.vserver.de[62.75.148.181]
    Feb 2 12:09:19 MYSERVER postfix/smtpd[9302]: warning: non-SMTP command from vs148181.vserver.de[62.75.148.181]: GET / HTTP/1.1
    Feb 2 12:09:19 MYSERVER postfix/smtpd[9302]: disconnect from vs148181.vserver.de[62.75.148.181]
    Feb 2 12:12:39 MYSERVER postfix/anvil[9304]: statistics: max connection rate 1/60s for (smtp:62.75.148.181) at Feb 2 12:09:19
    Feb 2 12:12:39 MYSERVER postfix/anvil[9304]: statistics: max connection count 1 for (smtp:62.75.148.181) at Feb 2 12:09:19
    Feb 2 12:12:39 MYSERVER postfix/anvil[9304]: statistics: max cache size 1 at Feb 2 12:09:19
    Feb 2 14:17:02 MYSERVER postfix/smtpd[10847]: connect from unknown[202.46.129.123]
    Feb 2 14:17:02 MYSERVER postfix/smtpd[10847]: warning: non-SMTP command from unknown[202.46.129.123]: GET / HTTP/1.1
    Feb 2 14:17:02 MYSERVER postfix/smtpd[10847]: disconnect from unknown[202.46.129.123]
    Feb 2 14:20:22 MYSERVER postfix/anvil[10853]: statistics: max connection rate 1/60s for (smtp:202.46.129.123) at Feb 2 14:17:02
    Feb 2 14:20:22 MYSERVER postfix/anvil[10853]: statistics: max connection count 1 for (smtp:202.46.129.123) at Feb 2 14:17:02
    Feb 2 14:20:22 MYSERVER postfix/anvil[10853]: statistics: max cache size 1 at Feb 2 14:17:02
    Feb 2 20:57:33 MYSERVER postfix/smtpd[18452]: warning: 95.110.224.230: hostname host230-224-110-95.serverdedicati.aruba.it verification failed: Name or service not known
    Feb 2 20:57:33 MYSERVER postfix/smtpd[18452]: connect from unknown[95.110.224.230]
    Feb 2 20:57:33 MYSERVER postfix/smtpd[18452]: lost connection after CONNECT from unknown[95.110.224.230]
    Feb 2 20:57:33 MYSERVER postfix/smtpd[18452]: disconnect from unknown[95.110.224.230]
    Feb 2 21:00:53 MYSERVER postfix/anvil[18455]: statistics: max connection rate 1/60s for (smtp:95.110.224.230) at Feb 2 20:57:33
    Feb 2 21:00:53 MYSERVER postfix/anvil[18455]: statistics: max connection count 1 for (smtp:95.110.224.230) at Feb 2 20:57:33
    Feb 2 21:00:53 MYSERVER postfix/anvil[18455]: statistics: max cache size 1 at Feb 2 20:57:33
    Feb 2 21:13:44 MYSERVER pop3d: Connection, ip=[::ffff:219.94.190.222]
    Feb 2 21:13:44 MYSERVER pop3d: LOGIN FAILED, user=admin, ip=[::ffff:219.94.190.222]
    Feb 2 21:13:50 MYSERVER pop3d: LOGIN FAILED, user=test, ip=[::ffff:219.94.190.222]
    Feb 2 21:13:56 MYSERVER pop3d: LOGIN FAILED, user=danny, ip=[::ffff:219.94.190.222]
    Feb 2 21:14:01 MYSERVER pop3d: LOGIN FAILED, user=sharon, ip=[::ffff:219.94.190.222]
    Feb 2 21:14:07 MYSERVER pop3d: LOGIN FAILED, user=aron, ip=[::ffff:219.94.190.222]
    Feb 2 21:14:12 MYSERVER pop3d: LOGIN FAILED, user=alex, ip=[::ffff:219.94.190.222]
    Feb 2 21:14:18 MYSERVER pop3d: LOGIN FAILED, user=brett, ip=[::ffff:219.94.190.222]
    Feb 2 21:14:24 MYSERVER pop3d: LOGIN FAILED, user=mike, ip=[::ffff:219.94.190.222]
    Feb 2 21:14:29 MYSERVER pop3d: LOGIN FAILED, user=alan, ip=[::ffff:219.94.190.222]
    Feb 2 21:14:35 MYSERVER pop3d: LOGIN FAILED, user=info, ip=[::ffff:219.94.190.222]
    Feb 2 21:14:41 MYSERVER pop3d: LOGIN FAILED, user=shop, ip=[::ffff:219.94.190.222]
    Feb 3 06:49:29 MYSERVER postfix/smtpd[25834]: warning: 71.6.142.196: hostname db4142196.aspadmin.net verification failed: Name or service not known
    Feb 3 06:49:29 MYSERVER postfix/smtpd[25834]: connect from unknown[71.6.142.196]
    Feb 3 06:49:29 MYSERVER postfix/smtpd[25834]: lost connection after CONNECT from unknown[71.6.142.196]
    Feb 3 06:49:29 MYSERVER postfix/smtpd[25834]: disconnect from unknown[71.6.142.196]
    Feb 3 06:52:49 MYSERVER postfix/anvil[25837]: statistics: max connection rate 1/60s for (smtp:71.6.142.196) at Feb 3 06:49:29
    Feb 3 06:52:49 MYSERVER postfix/anvil[25837]: statistics: max connection count 1 for (smtp:71.6.142.196) at Feb 3 06:49:29
    Feb 3 06:52:49 MYSERVER postfix/anvil[25837]: statistics: max cache size 1 at Feb 3 06:49:29

    I have Postfix 2.7.1-1 running on Ubuntu 10.10. This is my (modified for privacy) main.cf:

    smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
    biff = no
    append_dot_mydomain = no
    readme_directory = no
    smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
    smtpd_tls_key_file = /etc/ssl/private/smtpd.key
    myhostname = mymailserver.org
    alias_maps = hash:/etc/aliases
    alias_database = hash:/etc/aliases
    myorigin = /etc/mailname
    mydestination = mymailserver.org, MYSERVER, localhost
    relayhost =
    mynetworks = 127.0.0.0/8, 192.168.1.0/24
    mailbox_size_limit = 0
    recipient_delimiter = +
    inet_interfaces = all
    inet_protocols = all
    home_mailbox = Maildir/
    smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
    mailbox_command =
    smtpd_sasl_local_domain =
    smtpd_sasl_auth_enable = yes
    smtpd_sasl_security_options = noanonymous
    broken_sasl_auth_clients = yes
    smtpd_tls_security_level = may
    smtpd_tls_auth_only = no
    smtp_tls_note_starttls_offer = yes
    smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
    smtpd_tls_loglevel = 1
    smtpd_tls_received_header = yes
    smtpd_tls_session_cache_timeout = 3600s
    tls_random_source = dev:/dev/urandom
    smtp_tls_security_level = may

    Read the article

  • Few events I'm speaking at in early 2013

    - by Mladen Prajdic
    2013 has started great and the SQL community is already brimming with events. At some of these events you can come say hi. I'll be glad you do! These are the events, with dates and locations, that I know I'll be speaking at so far.

    February 16th: SQL Saturday #198 - Vancouver, Canada
    The session I'll present in Vancouver is SQL Impossible: Restoring/Undeleting a table. Yes, you read the title right. No, it's not about the usual "one table per partition" and "restore full backup then copy the data over" methods. No, there are no 3rd party tools involved. Just you and your SQL Server. Yes, it's crazy. No, it's not for production purposes. And yes, that's why it's so much fun. Prepare to dive into the world of data pages, log records, deletes, truncates and backups and how it all works together to get your table back from the endless void. Want to know more? Come and see! This is an advanced level session where we'll dive into the internals of data pages, transaction log records and page restores.

    March 8th-9th: SQL Saturday #194 - Exeter, UK
    In Exeter I'll be presenting twice. On the first day I'll have a full-day precon titled From SQL Traces to Extended Events - The next big switch. This pre-con will give you insight into both of the current tracing technologies in SQL Server. The old SQL Trace, which has served us well over the past 10 or so years, is on its way out because the overhead and detail it produces are no longer enough to deal with today's loads. The new Extended Events are a lightweight tracing mechanism built directly into the SQLOS, giving us information SQL Trace just couldn't. They were designed and built with performance in mind, and it shows. Mastering Extended Events requires learning at least one new skill: XML querying. The second session I'll have on Saturday is titled SQL Injection from website to SQL Server. SQL Injection is still one of the biggest reasons various websites and applications get hacked. The solution, as everyone tells us, is simple: use SQL parameters. But is that enough? In this session we'll look at how an attacker would go about using SQL Injection to gain access to your database, see its schema and data, take over the server, upload files and do various other mischief on your domain. This is a fun session that always brings out a few laughs in the audience because they didn't realize what could be done.

    April 23rd-25th: NTK conference - Bled, Slovenia (Slovenian website only)
    This is a conference with history. This year marks its 18th year running. It's a relatively large IT conference that focuses on various Microsoft technologies like .Net, Azure, SQL Server, Exchange, Security, etc. The main session language is Slovenian, but this is slowly changing, so it's becoming more interesting for foreign attendees. This year it's happening in the beautiful town of Bled in the Alps. The scenery alone is worth the visit, wouldn't you agree? And this year there are quite a few well-known speakers present! My session title isn't known yet.

    May 2nd-4th: SQL Bits XI - Nottingham, UK
    SQL Bits is the largest SQL Server conference in Europe. It's a 3-day conference with top speakers and content all dedicated to SQL Server. The session I'll present here is an hour-long version of the precon I'll give in Exeter: From SQL Traces to Extended Events - The next big switch. The session description is the same as for the Exeter precon, but we'll focus more on how the Extended Events work, with only a brief overview of the old SQL Trace architecture.

    Read the article

  • Why some links appear in a new tab, others in a new window?

    - by SAMIR BHOGAYTA
    Originally, it was made to resolve problems on IE8 32 bits when you use a 32-bit OS. I changed the "%ProgramFiles(x86)%" variable, and now my issue is resolved for IE8 32 bits on my Windows 7 64 bits. Please try it, and tell me if everything works for you.

    For Windows 7 64 bits:

    1 - Create a new notepad document and paste this text:

    @echo off
    echo.
    echo IEREREG Version 1.07 for IE8 27.03.2009
    echo by Kai Schaetzl http://iefaq.info
    echo installs and registers (if suitable) all DLLs known to be used by IE8.
    echo should only take a few seconds, but please be patient
    echo.
    REM ******************************
    echo registering IE files
    REM IE files (= part of setup)
    regsvr32 /s /i browseui.dll
    REM regsvr32 /s /i browseui.dll,NI (unnecessary)
    regsvr32 /s corpol.dll
    regsvr32 /s dxtmsft.dll
    regsvr32 /s dxtrans.dll
    REM simple HTML Mail API
    regsvr32 /s "%ProgramFiles(x86)%\internet explorer\hmmapi.dll"
    REM group policy snap-in
    regsvr32 /s ieaksie.dll
    REM smart screen
    regsvr32 /s ieapfltr.dll
    REM ieak branding
    regsvr32 /s iedkcs32.dll
    REM dev tools
    regsvr32 /s "%ProgramFiles(x86)%\internet explorer\iedvtool.dll"
    regsvr32 /s iepeers.dll
    REM Symptom: IE8 closes immediately on launch, missing from IE7
    regsvr32 /s "%ProgramFiles(x86)%\internet explorer\ieproxy.dll"
    REM no install point anymore
    REM regsvr32 /s /i iesetup.dll
    REM no reg point anymore
    REM regsvr32 /s imgutil.dll
    regsvr32 /s /i /n inetcpl.cpl
    REM no install point anymore
    REM regsvr32 /s /i inseng.dll
    regsvr32 /s jscript.dll
    REM license manager
    regsvr32 /s licmgr10.dll
    REM regsvr32 /s msapsspc.dll
    REM regsvr32 /s mshta.exe
    REM VS debugger
    regsvr32 /s msdbg2.dll
    REM no install point anymore
    REM regsvr32 /s /i mshtml.dll
    regsvr32 /s mshtmled.dll
    regsvr32 /s msident.dll
    REM no reg point anymore
    REM regsvr32 /s msrating.dll
    REM multimedia timer
    regsvr32 /s mstime.dll
    REM no install point anymore
    REM regsvr32 /s /i occache.dll
    REM process debug manager
    regsvr32 /s "%ProgramFiles(x86)%\internet explorer\pdm.dll"
    REM no reg point anymore
    REM regsvr32 /s pngfilt.dll
    REM regsvr32 /s /i setupwbv.dll (not there anymore!)
    regsvr32 /s tdc.ocx
    regsvr32 /s /i urlmon.dll
    REM regsvr32 /s /i urlmon.dll,NI,HKLM
    regsvr32 /s vbscript.dll
    REM VML renderer
    regsvr32 /s "%CommonProgramFiles%\microsoft shared\vgx\vgx.dll"
    REM no install point anymore
    REM regsvr32 /s /i webcheck.dll
    regsvr32 /s /i /n wininet.dll
    REM ******************************
    echo registering system files
    REM additional system dlls known to be used by IE
    REM added 11.05.2006 Symptom: Add-Ons-Manager menu entry is present but nothing happens
    regsvr32 /s extmgr.dll
    REM added 12.05.2006 Symptom: Javascript links don't work (Robin Walker)
    REM .NET hub file
    regsvr32 /s mscoree.dll
    REM added 23.03.2009 Symptom: Find on this page is blank
    regsvr32 /s oleacc.dll
    REM added 24.03.2009 Symptom: Printing problems, open in new window
    regsvr32 /s ole32.dll
    REM mscorier.dll
    REM mscories.dll
    REM Symptom: open in new tab/window not working
    regsvr32 /s actxprxy.dll
    regsvr32 /s asctrls.ocx
    regsvr32 /s cdfview.dll
    regsvr32 /s comcat.dll
    regsvr32 /s /i /n comctl32.dll
    regsvr32 /s cryptdlg.dll
    regsvr32 /s /i /n digest.dll
    regsvr32 /s dispex.dll
    regsvr32 /s hlink.dll
    regsvr32 /s mlang.dll
    regsvr32 /s mobsync.dll
    regsvr32 /s /i msieftp.dll
    REM regsvr32 /s msnsspc.dll #no entry point
    regsvr32 /s msr2c.dll
    regsvr32 /s msxml.dll
    regsvr32 /s oleaut32.dll
    REM regsvr32 /s plugin.ocx #no entry point
    regsvr32 /s proctexe.ocx
    REM plus DllRegisterServerEx ExA ExW ... ?
    regsvr32 /s /i scrobj.dll
    REM shdocvw.dll hasn't been updated for IE7 and IE8, it still registers itself for the Windows Internet Controls
    regsvr32 /s /i shdocvw.dll
    regsvr32 /s sendmail.dll
    REM ******************************
    REM PKI/crypto functionality
    REM initpki can take very long to run and is rarely a problem
    REM if there are problems with crypto, SSL, certificates
    REM remove the three following REMs from the lines
    REM echo We are almost done except one crypto file
    REM echo but this will take very long, be patient!
    REM regsvr32 /s /i:A initpki.dll
    REM ******************************
    REM tabbed browser, do at the end, why originally with /n ?
    regsvr32 /s /i ieframe.dll
    REM ******************************
    echo correcting bugs in the registry
    REM do some corrective work
    REM Symptom: new tabs page cannot display content because it cannot access the controls (added 27. 3.2009)
    REM This is a result of a bug in shdocvw.dll (see above), probably only on Windows XP
    reg add "HKCR\TypeLib\{EAB22AC0-30C1-11CF-A7EB-0000C05BAE0B}\1.1\0\win32" /ve /t REG_SZ /d %systemroot%\system32\ieframe.dll /f
    REM ******************************
    echo all tasks have been finished
    echo.
    pause

    2 - Close all your IE windows and processes.

    3 - Save your document, on your Desktop for example, with the .bat extension. Right-click on it, and select "Run as administrator".

    4 - Test if this tip resolved your issue by opening IE.

    If you use a Windows 32-bit version, please replace %ProgramFiles(x86)% with %ProgramFiles% in your .bat file.

    Read the article

  • The Value of SOA Specialization - Fujitsu

    - by Jürgen Kress
    Thanks for the nice ink!

    The Value of Specialization

    In my last post I talked about Fujitsu's achievement in obtaining SOA and other specializations, but I have heard murmurings from other partners about what exactly the value is. I think Oracle have to do more to advertise the benefits to customers; we need to see customers asking for specialization for it to really work, but Oracle have made great promises about only recommending those partners who are specialized. For us there was another benefit. Oracle was sponsoring the 3rd Annual SOA Symposium in Berlin and invited us, as their first specialized partner, to take part. There is a great blog about the symposium on the SOA Community blog site. This is real commitment from Oracle, and we have other marketing opportunities being worked on with Jürgen. This does generate leads, so my message to other Oracle partners is: you need to do this, it is worthwhile.

    Fujitsu - First SOA Specialized Partner Globally

    Just before Oracle Open World I found out that Fujitsu had achieved the first SOA Specialization globally. I think most partners know what the requirements are for specialization, and that in itself is challenging, but the bureaucracy around the actual submission is an exercise in tenacity. I won't go into that now; I have had my dig at Oracle this month, but enough to say the process could be improved. As a Platinum partner we needed 5 specializations, and we decided to go for SOA first. The reasoning behind this is that our Oracle practice is known for being applications-centric. We have always had an excellent technical capability, but no one ever talked about that; it was just part and parcel of an implementation. However, today we have just as many bids that are technology-led as there are applications-led, so it seemed a good plan to work on the areas we were not known for. We appointed a capability lead to be responsible for putting the team through the training and testing, and Rosemary (Kell) was excellent; she ensured that everyone was on track and that it wasn't just put onto the 'to do' list. In Fujitsu, everyone in the Oracle practice has an objective to achieve the competency tests in their area, so achieving the 2 presales, 2 sales and 1 support was no problem at all. We actually had 22 with the support capability proficiency. The implementation specialist exams are much harder, more like OCP in the database area. We had help from the Oracle SOA Community; Jürgen Kress, who runs this in EMEA, is really motivational. At the time we started, SOA was a beta exam, which means you do not get the results immediately, but again we put forward more than we needed. Manjit Chopra, Sukhraj Sahota, Emely Patra, Ian Scorrer and Sunny Sidhu all took the exam and eventually got the results they wanted: they had passed. Congratulations! Here is Jürgen explaining why specialization is important.

    After the tests came the submissions, where you need to include deals and experience; this was my bit, persuading Oracle we really deserved the specialization. Finally we got the news we had been awarded the specialization, and a few days later that we were first globally. I am very proud. However there is no rest for the wicked, and we plodded on to make the 5 specializations needed for Platinum, and now we are working on the new Diamond status; I think SOA will be one of our 5 'super specializations'. This is a global Fujitsu initiative and I work closely with my colleague in Germany, Jessika Weiss.

    It was nice to be able to have a press release about this and a comment from Judson Althoff, head of Oracle Alliances. For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required)

    Blog Twitter LinkedIn Mix Forum Wiki Website

    Technorati Tags: SOA,SOA Community,OPN,Oracle,Fujitsu,Debra Lilley,Jürgen Kress,Specialization,SOA Specialization

    Read the article

  • Dark Sun Dispatch 001

    - by Chris Williams
    If you aren't into tabletop (aka pen & paper) RPGs, you might as well click to the next post now... Still here? Awesome. I've recently started running a new D&D 4.0 Dark Sun campaign. If you don't know anything about Dark Sun, here's a quick intro:

    The campaign takes place on the world of Athas, formerly a lush green world that is now a desert wasteland. Forests are rare in the extreme, as are water and metal. Coins are made of ceramic and weapons are often made of hardened wood, bone or obsidian. The green age of Athas was centuries ago and the current state was brought about through the reckless use of sorcerous magic. (In this world, you can augment spells by drawing on the life force of the world & people around you. This is called defiling. Preserving magic draws upon the caster's life force and does not damage the surrounding world, but it isn't as powerful.)

    Humans are pretty much unchanged, but the traditional fantasy races have changed quite a bit. Elves don't live in the forest; they are shifty and untrustworthy desert traders known for their ability to run long distances through the wastes. Halflings are not short, fat, pleasant little riverside people. Instead they are bloodthirsty feral cannibals that roam the few remaining forests and ride reptilian beasts akin to raptors. Gnomes are extinct, as are orcs. Dwarves are mostly farmers and gladiators, and live out in the sun instead of staying under the mountains. Goliaths are half-giants, not known for their intellect. Muls are a dwarf & human crossbreed that displays the best traits of both races (human height and dwarven stoutness). Thri-Kreen are sentient mantis people that are extremely fast.

    Most of the same character classes are available, with a few new twists. There are no divine characters (such as Priests, Paladins, etc.) because the gods are gone. Nobody alive today can remember a time when they were still around. Instead, some folks worship the elemental forces (although they don't give out spells). The cities are all ruled by Sorcerer King tyrants (except one city: Tyr) who are hundreds of years old and still practice defiling magic whenever they please. Serving the Sorcerer Kings are the Templars, who are also defilers and psionicists. Crossing them is as bad, in many cases, as crossing the Kings themselves. Between the cities you have small towns and trading outposts, and mostly barren desert with sometimes 4-5 days on foot between towns and the nearest oasis. Being caught out in the desert without adequate supplies and protection from the elements is pretty much a death sentence for even the toughest heroes. When you add in the natural (and unnatural) predators that roam the wastes, often in packs, most people don't last long alone.

    In this campaign, the adventure begins in the (small) trading fortress of Altaruk, a couple weeks' walking distance from the newly freed city of Tyr. A caravan carrying trade goods from Altaruk has not made it to Tyr, and the local merchant house has dispatched the heroes to find out what happened and to retrieve the goods (and drivers) if possible. The unlikely heroes consist of a human shaman, a thri-kreen monk, a human wizard, a kenku assassin and a (void aspect) genasi swordmage. Gathering up supplies and a little liquid courage, they set out into the desert and manage to find the northbound tracks of the wagon. Shortly after finding the tracks, they are ambushed by a pack of silt-runners (small lizard people with very large teeth and poisoned pointy spears). The party makes short work of the creatures, taking a few minor wounds in the process. Proceeding onward without resting, they find the remains of the wagon and manage to sneak up on a pack of kruthiks picking through the rubble and spilled goods. Unfortunately, they fail to take advantage of the opportunity and have a hard fight ahead of them. The party defeats the kruthiks, but takes heavy damage (and almost loses a couple of their own) in the process. Once the kruthiks are dispatched, they follow a set of tracks further north to a ruined tower...

    Read the article

  • Algorithm for tracking progress of controller method running in background

    - by SilentAssassin
    I am using the Codeigniter framework for PHP on the Windows platform. My problem is that I am trying to track the progress of a controller method running in the background. The controller extracts data from the database (MySQL), does some processing, and then stores the results back in the database. This entire process can be considered a single task. A new task can be assigned while another task is running; the newly assigned task will be added to a queue. So if I can track the progress of the controller, I can show a status for each of these tasks: "Pending" for tasks in the queue, "In Progress" for tasks that are running, and "Done" for tasks that are completed.

    Main issue: First I need an algorithm to track how much of the controller method's execution has completed. For instance, this PHP script tracks the progress of an array being counted. There the current state and the state after total execution are both known, so it is possible to track progress. But I am not able to devise anything analogous in my case. Maybe what I am trying to achieve is not possible programmatically. If it's not possible, then suggest a workaround or a completely new approach. If some details are missing, you can ask for them. Sorry for my ignorance; this is my first post here. I welcome you to point out my mistakes.

    EDIT:

    Database outline: The URL(s) and keyword(s) are first entered by the user and stored in database tables called link_master and keyword_master respectively. Then keywords are extracted from all the links present in this table, compared with the keywords entered by the user, and their frequency is calculated, which is the final result. The results are stored in another table called link_result. Sub-links are then extracted from the domain links and stored in a table called sub_link_master. Again, keywords are extracted from these sub-links and the corresponding results are stored in a table called sub_link_result. The number of records cannot be defined beforehand, as the number of links on any web page can differ. Only the cardinality of the link_result table can be known in advance, which equals the number of keyword(s) multiplied by the number of URL(s). I insert multiple records at a time using this resource.

    Controller outline: The controller extracts keywords from a web page and also extracts keywords from all the links present on that page. There is a method called crawlLink. I used Rolling Curl to extract keywords and web page content; it has a callback function which I used for extracting keywords, generating results, and extracting valid sub-links. There is an insertResult method which stores results for links and sub-links in the respective tables. Yes, the processing depends on the number of records; the more records there are, the more time it takes to execute. Consider this scenario:

    Number of Domain Links = 1
    Number of Keywords = 3
    Number of Domain Link Results generated = 3 (3 x 1 as described above)
    Number of Sub Links generated = 41
    Number of Sub Link Results = 117 (41 x 3 = 123, but some links are not valid or searchable)
    Approximate time taken for the above process to complete = 55 seconds

    The above result is for a single link. I want to track the progress of the above results getting stored in the database. When all results are stored, the task is complete; while results are being stored, the task is In Progress. I am not clear on how I can track this progress.

    Read the article

  • WebCenter Customer Spotlight: Textron Inc.

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary

    Textron Inc. is one of the world's best-known multi-industry companies and is a pioneer of the diversified business model. Founded in 1923, it has grown into a network of businesses—including Bell Helicopter, E-Z-GO, Cessna, and Jacobsen—with facilities and a presence in 25 countries, serving a diverse and global customer base. Textron is ranked 236th on the Fortune 500 list of the largest US companies.

    Textron needed a Web experience management solution to centralize control, minimize costs, and enable more efficient operations. Specifically, the company wanted to take IT out of the picture as much as possible, enabling sales and marketing leads for subsidiaries to make Website updates as they deem appropriate for their business.

    Textron worked with Oracle partner Element Solutions to consolidate its Website management systems onto Oracle WebCenter Sites. The implementation enables Textron's subsidiaries to adjust more quickly to customer demands, reduces Website management costs and the time needed to update content, and allows Textron to integrate its Website updates more closely with social media and mobile platforms.

    Company Overview

    Textron Inc. is one of the world's best-known multi-industry companies and is a pioneer of the diversified business model. Founded in 1923, it has grown into a network of businesses—including Bell Helicopter, E-Z-GO, Cessna, and Jacobsen—with facilities and a presence in 25 countries, serving a diverse and global customer base. Textron is ranked 236th on the Fortune 500 list of the largest US companies.

    Business Challenges

    With numerous subsidiaries and more than 50 public Websites, Textron needed a Web experience management solution to centralize control, minimize costs, and enable more efficient operations. Specifically, the company wanted to take IT out of the picture as much as possible, enabling sales and marketing leads for subsidiaries to make Website updates as they deem appropriate for their business.

    Solution Deployed

    Textron worked with Oracle partner Element Solutions to consolidate its Website management systems onto Oracle WebCenter Sites. Specifically, Textron:

    - Used Oracle WebCenter Sites to integrate Web experience management capabilities for all Textron brands, including Bell Helicopter, E-Z-GO, Cessna, and Jacobsen
    - Developed Website templates to enable marketing and communications professionals to easily make updates to their Websites, without having to work with IT
    - Reduced Website management costs, as it costs more for IT to coordinate Website updates as opposed to marketing and communications
    - Enabled IT to concentrate on other activities to enhance overall operations for Textron, such as project workflows
    - Acquired a platform that enables marketing teams to integrate their Websites with social media and mobile platforms, allowing subsidiaries to make updates and contact customers anytime and everywhere—including through tablets and smartphones
    - Reduced the time it takes to update content on a Website, including press releases, by enabling communications professionals to make updates directly
    - Developed more appealing visual designs for Websites to help enhance customer purchases

    Business Results

    The implementation enabled Textron's subsidiaries to adjust more quickly to customer demands and Textron's IT staff to concentrate on other processes, such as writing code and developing new workflows, enabling them to enhance company processes. In addition, Textron can use Oracle WebCenter Sites to integrate its Website updates more closely with social media and mobile platforms, enabling marketing and communications teams to make updates anytime and everywhere. The initiative has enabled Textron to save money by freeing IT up to work on more important tasks, instituting new e-commerce and mobile initiatives to better engage customers, and ensuring efficient Website management processes to quickly adjust to customer demands.

    "We considered a number of products, but chose Oracle WebCenter Sites because it provides the best user interface. We reviewed customer references and analyst reports, and Oracle WebCenter Sites was consistently at the top of the list," said Brad Hof, Manager, Advanced Business Solutions and Web Communications, Textron Inc.

    Additional Information

    Textron Inc. Customer Snapshot
    Oracle WebCenter Sites

    Read the article

  • How views are changing in future versions of SQL

    - by Rob Farley
    April is here, and this weekend, SQL v11.0 (previously known as Denali, now known as SQL Server 2012) reaches general availability. And so I thought I'd share some news about what's coming next. I didn't hear this at the MVP Summit earlier this year (where there was lots of NDA information given, but I didn't go), so I think I'm free to share it.

    I've written before about CTEs being query-scoped views. Well, the actual story goes a bit further, and will continue to develop in future versions. A CTE is like a "temporary temporary view", scoped to a single query. Due to globally-scoped temporary objects using a two-hash naming style, and session-scoped (or 'local') temporary objects a one-hash naming style, this query-scoped temporary object uses a cunning zero-hash naming style. We see this implied in Books Online in the CREATE TABLE page, but as we know, temporary views are not yet supported in SQL Server. However, in a breakaway from ANSI SQL, Microsoft is moving towards consistency with their naming.

    We know that a CTE is a "common table expression" - this is proving to be more strategic than you may have appreciated. Within the Microsoft product group, the term "Table Expression" is far more widely used than just CTEs. Anything that can be used in a FROM clause is referred to as a Table Expression, so long as it doesn't actually store data (which would make it a Table, rather than a Table Expression). You can see this is not just restricted to the product group by doing an internet search for how the term is used without 'common'.

    In the past, Books Online has referred to a view as a "virtual table" (but notice that there is no SQL 2012 version of that page). However, it was generally decided that "virtual table" was a poor name because it wasn't completely accurate, and it's typically accepted that virtualisation and SQL are frowned upon. That page I linked to says "or stored query", which is slightly better, but when the SQL 2012 version of that page is actually published, the line will be changed to read: "A view is a stored table expression (STE)".

    This change will be the first of many. During the SQL 2012 R2 release, the keyword VIEW will become deprecated (this will be SQL v11 SP1.5). Three versions later, in SQL 14.5, you will need to be in compatibility mode 140 to allow "CREATE VIEW" to work. Also consistent with Microsoft's deprecation policy, the execution of any query that refers to an object created as a view (rather than with the new "CREATE STE") will cause a Deprecation Event to fire. This will all be in preparation for the introduction of Single-Column Table Expressions (to be introduced in SQL 17.3 SP6), which will finally shut up those people waiting for a decent implementation of Inline Scalar Functions. And of course, CTEs are "Common" because the Table Expression definition needs to be repeated over and over throughout a stored procedure.

    ...or so I think I heard at some point. Oh, and congratulations to all the new MVPs on this April 1st.

    @rob_farley

    Read the article

  • Oracle Solaris Crash Analysis Tool 5.3 now available

    - by user12609056
    Oracle Solaris Crash Analysis Tool 5.3

    The Oracle Solaris Crash Analysis Tool team is happy to announce the availability of release 5.3. This release addresses bugs discovered since the release of 5.2, plus enhancements to support Oracle Solaris 11 and updates to Oracle Solaris versions 7 through 10. The packages are available on My Oracle Support - simply search for Patch 13365310 to find the downloadable packages.

    Release Notes

    General

    blast support
    The blast GUI has been removed and is no longer supported.

    Oracle Solaris 2.6 support
    As of Oracle Solaris Crash Analysis Tool 5.3, support for Oracle Solaris 2.6 has been dropped. If you have systems running Solaris 2.6, you will need to use Oracle Solaris Crash Analysis Tool 5.2 or earlier to read its crash dumps.

    New Commands

    sanity command
    Though one can re-run the sanity checks that are run at tool start-up using the coreinfo command, many users were unaware that they were. Though these checks can still be run using that command, a new command, namely sanity, can now be used to re-run the checks at any time.

    Interface Changes

    scat_explore -r and -t options
    The -r option has been added to scat_explore so that a base directory can be specified, and the -t option was added to enable color tagging of the output. The scat_explore sub-command now accepts new options. Usage is:

    scat --scat_explore [-atv] [-r base_dir] [-d dest] [unix.N] [vmcore.]N

    Where:

    -v           Verbose mode: the command will print messages highlighting what it's doing.
    -a           Auto mode: the command does not prompt for input from the user as it runs.
    -d dest      Instructs scat_explore to save its output in the directory dest instead of the present working directory.
    -r base_dir  Instructs scat_explore to save its output under the directory base_dir instead of the present working directory. If a destination is not specified using the -d option, scat_explore names its output file "scat_explore_system_name_hostid_lbolt_value_corefile_name."
    -t           Enable color tags. When enabled, scat_explore tags important text with colors that match the level of importance. These colors correspond to the colors normally printed when running Oracle Solaris Crash Analysis Tool in interactive mode.

                 Tag name   Definition
                 FATAL      An extremely important message which should be investigated.
                 WARNING    A warning that may or may not have anything to do with the crash.
                 ERROR      An error, usually printed with a suggested command.
                 ALERT      Used to indicate something the tool discovered.
                 INFO       A purely informational message.
                 INFO2      A follow-up to an INFO-tagged message.
                 REDZONE    Usually used when printing memory info showing something is in the kernel's REDZONE.

    N            The number of the crash dump. Specifying unix.N vmcore.N is optional and not required.

    Example:

    $ scat --scat_explore -a -v -r /tmp vmcore.0
    #Output directory: /tmp/scat_explore_oomph_833a2959_0x28800_vmcore.0
    #Tar filename: scat_explore_oomph_833a2959_0x28800_vmcore.0.tar
    #Extracting crash data...
    #Gathering standard crash data collections...
    #Panic string indicates a possible hang...
    #Gathering Hang Related data...
    #Creating tar file...
    #Compressing tar file...
    #Successful extraction
    SCAT_EXPLORE_DATA_DIR=/tmp/scat_explore_oomph_833a2959_0x28800_vmcore.0

    Sending scat_explore results
    The .tar.gz file that results from a scat_explore run may be sent using Oracle Secure File Transfer. The Oracle Secure File Transfer User Guide describes how to use it to send a file. The send_scat_explore script now has a -t option for specifying a "to" address for sending the results. This option is mandatory.

    Known Issues
    There are a couple of known issues that we are addressing in release 5.4, which you should expect to see soon:
    - Display of timestamps in threads and clock information is incorrect in some cases.
    - There are alignment issues with some of the tables produced by the tool.

    Read the article

  • ArchBeat Link-o-Rama Top 10 for November 2012

    - by Bob Rhubart
    Every day ArchBeat searches the web for content created by and for community members, and then shares that content via social media. Here's the list of the Top 10 most popular items posted on the OTN ArchBeat Facebook Page for November 2012.

    One-Stop Shop for Oracle Webcasts
    Webcasts can be a great way to get information about Oracle products without having to go cross-eyed reading yet another document off your computer screen. Oracle's new Webcast Center offers selectable filtering to make it easy to get to the information you want. Yes, you have to register to gain access, but that process is quick, and with over 200 webcasts to choose from you know you'll find useful content.

    OAM/OVD JVM Tuning
    Vinay from the Oracle Fusion Middleware Architecture Group (otherwise known as the A-Team) shares a process for analyzing and improving performance in Oracle Virtual Directory and Oracle Access Manager.

    White Paper: Oracle Exalogic Elastic Cloud: Advanced I/O Virtualization Architecture for Consolidating High-Performance Workloads
    This new white paper by Adam Hawley (with contributions from Yoav Eilat) describes in great detail the incorporation into Oracle Exalogic of virtualized InfiniBand I/O interconnects using Single Root I/O Virtualization (SR-IOV) technology.

    Architected Systems: "If you don't develop an architecture, you will get one anyway..."
    "Can you build a system without taking care of architecture?" asks Manuel Ricca. "You certainly can. But inevitably the system will be unbalanced, neglecting the interests of key stakeholders, and problems will soon emerge."

    Backup and Recovery of an Exalogic vServer via rsync
    "On Exalogic a vServer will consist of a number of resources from the underlying machine," says the man known only as Donald. "These resources include compute power, networking and storage. In order to recover a vServer from a failure in the underlying rack, all of these components have to be thought about. This article only discusses the backup and recovery strategies that apply to the storage system of a vServer."

    This Week on the OTN Architect Community Home Page
    Make time to check out this week's features on the OTN Solution Architect Homepage, including the SOA Practitioner Guide: Identifying and Discovering Services; a technical article by Yuli Vasiliev on Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster; and the podcast: Are You Future Proof?

    Clustering ODI11g for High-Availability Part 1: Introduction and Architecture | Richard Yeardley
    "JEE agents can be deployed alongside, or instead of, standalone agents," says Rittman Mead's Richard Yeardley. "But there is one key advantage in using JEE agents and WebLogic - when you deploy JEE agents as part of a WebLogic cluster they can be configured together to form a high availability cluster." Learn more in Yeardley's extensive post.

    OIM 11g: Multi-thread approach for writing custom scheduled job | Saravanan V S
    Saravanan shares insight and expertise relevant to "designing and developing an OIM schedule job that uses multi threaded approach for updating data in OIM using APIs."

    How to Create Virtual Directory in Weblogic Server | Zeeshan Baig
    Oracle ACE Zeeshan Baig shows you how in six easy steps.

    SOA Galore: New Books for Technical Eyes Only
    Shake up your technical skills with this trio of new technical books from community members covering SOA and BPM.

    Thought for the Day

    "Humans are the best value in computers -- where else can you get a non-linear computer weighing only about 160lbs, having a billion binary decision elements, that can be mass-produced by unskilled labour?" — Anonymous

    Source: SoftwareQuotes.com

    Read the article

  • Conflict Minerals - Design to Compliance

    - by C. Chadwick
    Dr. Christina Schröder - Principal PLM Consultant, Enterprise PLM Solutions EMEA

    What does the Conflict Minerals regulation mean?

    Conflict Minerals has recently become a new buzzword in the manufacturing industry, particularly in electronics and medical devices. Known as "Dodd-Frank Section 1502", this regulation requires SEC-listed companies to declare the origin of certain minerals by 2014. The intention is to reduce the use of tantalum, tungsten, tin, and gold which originate from mines in the Democratic Republic of Congo (DRC) and adjoining countries that are controlled by violent armed militia abusing human rights. Manufacturers now request information from their suppliers to see if their raw materials are sourced from this region and which smelters are used to extract the metals from the minerals. A standardized questionnaire has been developed for this purpose (download and further information). Soon, even companies which are not directly affected by the Conflict Minerals legislation will have to collect and maintain this information, since their customers will request the data from their suppliers. Furthermore, it is expected that public opinion and consumer interests will force manufacturers to avoid the use of metals with questionable origin.

    Impact on existing products

    Several departments are involved in the process of collecting data and providing Conflict Minerals compliance information. For already marketed products, purchasing typically requests Conflict Minerals declarations from the suppliers. In order to address requests from customers, technical operations or product management are usually responsible for keeping track of all parts, raw materials and their suppliers so that the required information can be provided. For complex BOMs, it is very tedious to maintain complete, accurate, up-to-date, and traceable data. Any product change or new supplier can, in addition to all other implications, have an effect on the Conflict Minerals compliance status.

    Influence on product development

    It makes sense to consider compliance early in the planning and design of new products. Companies should evaluate which metals are needed or contained in supplier parts and whether these could originate from problematic sources. The answer influences the cost and risk analysis during development. If it is known early on that a part could be non-compliant with respect to Conflict Minerals, alternatives can be evaluated and costly changes at a later stage can be avoided.

    Integrated compliance management

    Ideally, compliance data for Conflict Minerals, but also for other regulations like REACH and RoHS, should be managed in an integrated supply chain system. The compliance status is directly visible across the entire BOM at any part level and for the finished product. If data is missing, a request to the supplier can be triggered right away without having to switch to another system. The entire process, from identification of the relevant parts, requesting information, handling responses, and data entry, to compliance calculation is fully covered end-to-end while being transparent for all stakeholders.

    Agile PLM Product Governance and Compliance (PG&C)

    The PG&C module extends Agile PLM with exactly this integrated functionality. As with the entire Agile product suite, PG&C can be configured according to customer requirements: data fields, attributes, workflows, routing, notifications, permissions, etc. can be quickly and easily tailored to a customer's needs. Optionally, external databases can be interfaced to query commercially available sources of Conflict Minerals declarations, which obviates the need for a separate supplier request in many cases. Suppliers can access the system directly for data entry through a special portal. The responses to the standard EICC-GeSI questionnaire can be imported by the supplier or internally. Manual data entry is also supported. A set of compliance-specific dashboards and reports complements the functionality.

    Conclusion

    The increasing number of product compliance regulations, of which Conflict Minerals is just one example, requires companies to implement efficient data and process management in this area. Consumer awareness in this matter is increasing as well, so an integrated system from development to production also provides a competitive advantage. Follow this link to learn more about Agile's PG&C solution.

    Read the article

  • DBA Best Practices - A Blog Series: Episode 2 - Password Lists

    - by Argenis
Digital World, Digital Locks
One of the biggest digital assets that any company has is its secrets. These include passwords, key rings, certificates, and any other digital asset used to protect another asset from tampering or unauthorized access. As a DBA, you are very likely to manage some of these assets for your company - and your employer trusts you with keeping them safe. Probably the most important of these assets are passwords. As you well know, they can be used anywhere: for service accounts, credentials, proxies, linked servers, DTS/SSIS packages, symmetric keys, private keys, etc., etc. Have you given some thought to what you're doing to keep these passwords safe? Are you backing them up somewhere? Who else besides you can access them?

Good-Ol' Post-It Notes Under Your Keyboard
If you have a password-protected Excel sheet for your passwords, I have bad news for you: Excel's level of encryption is good for your grandma's budget spreadsheet, not for a list of enterprise passwords. I will try to summarize the main point of this best practice in one sentence: You should keep your passwords in an encrypted, access- and version-controlled, backed-up, well-known shared location that every DBA on your team is aware of, and maintain copies of this password "database" on your DBAs' workstations. Now I have to break down that statement for you:
- Encrypted: what's the point of saving your passwords in a file that any Windows admin with enough privileges can read?
- Access controlled: This one is pretty much self-explanatory.
- Version controlled: Passwords change (and I'm really hoping you do change them), and version control would allow you to track what a previous password was if the utility you've chosen doesn't handle that for you.
- Backed up: You want a safe copy of the password list to be kept offline, preferably in long-term storage, with relative ease of restoring.
- Well-known shared location: This is critical for teams: what good is a password list if only one person on the team knows where it is?
I have seen multiple examples of this that work well. They all start with an encrypted database. Certainly you could leverage SQL Server's native encryption solutions, like cell encryption, for this. I have found such implementations to be impractical, for the most part.

Enter The World Of Utilities
There are a myriad of open source/free software solutions to help you here. One of my favorites is KeePass, which creates encrypted files that can be saved to a network share, SharePoint, etc. KeePass has UIs for most operating systems, including Windows, MacOS, iOS, Android and Windows Phone. Other solutions I've used before that are worth mentioning include PasswordSafe and 1Password, the latter being a paid solution - but wildly popular on mobile devices. There are, of course, even more "enterprise-level" solutions available from 3rd party vendors. The truth is that most of the customers I work with don't need that level of protection for their digital assets, and something like a KeePass database on SharePoint suits them very well. What are you doing to safeguard your passwords? Leave a comment below, and join the discussion! Cheers, -Argenis
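For illustration, here is a minimal sketch of the cell-level encryption approach mentioned above (table and passphrase names are made up for the example; this is the route the post calls impractical, not a recommendation):

    -- encrypt a secret on the way in
    CREATE TABLE dbo.PasswordVault (
        EntryName sysname         NOT NULL,
        Secret    varbinary(8000) NOT NULL
    );

    INSERT INTO dbo.PasswordVault (EntryName, Secret)
    VALUES (N'linked_server_foo',
            ENCRYPTBYPASSPHRASE(N'a strong master passphrase', N'P@ssw0rd!'));

    -- decrypt on the way out; a wrong passphrase simply yields NULL
    SELECT EntryName,
           CAST(DECRYPTBYPASSPHRASE(N'a strong master passphrase', Secret)
                AS nvarchar(4000)) AS ClearText
    FROM dbo.PasswordVault;

Note how this already hints at the practical problems: the master passphrase itself has to live somewhere, and there is no version history - which is exactly why a purpose-built utility like KeePass tends to win out.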

    Read the article

  • Cream of the Crop

    - by KemButller
JD Edwards has been working hard to ensure that you shouldn't have to work so hard! Yet there are still JD Edwards customers that may not be up to speed on all the new and/or improved tools and utilities we have delivered, all designed to make your life easier. So today, I want to share what I consider to be the cream of the crop: those items that every customer should know about and leverage to make ERP life just a little bit (or A LOT) easier! These are my top picks, the cream of a very good crop! Explore and enjoy, and gain some of your time back to do with as you please.

www.runjde.com
It's where to go when you need to know! The Resource Kits available on www.runjde.com are comprehensive guides organized by user type. The guides provide brief descriptions of the wide array of resources available to the JD Edwards ecosystem, and links to each of those resources.

My Oracle Support (MOS) Information Centers
This link will take you to an index that is designed to provide you with simple and quick navigation to the available EnterpriseOne Information Centers. This index provides links to:
- EnterpriseOne Application-specific Information Centers
- EnterpriseOne Tools and Technology Information Centers
- EnterpriseOne Performance Information Center
- EnterpriseOne 9.1 and 9.0 Information Centers
Information Centers give Oracle the ability to aggregate content for a given focus area and present this content in categories for easy browsing by our customers. Information Centers offer a variety of focused, dynamic content organized around one or more of the following tasks: Overview; Use; Troubleshooting; Patching and Maintenance; Install and Configure; Upgrade; Optimize Performance; Security; Certify.

JD Edwards Newsletters
Be in the know by reading the Global Customer Support Product Newsletters. They are PACKED with news and information covering a wide range of topics. A must read if you want to know what's happening in the JD Edwards universe! Read the latest EnterpriseOne newsletter. Read the latest World newsletter. Learn how to receive notification when a new newsletter edition is published.

Oracle Learning Library (OLL)
The Oracle Learning Library is the place to go for easy access to JD Edwards Application and Tools training. For a comprehensive view of the training available for a specific product/functional area, explore the Knowledge Paths. For Net Change (new feature) training, explore the TOI sessions (TOI stands for Transfer Of Information). Tip: Be sure to experiment with the search filters!

www.upgradejde.com
The site designed to help customers and partners with the process of upgrading JD Edwards. The site is a wealth of information, tools and resources designed to assist in the evaluation, planning and execution steps required when upgrading. Of note is the wildly successful upgrade strategy known as "The Art of the Possible", wherein JD Edwards and many of our partners hold free workshops to teach customers how to conduct upgrades in 100 days or less. Equally important is the fact that on www.upgradejde.com, customers can gain visibility into planned enhancements using the Product and Technology Feature Catalogs. The catalogs are great for creating customer-specific reports about the net change between older releases and current or planned releases. Examples of other key resources on www.upgradejde.com are the product database changes between releases, extensibility guides (formerly known as programmer's guides), whitepapers, ROI calculators and much more!

    Read the article

  • Is Visual Source Safe (The latest Version) really that bad? Why? What's the Best Alternative? Why? [closed]

    - by hanzolo
Over the years I've constantly heard horror stories, had people say "Real Programmers Don't Use VSS", and so on. BUT, then in the workplace I've worked at two companies: one, a very well known public-facing high-traffic website, and another a high-end Financial Services "web-based" hosted solution catering to some very large, very well known companies, which is where I currently reside, and everything's working just fine (KNOCK KNOCK!!). I'm constantly interfacing with EXTREMELY old technology at some of these financial institutions... OLD LIKE YOU WOULDN'T BELIEVE... which leads me to the conclusion that if it works, "LEAVE IT", and that maybe there's some value in old technology? At least enough value to overrule a rewrite!? Right??

Is there something fundamentally flawed with the underlying technology that VSS uses? I have a feeling that if I said "someone said VSS sucks" they would beg to differ, most likely give me a look like I don't know anything, and I'd never gain back their respect and my credibility (well, that'll be hard to blow... lol). BUT, give me an argument that I can take to someone who's been coding for 30 years, who builds platforms that leverage current technology (.NET 3.5 / SQL 2008 R2), writes their own ORM with scaffolding, and is able to provide a quality platform that supports thousands of concurrent users on a multi-tenant hosted solution, who sees no benefit in integrated source control, and yet uses the infamous Visual SourceSafe.

I have extensive experience with TFS up to 2010, and honestly I think it's great when a team (beyond developers) can embrace it. I've worked side by side with someone who's a die-hard SVN user and, from a purist standpoint, I see the beauty in it (I need a bit more out of my source control, but it surely suffices). So, why are such smarties not running away from Visual SourceSafe? Surely if it was so bad, it would've been realized by now, and I would not be sitting here with this simple old check-in, check-out, version-resistant, label-intensive system. But here I am... I would love to drop an argument that would be the end-all argument, but if it's a matter of opinion and personal experience, there seems to be too much leeway for keeping VSS.

UPDATE: I guess the best case is to have the VSS supporters check other people's experiences and draw from that until we (please no) experience the breaking factor ourselves. Until then, I won't be engaging in a discussion to migrate off of VSS.

UPDATE 11-2012: So I was able to convince everyone at my workplace that since MS is sunsetting Visual SourceSafe it might be time to migrate over to TFS. I was able to convince them and have recently upgraded our team to Visual Studio 2012 and TFS 2012. The migration was fairly painless: I had to run analyze.exe, which found a bunch of errors (not sure they'll ever affect the project), and then manually run VSSConverter.exe. Again, painless, except it took 16 hours to migrate 5 years' worth of everything... and now we're on TFS. Much more integrated... much cooler... So, all in all, VSS served its purpose for years without hiccup. There were no horror stories, and Visual SourceSafe as source control worked just fine. So to all the naysayers (me included): there's nothing wrong with using VSS. I wouldn't start a new project with it, and I would definitely consider migrating to TFS (it's really not super difficult, and a new "wizard"-type converter is due out any day now, so migrating should be painless). But from my experience, it worked just fine and got the job done.
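For anyone following the same path, the two-step flow described above looks roughly like this from the command line (the settings file name is an assumption; its XML format, which maps VSS folders to TFS team projects, is described in the VSSConverter documentation on MSDN):

    REM step 1: check the VSS database for issues the migration would hit
    VSSConverter.exe analyze settings.xml

    REM step 2: perform the actual migration into TFS
    VSSConverter.exe migrate settings.xml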

    Read the article

  • SEO/Google: How should I handle multiple countries and domains?

    - by Valorized
Hello. I'm the webmaster of an online shop based in Austria (Europe). Therefore we registered "example.at". We also own other domain names like "example-shop.com" and "example.info". Currently all those domains are redirected (301) to the .at one. Still available: "example.net" and "example.org" (and .ws/.cc); unfortunately not available: .de/.eu. The .com is currently owned by one of our partners; the contract ends in 2012, but until then we have no chance to get that one. Recently I read more about geo-targeting and I noticed ONE big deal: the TLD ".at" is hardly recognised in Germany (google.de), whereas it is excellently listed in Austria (google.at). As a consequence of the .at, I cannot set the target location manually (or to unlisted). More info: https://www.google.com/support/webmasters/bin/answer.py?answer=62399&hl=en This is a big problem. I looked at Google Analytics and - although Germany is 10x as big as Austria - there are more visits from Austria. So, how should I configure the domains in order to get the best results in both Germany and Austria? I thought of some solutions:
1. Stop redirecting the .info. There would then be a duplicate of the .at site, but in Webmaster Tools I could set the target location of the .info to Germany. As the .at still targets Austria, both would be targeted - however, I don't know if Google penalizes one of them because of the duplicate content.
2. Same as 1. but with .net or .org (I think .info is not a "nice" domain, and moreover I think search engines prefer .com, .net or .org to .info).
3. Same as 1. (or 2.) but with a rel="canonical" on the new one (pointing to the .at). Con: I don't think this will improve the situation, because it still tells Google that the .at one is more important, like: "if .info points to .at, the target may still be Austria".
4. rel="canonical" on the .at pointing to the new domain (.info, .net or .org). However, I fear that this will have a negative impact on the listing on google.at, because: "Hey, the well-known .at is not important anymore, so let's focus on the .info, which is not well-known." Therefore: bad position in search results.
5. Redirect .at to the new domain (.info, .net or .org) with a 301 redirect. Con: might be worse than 4; we might lose PageRank (or "the value of the page", because Google says that PageRank is not important anymore). Moreover, this might be even more confusing for the customers. In 3. or 4. customers don't get redirected; they do not see the canonical meta tag.
So, dear experts, please tell me what the best option would be! Thank you very much for your advice in advance, and please excuse the long question. I really appreciate this network! Please note: It's exactly the same content AND language. In Austria we speak German.
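For reference, option 3 above amounts to a single element in the <head> of every page on the secondary domain (the domain names here are the placeholders from the question):

    <!-- on each page of www.example.info, pointing at the same page on the primary domain -->
    <link rel="canonical" href="http://www.example.at/some-page" />

One detail worth knowing: the canonical should point at the matching page, not just the homepage, or Google may ignore it.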

    Read the article

  • Schliemann's method of programming language learning

    - by DVK
Background: 19th-century German archeologist Heinrich Schliemann was of course famous for his successful quest to find and excavate the city of Troy (an actual archeological site for the Troy of Homer's Iliad). However, he is just as famous for being an astonishing learner of languages: within the space of two years, he taught himself fluent Dutch, English, French, Spanish, Italian and Portuguese, and later went on to learn seven more, including both modern and ancient Greek. One of the methods he famously used was comparison of a known text: take a book in a language one is fluent in, take a good translation of that book in the language you wish to learn, and go over them in parallel. (Various sources cited the book used by Schliemann to be the Bible, or, as the link above states, a novel.) Now, for the actual question. Has anyone used (or heard of) an equivalent of Schliemann's method for learning a new programming language? That is, instead of basing the learning on references and tutorials, take a somewhat comprehensive set of programs known to have high-quality code in both languages implementing similar/identical algorithms, and learn by comparing them. I'm curious about personal experiences of applying such an approach, references to something published, or the existence of codebases which could be used for such an approach. What got me thinking about the idea was Project Euler and some code snippets I saw on SO, in C++, Perl and Lisp.
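As a toy illustration of what such "parallel texts" might look like (the problem choice is mine: Project Euler #1, the sum of all multiples of 3 or 5 below 1000, in a language you know and one you are learning):

    // the "known text", in C#
    int sum = 0;
    for (int i = 1; i < 1000; i++)
        if (i % 3 == 0 || i % 5 == 0)
            sum += i;
    Console.WriteLine(sum);

    # the "translation", in Python
    print(sum(i for i in range(1000) if i % 3 == 0 or i % 5 == 0))

Reading the two side by side teaches idiom as well as syntax - the Python version leans on a generator expression where the C# version loops - which is exactly the kind of comparison the method relies on.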

    Read the article

  • Automatically calling OnDetaching() for Silverlight Behaviors

    - by Dan Auclair
I am using several Blend behaviors and triggers on a Silverlight control. I am wondering if there is any mechanism for automatically detaching, or ensuring that OnDetaching() is called for a behavior or trigger, when the control is no longer being used (i.e. removed from the visual tree). My problem is that there is a managed memory leak with the control because of one of the behaviors. The behavior subscribes to an event on some long-lived object in the OnAttached() override and should be unsubscribing from that event in the OnDetaching() override so that it can become a candidate for garbage collection. However, OnDetaching() never seems to get called when I remove the control from the visual tree... the only way I can get it to happen is by explicitly detaching the behavior BEFORE removing the control, and then it is properly garbage collected. Right now my only solution is to create a public method in the code-behind for the control that goes through and detaches any known behaviors that would cause garbage collection problems. It would be up to the client code to know to call this before removing the control from the panel. I don't really like this approach, so I am looking for some automatic way of doing this that I am overlooking, or a better suggestion.

    public void DetachBehaviors()
    {
        foreach (var behavior in Interaction.GetBehaviors(this.LayoutRoot))
        {
            behavior.Detach();
        }
        //...
        // continue detaching all known problematic behaviors....
    }
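One alternative that sidesteps the need for deterministic detaching altogether is the weak-event pattern that was widely used in Silverlight at the time: the long-lived object holds only a weak reference to the behavior, so a behavior that never sees OnDetaching() can still be collected. A minimal sketch (type and member names are illustrative, not from any particular library):

    // The long-lived publisher references this listener, not the behavior;
    // the behavior is only weakly referenced and therefore stays collectible.
    public class WeakEventListener<TInstance, TArgs> where TInstance : class
    {
        private readonly WeakReference _target;

        // invoked when the event fires and the instance is still alive
        public Action<TInstance, TArgs> OnEventAction { get; set; }
        // invoked when the instance has been collected, to unhook the handler
        public Action<WeakEventListener<TInstance, TArgs>> OnDetachAction { get; set; }

        public WeakEventListener(TInstance instance)
        {
            _target = new WeakReference(instance);
        }

        public void OnEvent(object source, TArgs args)
        {
            var target = (TInstance)_target.Target;
            if (target != null)
                OnEventAction(target, args);
            else
                OnDetachAction(this);
        }
    }

Inside OnAttached(), the behavior then subscribes via the listener instead of subscribing directly (the event and handler names here are assumptions):

    var listener = new WeakEventListener<MyBehavior, EventArgs>(this)
    {
        OnEventAction = (behavior, e) => behavior.HandleLongLivedEvent(e),
        OnDetachAction = l => longLivedObject.SomethingHappened -= l.OnEvent
    };
    longLivedObject.SomethingHappened += listener.OnEvent;

This does not make OnDetaching() fire, but it removes the leak that makes the missing call painful.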

    Read the article

  • Prevent full table scan for query with multiple where clauses

    - by Dave Jarvis
A while ago I posted a message about optimizing a query in MySQL. I have since ported the data and query to PostgreSQL, but now PostgreSQL has the same problem. The solution in MySQL was to force the join order using STRAIGHT_JOIN; PostgreSQL offers no such option. Here is the explain plan (not reproduced in this excerpt). Here is the query:

    SELECT avg(d.amount) AS amount, y.year
    FROM station s, station_district sd, year_ref y, month_ref m, daily d
    LEFT JOIN city c ON c.id = 10663
    WHERE
      -- Find all the stations within a specific unit radius ...
      6371.009 * SQRT(
        POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +
        (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *
         POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))
      ) <= 50 AND
      -- Ignore stations outside the given elevations
      s.elevation BETWEEN 0 AND 2000 AND
      sd.id = s.station_district_id AND
      -- Gather all known years for that station ...
      y.station_district_id = sd.id AND
      -- The data before 1900 is shaky; insufficient after 2009.
      y.year BETWEEN 1980 AND 2000 AND
      -- Filtered by all known months ...
      m.year_ref_id = y.id AND
      m.month = 12 AND
      -- Whittled down by category ...
      m.category_id = '001' AND
      -- Into the valid daily climate data.
      m.id = d.month_ref_id AND
      d.daily_flag_id <> 'M'
    GROUP BY y.year

It appears as though PostgreSQL is looking at the DAILY table first, which is simply not the right way to go about this query as there are nearly 300 million rows. How do I force PostgreSQL to start at the CITY table? Thank you!
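For what it's worth, PostgreSQL does have a blunt instrument for this: when join_collapse_limit is set to 1, the planner keeps explicit JOINs in exactly the order they are written. A sketch of the idea (the rewrite assumes the comma-joins are converted to explicit JOINs, with the conditions carried over from the query above):

    SET join_collapse_limit = 1;  -- planner now honors the written JOIN order

    SELECT avg(d.amount) AS amount, y.year
    FROM city c
    JOIN station s
      ON 6371.009 * SQRT(
           POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +
           (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *
            POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))
         ) <= 50
    JOIN station_district sd ON sd.id = s.station_district_id
    JOIN year_ref y ON y.station_district_id = sd.id
    JOIN month_ref m ON m.year_ref_id = y.id
    JOIN daily d ON d.month_ref_id = m.id
    WHERE c.id = 10663
      AND s.elevation BETWEEN 0 AND 2000
      AND y.year BETWEEN 1980 AND 2000
      AND m.month = 12
      AND m.category_id = '001'
      AND d.daily_flag_id <> 'M'
    GROUP BY y.year;

Whether starting at CITY is actually the fastest plan is a separate question; pinning the join order mainly helps confirm or refute the hypothesis about the DAILY scan.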

    Read the article

  • Why do properties require explicit typing during compilation?

    - by ctpenrose
Compilation using property syntax requires the type of the receiver to be known at compile time. I may not understand something, but this seems like a broken or incomplete compiler implementation considering that Objective-C is a dynamic language. The property "comment" is defined with:

    @property (nonatomic, retain) NSString *comment;

and synthesized with:

    @synthesize comment;

"document" is an instance of one of several classes which conform to:

    @protocol DocumentComment <NSObject>
    @property (nonatomic, retain) NSString *comment;
    @end

and is simply declared as:

    id document;

When using the following property syntax:

    stringObject = document.comment;

the following error is generated by gcc:

    error: request for member 'comment' in something not a structure or union

However, the following equivalent receiver-method syntax compiles without warning or error and works fine, as expected, at run-time:

    stringObject = [document comment];

I don't understand why properties require the type of the receiver to be known at compile time. Is there something I am missing? I simply use the latter syntax to avoid the error in situations where the receiving object has a dynamic type. Properties seem half-baked.
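A sketch of the usual workaround, using the declarations from the question: give the compiler a static type by typing the variable with the protocol, or by casting at the point of use.

    // declare with the protocol instead of a bare id ...
    id<DocumentComment> document;
    stringObject = document.comment;   // compiles: the protocol declares the property

    // ... or cast where the dot syntax is needed
    stringObject = ((id<DocumentComment>)document).comment;

Dot syntax is compile-time sugar that the compiler can only expand when it can see a property declaration on the receiver's static type, which is why a bare id works with [document comment] but not with document.comment.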

    Read the article

  • resolving overloads in boost.python

    - by swarfrat
I have a C++ class like this:

    class ConnectionBase {
    public:
        ConnectionBase();
        template <class T> void Publish(const T&);
    private:
        virtual void OnEvent(const Overload_a&) {}
        virtual void OnEvent(const Overload_b&) {}
    };

My templates and overloads are a known, fixed set of types at compile time. The application code derives from ConnectionBase and overrides OnEvent for the events it cares about. I can do this because the set of types is known. OnEvent is private because the user never calls it; the class creates a thread that calls it as a callback. The C++ code works. I have wrapped this in boost.python; I can import it and publish from Python. I want to create the equivalent of the following in Python:

    class ConnectionDerived {
    public:
        ConnectionDerived();
    private:
        virtual void OnEvent(const Overload_b&) {
            // application code
        }
    };

But... since Python isn't typed, and all the boost.python examples I've seen dealing with internals are on the C++ side, I'm a little puzzled as to how to do this. How do I override specific overloads?
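One approach, sketched under the assumption that distinct Python-side method names (here on_event_a / on_event_b, my invention) are acceptable: wrap the base class with boost::python::wrapper and have each C++ overload look up its own Python override.

    #include <boost/python.hpp>
    namespace bp = boost::python;

    // Wrapper: each C++ overload dispatches to a distinctly named Python method.
    // Private virtuals can be overridden in C++; they just cannot be called
    // through the base, which is fine here since the base bodies are empty.
    struct ConnectionBaseWrap : ConnectionBase, bp::wrapper<ConnectionBase>
    {
        void OnEvent(const Overload_a& ev) override
        {
            if (bp::override f = this->get_override("on_event_a"))
                f(ev);   // the Python subclass provided a handler
            // otherwise fall through to the base class's empty default
        }

        void OnEvent(const Overload_b& ev) override
        {
            if (bp::override f = this->get_override("on_event_b"))
                f(ev);
        }
    };

    BOOST_PYTHON_MODULE(connection)
    {
        // Overload_a / Overload_b must also be exposed so the callback
        // arguments can cross into Python.
        bp::class_<Overload_a>("Overload_a");
        bp::class_<Overload_b>("Overload_b");

        bp::class_<ConnectionBaseWrap, boost::noncopyable>("ConnectionBase");
    }

A Python subclass then just defines on_event_b (or on_event_a) and the C++ callback routes into it. One caveat worth checking: if the callbacks really arrive on a separate C++ thread, the wrapper also needs to acquire the GIL before calling into Python.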

    Read the article

  • Encoding / Error Correction Challenge

    - by emi1faber
Is it mathematically feasible to encode an initial 4-byte message into 8 bytes and, if one of the 8 bytes is completely dropped and another is wrong, to reconstruct the initial 4-byte message? There would be no way to retransmit, nor would the location of the dropped byte be known. If one uses Reed-Solomon error correction with 4 "parity" bytes tacked on to the end of the 4 "data" bytes, such as DDDDPPPP, and you end up with DDDEPPP (where E is an error) and a parity byte has been dropped, I don't believe there's a way to reconstruct the initial message (although correct me if I am wrong)... What about multiplying the initial 4-byte message by a constant (or performing another mathematical operation on it), then utilizing properties of an inverse mathematical operation to determine what byte was dropped? Or impose some constraints on the structure of the message, so every other byte needs to be odd and the others need to be even. Alternatively, instead of bytes, it could also be 4 decimal digits encoded in some fashion into 8 decimal digits, where errors could be detected and corrected under the same circumstances mentioned above: no retransmission, and the location of the dropped byte is not known. I'm looking for any crazy ideas anyone might have... Any ideas out there?
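One line of reasoning worth checking (a sketch, not a definitive answer): an (n, k) = (8, 4) Reed-Solomon code over GF(2^8) has minimum distance d = n - k + 1 = 5 and can correct e errors plus s erasures whenever 2e + s <= d - 1 = 4. A drop at an unknown position is a deletion rather than an erasure, but with only 7 bytes received there are just 8 candidate positions where the missing byte could go, so a decoder can try each one:

    // sketch only: rsDecode is a hypothetical RS(8,4) decoder that accepts
    // one known erasure position and corrects up to one additional error
    // (budget: 2*1 + 1 = 3 <= 4, so each attempt is within the code's power)
    for (int pos = 0; pos < 8; pos++)
    {
        byte[] candidate = InsertErasureAt(received7Bytes, pos); // hypothetical helper
        byte[] message;
        if (rsDecode(candidate, pos, out message))
            return message;   // a consistent decode was found
    }

If more than one position yields a clean decode, the result is ambiguous, so a practical design would spend part of the budget on a checksum to disambiguate - but the raw arithmetic says the 4-into-8 goal is not hopeless.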

    Read the article

  • Service reference addition issue in visual studio 2010

    - by user293072
I am currently working on an application that allows reverse geocoding using Silverlight + Bing Maps. The thing is that I want to add a reference to the reverse geocoding service documented on MSDN (http://msdn.microsoft.com/en-us/library/cc879136.aspx), i.e. http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl, but when I try to add the reference in VS2010, I get the following error:

    The document at the url http://dev.virtualearth.net/webservices/v1/metadata/geocodeservice/geocodeservice.wsdl was not recognized as a known document type. The error message from each known type may help you fix the problem:
    Report from 'XML Schema' is ''', hexadecimal value 0x1F, is an invalid character. Line 1, position 1.'.
    Report from 'DISCO Document' is ''', hexadecimal value 0x1F, is an invalid character. Line 1, position 1.'.
    Report from 'WSDL Document' is 'There is an error in XML document (1, 1).'. '', hexadecimal value 0x1F, is an invalid character. Line 1, position 1.
    Metadata contains a reference that cannot be resolved: 'http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl'.
    Content Type application/soap+xml; charset=utf-8 was not supported by service http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl. The client and service bindings may be mismatched.
    The remote server returned an error: (415) Unsupported Media Type.
    If the service is defined in the current solution, try building the solution and adding the service reference again.

It is good to mention that I can access the service URL from the browser (with a "no style information" warning). I am aware that there are other reverse geocoding services out there, but I am somewhat forced by certain circumstances to use only Microsoft-related components/services. Please help :)
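One clue worth chasing (my reading of the error, not a confirmed diagnosis): 0x1F is the first byte of the gzip magic number (0x1F 0x8B), so the metadata endpoint appears to be returning gzip-compressed content that the Add Service Reference wizard does not decompress. A workaround to try is fetching the WSDL manually with decompression enabled, saving it locally, and adding the service reference from the saved file:

    using System;
    using System.IO;
    using System.Net;

    class FetchWsdl
    {
        static void Main()
        {
            var request = (HttpWebRequest)WebRequest.Create(
                "http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl");
            // transparently un-gzips the response the wizard chokes on
            request.AutomaticDecompression =
                DecompressionMethods.GZip | DecompressionMethods.Deflate;

            using (var response = request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                File.WriteAllText("geocodeservice.wsdl", reader.ReadToEnd());
            }
        }
    }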

    Read the article

< Previous Page | 46 47 48 49 50 51 52 53 54 55 56 57  | Next Page >