Search Results

Search found 36283 results on 1452 pages for 'value of certification'.


  • How can I enter the network security field?

    - by Master
    I am thinking of entering the network security field. It could mean securing Windows networks or Linux networks, but I don't have the full picture of how that area is divided; I only have a vague idea. I want a position where a company calls me in to check their systems to see if they are secure, or where a government hires me to secure a network from external access, anything like that. Can anyone give me some idea of how to start? Is there any scope in that area, and how is it growing for the future? Are there any certifications I can do to start with? Thanks.

    Read the article

  • Public key infrastructure - distributing bad root certificates

    - by iamrohitbanga
    Suppose a hacker launches a new Linux distro with Firefox bundled with it. A browser ships with the certificates of the root certification authorities of the PKI, and because Firefox is a free browser, anyone can package it with fake root certificates. Could this be used to falsely authenticate some websites, and how? Many existing Linux distros are mirrored by volunteers, who could easily package software containing certificates that enable such attacks. Is the above possible? Has such an attack taken place before?

    Read the article

  • What do I need for SSL?

    - by Ency
    Hi guys, just a quick question; I'm kind of confused. I've set up my own certification authority and I can create requests and sign them. But I'm not sure what I need to give to Apache. Currently I've got:

    - CA private key
    - CA certificate
    - Website private key
    - Website certificate
    - Website certificate request (I think I do not need it, but just to be clear)

    Until today I was using a snakeoil certificate, but I've decided to run more SSL services, so a CA looked like a good solution. My Apache was configured well, but now I am not sure what I should provide to Apache in the following directives:

        SSLCertificateKeyFile /path/to/website-private-key
        SSLCertificateFile    /path/to/ca-certificate

    But then I got:

        [Mon Dec 27 12:09:33 2010] [warn] RSA server certificate CommonName (CN) `EServer' does NOT match server name!?
        [Mon Dec 27 12:09:33 2010] [error] Unable to configure RSA server private key
        [Mon Dec 27 12:09:33 2010] [error] SSL Library Error: 185073780 error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch

    Something tells me the warning is quite telling: "EServer" is the common name of the CA, so I think I should not use the CA certificate in SSLCertificateFile, should I? Do I need to create a certificate from the website private key, or something else?
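    A minimal sketch of what the virtual host usually ends up looking like with a private CA; the file paths are placeholders, not values from the question. The website certificate and its matching private key go together, while the CA certificate is only supplied as the chain:

        <VirtualHost *:443>
            SSLEngine on
            # The website's own certificate, signed by your CA
            SSLCertificateFile      /etc/apache2/ssl/website.crt
            # The private key the website certificate was issued for
            SSLCertificateKeyFile   /etc/apache2/ssl/website.key
            # Your CA certificate, sent to clients as the chain
            SSLCertificateChainFile /etc/apache2/ssl/ca.crt
        </VirtualHost>

    The "key values mismatch" error appears exactly when SSLCertificateFile and SSLCertificateKeyFile do not belong to the same key pair, which is what happens if the CA certificate is used in place of the website certificate.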

    Read the article

  • BizTalk 2009 - Custom Functoid Categories

    - by StuartBrierley
    I recently had cause to code a number of custom functoids to aid with some maps that I was writing. Once these were developed and deployed to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions, a quick refresh allowed them to appear in the toolbox. After dropping these on a map and configuring the appropriate inputs, I tested the map to check that they worked as expected. All but one of the functoids worked as expected, but the final functoid appeared not to be firing at all. I had already tested the code in a simple test harness application, so I was confident in the code, but I still needed to figure out what the problem might be. Debugging the map helped me on the way; for some reason the functoid in question was not shown correctly - the functoid definition was wrong.

    After some investigation I found that the functoid type you assign when coding a custom functoid affects more than just the category it appears in; different functoid types have different capabilities, including what they can link to. For example, a Logical functoid cannot provide content for an output element; it can only say whether the element exists. Map this via a Value Mapping functoid and the value of true or false can be seen in the output element.

    The functoid I was having problems with was one where I had used the XPath functoid type. This had seemed a good fit, as I was looking up content in a config file using XPath and I wanted the functoid to appear in the Advanced area. From the table below you can see that this functoid type is marked as "Internal Use Only", preventing it from being used for custom functoids. Changing my type to String allowed the functoid to function as expected. (A minimal sketch of where this type is set follows the table.)

    Category         | Description                                                                                                  | Toolbox Group
    -----------------|--------------------------------------------------------------------------------------------------------------|--------------
    Assert           | Internal Use Only                                                                                              | Advanced
    Conversion       | Converts characters to and from numerics and converts numbers from one base to another.                       | Conversion
    Count            | Internal Use Only                                                                                              | Advanced
    Cumulative       | Performs accumulations of the value of a field that occurs multiple times in a source document and outputs a single output. | Cumulative
    DatabaseExtract  | Internal Use Only                                                                                              | Database
    DatabaseLookup   | Internal Use Only                                                                                              | Database
    DateTime         | Adds date, time, date and time, or adds days to a specified date, in output data.                             | Date/Time
    ExistenceLooping | Internal Use Only                                                                                              | Advanced
    Index            | Internal Use Only                                                                                              | Advanced
    Iteration        | Internal Use Only                                                                                              | Advanced
    Keymatch         | Internal Use Only                                                                                              | Advanced
    Logical          | Controls conditional behavior of other functoids to determine whether particular output data is created.      | Logical
    Looping          | Internal Use Only                                                                                              | Advanced
    MassCopy         | Internal Use Only                                                                                              | Advanced
    Math             | Performs specific numeric calculations such as addition, multiplication, and division.                        | Mathematical
    NilValue         | Internal Use Only                                                                                              | Advanced
    Scientific       | Performs specific scientific calculations such as logarithmic, exponential, and trigonometric functions.      | Scientific
    Scripter         | Internal Use Only                                                                                              | Advanced
    String           | Manipulates data strings by using well-known string functions such as concatenation, length, find, and trim.  | String
    TableExtractor   | Internal Use Only                                                                                              | Advanced
    TableLooping     | Internal Use Only                                                                                              | Advanced
    Unknown          | Internal Use Only                                                                                              | Advanced
    ValueMapping     | Internal Use Only                                                                                              | Advanced
    XPath            | Internal Use Only                                                                                              | Advanced

    Links:
    http://msdn.microsoft.com/en-us/library/microsoft.biztalk.basefunctoids.functoidcategory(BTS.20).aspx
    http://blog.eliasen.dk/CommentView,guid,d33b686b-b059-4381-a0e7-1c56e808f7f0.aspx
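    As a minimal sketch of where the functoid type is set (the class, resource and method names here are hypothetical, and the resource strings are assumed to exist in the project), the category is assigned in the constructor of a class deriving from BaseFunctoid:

        using System.Reflection;
        using Microsoft.BizTalk.BaseFunctoids;

        public class ConfigLookupFunctoid : BaseFunctoid
        {
            public ConfigLookupFunctoid()
            {
                this.ID = 6001; // custom functoids use IDs above 6000
                SetupResourceAssembly("MyFunctoids.FunctoidResources", Assembly.GetExecutingAssembly());
                SetName("IDS_CONFIGLOOKUP_NAME");
                SetTooltip("IDS_CONFIGLOOKUP_TOOLTIP");
                SetDescription("IDS_CONFIGLOOKUP_DESCRIPTION");
                SetBitmap("IDB_CONFIGLOOKUP_BITMAP");

                // String, not XPath - XPath is internal use only (see the table above)
                this.Category = FunctoidCategory.String;

                SetMinParams(1);
                SetMaxParams(1);
                AddInputConnectionType(ConnectionType.AllExceptRecord);
                this.OutputConnectionType = ConnectionType.AllExceptRecord;

                SetExternalFunctionName(GetType().Assembly.FullName, GetType().FullName, "Lookup");
            }

            // Called by the map engine at runtime
            public string Lookup(string key)
            {
                // look the value up in the config file here
                return key;
            }
        }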

    Read the article

  • Best way to go about sorting 2D sprites in an "RPG Maker" styled RPG

    - by Aaron Stewart
    I am trying to come up with the best way to draw overlapping sprites without any issues. I was thinking of using a SortedDictionary and setting each entity's key to its Y position relative to the max bound of the simulation, i.e. its Z value. I'd update the "Z" value in the update method each frame, if the entity's position has changed at all. For those who don't know what I mean: I want characters standing in front of another character to be drawn on top, and characters behind to be drawn behind. I'm leery of using SpriteBatch back-to-front or front-to-back; I've been doing some searching and have been under the impression they are a bad idea, and I want to know exactly how other people deal with their depth sorting. Ultimately I'm just trying to come up with the best sorting method, as good practice, before I get too far in to refactor the system effectively.
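    Not from the question, but a minimal sketch of one common alternative in XNA/MonoGame terms: re-sort by Y each frame yourself and draw in that order. The Entity type is hypothetical; OrderBy is a stable sort, so entities with equal Y keep a consistent relative order between frames:

        using System.Collections.Generic;
        using System.Linq;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class Entity
        {
            public Vector2 Position;   // Y doubles as the depth key
            public Texture2D Texture;
        }

        public static class DepthSortedRenderer
        {
            // Draw back-to-front: entities lower on the screen (larger Y) draw last, on top.
            public static void Draw(SpriteBatch spriteBatch, List<Entity> entities)
            {
                spriteBatch.Begin();
                foreach (var entity in entities.OrderBy(e => e.Position.Y))
                {
                    spriteBatch.Draw(entity.Texture, entity.Position, Color.White);
                }
                spriteBatch.End();
            }
        }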

    Read the article

  • How to use the call web service action in a SharePoint 2013 workflow

    - by ybbest
    In SharePoint 2013 you can use the call web service action together with loops. In this post, I will show you how to achieve this.

    1. Create a list workflow called CallWebService.
    2. Create a variable called listurl and assign it the value http://sp2010/_vti_bin/listdata.svc.
    3. Create a dictionary variable called RequestHeaders and add the key/value pairs shown below.
    4. Call the web service with the HttpHeaders you just built in the previous step and store the response in the variable ResponseContent.
    5. The ResponseContent variable holds dynamic values (in SharePoint Designer this is called the dictionary type), a new feature of SharePoint 2013 workflows. We can use a count action to count the number of items in the variable.
    6. You can use a loop in a SharePoint 2013 workflow and output each list title as shown below.
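    The post itself does not spell out the header values; as an assumption, the pairs typically used so that listdata.svc returns JSON are:

        Accept       : application/json;odata=verbose
        Content-Type : application/json;odata=verbose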

    Read the article

  • Extending Currying: Partial Functions in Javascript

    - by kerry
    Last week I posted about function currying in JavaScript. This week I am taking it a step further by adding the ability to call partial functions. Suppose we have a graphing application that will pull data via Ajax and perform some calculation to update a graph, using a method with the signature updateGraph(id, value). To do this, we have to do something like this:

        for (var i = 0; i < objects.length; i++) {
            Ajax.request('/some/data', {id: objects[i].id}, function(json) {
                updateGraph(json.id, json.value);
            });
        }

    This works fine. But, using this method, we need to return the id in the JSON response from the server. That works, but it is not very elegant and increases network traffic. Using partial function currying we can bind the id parameter and add the second parameter later (when returning from the asynchronous call). To do this, we will need the updated curry method. I have added support for sending additional parameters at runtime for curried methods.

        Function.prototype.curry = function(scope) {
            scope = scope || window;
            var args = [];
            for (var i = 1, len = arguments.length; i < len; ++i) {
                args.push(arguments[i]);
            }
            var m = this;
            return function() {
                // Copy the bound arguments so repeated calls don't accumulate
                // runtime arguments in the shared closure array.
                var callArgs = args.slice();
                for (var i = 0, len = arguments.length; i < len; ++i) {
                    callArgs.push(arguments[i]);
                }
                return m.apply(scope, callArgs);
            };
        };

    To partially curry this method we call the curry method with the id parameter; the request will then call back on it with just the value. Any additional parameters are appended to the method call.

        for (var i = 0; i < objects.length; i++) {
            var id = objects[i].id;
            Ajax.request('/some/data', {id: id}, updateGraph.curry(id));
        }

    As you can see, partial currying is a very useful tool, and this simple method should be a part of every developer's toolbox.

    Read the article

  • Final agenda - Oracle Exadata & Manageability Partner Community Forum at OpenWorld

    - by Javier Puerta
    Just a few days to go until Oracle OpenWorld and our Exadata & Manageability Partner Community Forum for EMEA partners. The event will take place on the afternoon of Monday, October 1st, 2012, during the Oracle OpenWorld week. For all partners who have confirmed their attendance, the final detailed agenda is below. I look forward to meeting again in San Francisco with all of you who can attend, and I hope that you will find the sessions useful for your business.

    FINAL AGENDA
    Oracle Exadata & Manageability EMEA Partner Community Forum at Oracle OpenWorld 2012 in San Francisco, USA
    Monday, October 1st, 2012

    15:30 - Reception of participants; networking coffee served
    16:00 - Welcome (Hans-Peter Kipfer, VP Engineered Systems, Oracle EMEA)
    16:10 - Next challenges in building and managing clouds (Javier Cabrerizo, VP, Global Business Development for Exadata, Oracle Corp.)
    16:30 - Partner experience 1: IT modernization, simplification and cost reduction. The case of a customer in transportation & logistics with custom applications and SAP. The Technological Renewal Model, built by aligning the innovation of Oracle's Engineered Systems and Capgemini's service delivery excellence, has resulted in significant cost savings for the client. (Francisco Bermúdez, Country Leader Infrastructure Services, Capgemini, Spain)
    16:55 - Partner experience 2: The Nvision cloud project. NCloud is an innovative design that combines advanced technical solutions, virtualization, and dynamic management of IT resources, providing a complete "as-a-Service" offering for infrastructure, database, middleware, and applications. (Dmitry Krasilov, Head of Oracle Competence Center, Nvision Group, Russia)
    17:20 - Partner experience 3: From Exadata Ready to Exadata Optimized, an ISV experience. The experience of WeDo Technologies in the process and benefits that started with an Exadata Ready certification and ended up as Exadata Optimized. (Miguel Alves, Product Business Solutions Manager, WeDo Technologies, Portugal)
    17:45 - Next steps in engaging with Oracle (Cengiz Yilmaz, Director Partner Strategy, Oracle EMEA Engineered Systems; Patrick Rood, Manageability Partner Business, Oracle EMEA)
    18:00 - Wrap-up & networking

    Time and location: Monday, October 1st, 2012, 15:30 - 18:00 PST, Grand Hyatt San Francisco, 345 Stockton Street, San Francisco (Conference Theater). It is a 15-minute walk from the OOW Moscone Center.

    Read the article

  • When should a method of a class return the same instance after modifying itself?

    - by modiX
    I have a class that has three methods A(), B() and C(), which modify their own instance. While a method has to return an instance when that instance is a separate copy (just as Clone() does), I have a free choice between returning void or the same instance (return this;) when the method modifies the instance in place and has no other value to return. When returning the same modified instance, I can write neat method chains like obj.A().B().C();. Would this be the only reason for doing so? Is it even okay to modify the instance and return it, too? Or should a method only return a copy and leave the original object as before? Because when the same modified instance is returned, the user might assume the returned value is a copy; otherwise, why would it be returned at all? If it's okay, what's the best way to make such things clear on the method?
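    For illustration (not from the question), a minimal C# sketch of both conventions side by side, with hypothetical names; documenting "returns this instance to allow chaining" on the method is one common way to make the intent clear:

        using System.Collections.Generic;

        public class Sequence
        {
            private readonly List<int> items = new List<int>();

            /// <summary>Appends a value. Modifies and returns this same instance to allow chaining.</summary>
            public Sequence Add(int value)
            {
                items.Add(value);
                return this;   // not a copy: callers keep operating on the original
            }

            /// <summary>Returns a new, independent copy; the original is left unchanged.</summary>
            public Sequence Clone()
            {
                var copy = new Sequence();
                copy.items.AddRange(items);
                return copy;
            }
        }

        // Usage: chaining mutates one object; cloning branches off a copy.
        // var s = new Sequence().Add(1).Add(2).Add(3);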

    Read the article

  • Arrow ECS: a VAD with vision

    - by A&C Redaktion
    Arrow ECS supports Oracle partners in establishing themselves successfully for the long term. As a Value Added Distributor (VAD) for the Oracle software and hardware portfolio, Arrow offers partners valuable added services, for example in the areas of consulting, sales and product marketing. The advantage: partners can concentrate fully on their core business. Martin Wilhelm, Manager Business Unit Enterprise Solutions, Herbert Varga from Product Management, and Maria Keller, sales expert for Oracle products, explain in the video exactly how the collaboration works. Arrow ECS stands for competent and reliable cooperation with its partners and has already been named Oracle Global Value Added Distributor of the Year several times.

    Read the article

  • What is the best book on Silverlight 4?

    - by mbcrump
    Silverlight/Expression 4 books! I recently stumbled upon a post asking, "What is the best book on Silverlight 4?" In the age of the internet, it can be hard for anyone searching for a good book to actually find it. I have read a few Silverlight 4/Expression books in 2010 and decided to post this "best of" collection. Instead of reading multiple books, you can cut your list down to whatever category fits you. With Silverlight 5 coming soon, now is the time to get up to speed with what Silverlight 4 can offer. Be sure to read the full review at the bottom of each section.

    For the beginner Silverlight developer: both of these books contain very simple applications and will get you started very fast. Book review: Microsoft Silverlight 4 Step by Step.

    For those who want to master Expression Blend 4: this is a hands-on kind of book. Victor gets you started early on with some sample applications and quickly dives deep into Storyboards and other animations. If you want to learn Blend 4, this is the place to start. Book review: Foundation Expression Blend 4 by Victor Gaudioso.

    If you are aiming to learn more about the business side of Silverlight, check out the following two books.

    Finally, for those who want to master Silverlight 4, it really boils down to the following two books: Book review: Silverlight 4 Unleashed by Laurent Bugnion, and Book review: Silverlight 4 in Action by Pete Brown. I can't describe how much I've actually learned from both of these books. I would also recommend these books if you are preparing for your Silverlight 4 certification.

    For a complete list of all Silverlight 4 books, check out http://www.silverlight.net/learn/books/ and don't forget to subscribe to my blog.

    Read the article

  • BOEING, EA and UNDERWRITER LABS @ Oracle Open World 2012 General Session (GEN9504): Innovation Platform for Oracle Apps, including Oracle Fusion Applications

    - by Sanjeev Sharma
    What does it take to deliver social, mobile, cloud and business analytic capabilities? Oracle Fusion Middleware is the leading innovation platform for today's new business applications and the building block of Oracle Fusion Applications. Join Amit Zavery, Vice President of Fusion Middleware Product Management, to discuss Oracle Fusion Applications' architecture and the strategy roadmap for Oracle Fusion Middleware.

    Underwriters Laboratories is the world's leading provider of product safety and certification testing services. To support its business growth from $1B to $5B in the next 5 years, Underwriters Laboratories is undergoing a major business transformation. Underpinning these growth plans and the associated business transformation is a major datacenter modernization effort to consolidate its existing Oracle applications (E-Business Suite, Siebel CRM, BI etc.) and middleware components (Oracle SOA Suite, Oracle AIA etc.) on a standardized application platform. Underwriters Laboratories has identified Oracle Engineered Systems (Exalogic and Exadata) as the cornerstone of this endeavor, which will eventually support 10,000 employees, 87,000 manufacturers and 600,000 catalog items.

    Hear senior business leaders from Boeing, Electronic Arts and Underwriters Laboratories discuss how their organizations are leveraging Oracle Fusion Middleware and Oracle Applications to improve productivity, lower IT costs and lay a foundation for business innovation at the following general session at Oracle OpenWorld 2012:

    Session: GEN9504 - General Session: Innovation Platform for Oracle Apps, Including Oracle Fusion Applications
    Date: Monday, 1 Oct, 2012
    Time: 10:45 am - 11:45 am (PST)
    Venue: Moscone West (3002 / 3004)

    Read the article

  • IE9 Loses Some CSS After Particular Form Submit [migrated]

    - by Asherion
    The site I am editing has a search form. For the record, there are several other forms on the site, contact and the like; this is the only one with an issue. Upon submission of the form, SOME of the styling is lost in IE9 (possibly other versions of IE, I haven't tested that yet). Primarily, the margins and colors set on html and body appear to have been lost. Menus, banner, text, etc. all appear to retain their styles. All styles are on one sheet. Any helpful advice? Here are the contents of the search page, the PHP used to check for the form, and the CSS that I think is lost.

    THE HTML:

        <div id="search">
            <br />
            <div style="float:right;font-size:.8em;">
                <form name="form_sidesearch" action="search.html" method="post">
                    <input type="hidden" name="action" value="search" />
                    <input type="text" name="search_value" value="<?php echo $systems_primary->search_value ?>" />
                    <input type="submit" name="submit_search" value="Search Website" />
                </form>
                <br />
            </div>
        </div>
        <?php echo stripslashes($search_results); ?>

    THE PHP:

        <?php
        // -- Begin Search ------------------------------------------------------
        if ($_REQUEST["action"] === "search") {
            if (strlen($_REQUEST["pg"]) <= 0) {
                $_REQUEST["pg"] = 1;
            }
            $search_results = $systems_primary->search_website(
                "index",
                urldecode($_REQUEST["search_value"]),
                "<div class=\"listing ui-corner-all\"><a href=\"{ENTRY_URL}\" title=\"{ENTRY_TITLE}\" class=\"listing_title\">{ENTRY_TITLE}</a>{ENTRY_CONTENT} <a href=\"{ENTRY_URL}\" title=\"{ENTRY_TITLE}\" style=\"font-size:.8em;\">...read more</a></div><br /><br />",
                345, "all", 10, $_REQUEST["pg"]);
        }
        // -- End Search --------------------------------------------------------
        ?>

    THE LOST CSS (could be more):

        html {
            background-color: #F6E6C8;
            font-size: 16px;
            font: Helvetica;
        }

        body {
            width: 1027px;
            margin: 0 auto;
            background-color: #ffffff;
            font: arial, times new roman, sans-serif;
        }

    Read the article

  • How to Customize Fonts and Colors for Gnome Panels in Ubuntu Linux

    - by The Geek
    Earlier this week we showed you how to make the Gnome panels totally transparent, but you really need some customized fonts and colors to make the effect work better. Here's how to do it. This article is the first part of a multi-part series on how to customize the Ubuntu desktop, written by How-To Geek reader and ubergeek, Omar Hafiz.

    Changing the Gnome Colors the Easy Way

    You'll first need to install Gnome Color Chooser, which is available in the default repositories (the package name is gnome-color-chooser). Then go to System > Preferences > Gnome Color Chooser to launch the program. When you see all these tabs you immediately know that Gnome Color Chooser does not only change the font color of the panel, but also the color of the fonts all over Ubuntu, desktop icons, and many other things as well. Now switch to the panel tab; here you can control everything about your panels. You can change the font, font color, background and background color of the panels and start menus. Tick the "Normal" option and choose the color you want for the panel font. If you want, you can change the hover color of the buttons on the panel too.

    A little below the color option are the font options; these include the font, font size, and the X and Y positioning of the font. The first two options are pretty straightforward: they change the typeface and the size. The X-Padding and Y-Padding may confuse you, but they are interesting: they can give your panels a nice look by increasing the space between items on your panel (the original article shows X-Padding and Y-Padding examples here).

    The bottom half of the window controls the look of your start menus, which are the Applications, Places, and System menus. You can customize them just the way you did with the panel.

    Alright, this was the easy way to change the font of your panels.

    Changing the Gnome Theme Colors the Command-Line Way

    The other, harder (not so hard really) way is changing the configuration files that tell your panel how it should look. In your Home Folder, press Ctrl+H to show the hidden files, then find the file ".gtkrc-2.0", open it, and insert this line in it. If there are any other lines in the file, leave them intact.

        include "/home/<username>/.gnome2/panel-fontrc"

    Don't forget to replace <username> with your user account name. When done, close and save the file. Now navigate to the folder ".gnome2" in your Home Folder, create a new file and name it "panel-fontrc". Open the file you just created with a text editor and paste the following in it:

        style "my_color" {
            fg[NORMAL] = "#FF0000"
        }
        widget "*PanelWidget*" style "my_color"
        widget "*PanelApplet*" style "my_color"

    This text will make the font red. If you want other colors you'll need to replace the hex value/HTML notation (in this case #FF0000) with the value of the color you want. To get the hex value you can use GIMP, or Gcolor2, which is available in the default repositories, or you can right-click on your panel > Properties > Background tab, then click to choose the color you want and copy the hex value. Don't change anything else in the text. When done, save and close. Now press Alt+F2 and enter "killall gnome-panel" to force the panel to restart, or log out and log in again.

    Most of you will prefer the first way of changing the font and color for its ease of use and because it gives you many more options, but some may not have the ability or desire to download and install a new program on their machine, or may have reasons of their own for not using it; that's why we provided both ways.

    Read the article

  • Ubuntu 14.04 and Dell E6440 - sound and mouse only working after suspend/resume

    - by slawek.mikula
    I've installed fresh Ubuntu 14.04 on a Dell Latitude E6440 and I've encountered two strange issues after a full system start/restart:

    - Mouse scrolling after a full restart is very fast: one turn of the wheel causes a very large change in value (window scroll, sound change, etc.). But when I suspend and resume the laptop, it starts working correctly (one turn of the wheel gives a small change in value, the same as in previous versions of Ubuntu).
    - Sound from the speakers has the same issue: after a full restart, no sound comes from the internal laptop speakers, though it works through headphones. After suspend/resume, the internal speakers start to work.

    What could be the cause of these issues?

    Read the article

  • How to display values from another website on a new HTML page?

    - by user3098728
    How can I display a value from a different website in a new HTML file? This is an example of the field values that need to be displayed in the new HTML file; I want to show them in the Contract ID input box of this page (JSFiddle). I have two JS functions that should display the values, but unfortunately it's not working and I don't know how to display the value in the HTML input box. Please help me. Thank you.

    Here is the JS file that reads the values:

        function scanLapVerification() {
            try {
                var page_title = "Title";
                var el = getElement(document, "class", "view-operator-verification-title", "");
                if (!el || el.length == 0) return;
                if (el[0].innerText != page_title) return;
                el = getElement(document, "class", "workflowActivityDetailPanel", "");
                if (el && el.length > 0) {
                    var eltr = getElement(el[0], "tag", "tr", "");
                    if (eltr && eltr.length > 0) {
                        // Read Contract ID
                        var contractId = { CI: { id: null } };
                        var con_id = null;
                        for (var i = 0; i < eltr.length; i++) {
                            var tr_text = eltr[i].innerText;
                            if (tr_text.substr(0, "Contract ID".length) == "Contract ID") con_id = "CI";
                            if (con_id && tr_text.substr(0, "Contract ID".length) == "Contract ID") {
                                contractId[con_id].id = tr_text.substr("Contract ID".length + 1,
                                    tr_text.length - "Contract ID".length - 1);
                            }
                        }
                        var contract_id = contractId.CI.id;
                        return { content: "cid_check", con_id: con_id };
                    }
                }
                return { status: "KO" };
            } catch (e) {
                alert("Exception: scanLapVerification\n" + e.Description);
                return { status: "KO", message: e };
            }
        }

    And here is the 2nd JS that displays the value on a new HTML page:

        function scanLapVerification() {
            chrome.tabs.sendRequest(tabLapVerification, { method: "scanLapVerification" }, function (response) {
                msgbox("receiveResponse: scanLapVerification " + jsonToString(response, "JSON"));
                // maintaining state in the background
                if (response.data.content == "cid_check") {
                    // Popup window features
                    var popupWindow = null;
                    var name;
                    var width = 550;
                    var height = 200;
                    var left = parseInt((screen.availWidth / 2) - (width / 2));
                    var top = parseInt((screen.availHeight / 2) - (height / 2));
                    var windowFeatures = "width=" + width + ",height=" + height + ",left=" + left +
                        ",top=" + top + ",screenX=" + left + ",screenY=" + top;
                    // Input new address with popup window
                    if (confirm("Does the client have a new address?") == true) {
                        popupWindow = window.open('/htmlname.htm', "title",
                            windowFeatures + encodeURIComponent(response.data.contract_id));
                        popupWindow.focus();
                    } else {
                        name = "";
                    }
                }
            });
        }

    Read the article

  • Oracle Enterprise Content Management 11gR1 Patch Set 3 Released

    - by michelle.huff
    We're pleased to announce an updated patch set for Oracle Enterprise Content Management 11gR1: PS3 (11.1.1.4.0). Patch Set 3 (PS3) supports additional platforms and applications, and adds several new features to the products. Highlights include:

    - Content Server (repository for UCM, URM & I/PM): new security capabilities, file store provider updates.
    - Desktop Integration Suite: Windows 7 64-bit and Office 2010 (32 & 64-bit) support and a new "Recent Content Items" menu.
    - Universal Content Management (UCM): Site Studio Manager for Site Studio for External Applications, new template management options and the ability to run Site Studio & Site Studio for External Applications 11g components on Content Server 10gR3.
    - Imaging and Process Management (I/PM): now certified with Oracle Business Process Management (BPM) 11g, Oracle Single Sign On (OSSO) 10g and Oracle Access Manager (OAM) 10g; export search results to Microsoft Excel.
    - ECM Adapter for PeopleSoft: support for UCM 11g Managed Attachments (support for 10g released earlier in 2010) and certification with PeopleTools 8.50.
    - Information Rights Management (IRM): desktop support for Microsoft Office 2010, Adobe Reader X and Microsoft SharePoint 2010.

    Customer Webcast: We'll be covering this new release in our Quarterly Customer Update Webcast scheduled for this week, January 19/20, 2011. Register today.

    More information: Downloads are now available on Oracle Technology Network (OTN); the release will be available via eDelivery soon. Read the updated ECM documentation for 11.1.1.4.0, review the ECM 11.1.1.4.0 Upgrade & Patch Guides, and see the Release Notes.

    Read the article

  • Is there a constant for "end of time"?

    - by Nick Rosencrantz
    For some systems, the time value 9999-12-31 is used as the "end of time", i.e. the end of the range of time the computer can calculate. But what if it changes? Wouldn't it be better to define this time as a built-in constant? In C and other programming languages there is usually a constant such as MAX_INT or similar to get the largest value an integer can hold. Why is there no similar MAX_TIME, i.e. a constant set to the "end of time", which for many systems is 9999-12-31? To avoid the problem of hardcoding a wrong year (9999), couldn't these systems introduce a variable for the "end of time"?
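    Some platforms do expose exactly such a constant. In .NET, for example, DateTime.MaxValue is the 9999-12-31 sentinel the question describes:

        using System;

        class EndOfTime
        {
            static void Main()
            {
                // 9999-12-31T23:59:59.9999999, the largest value the type can represent
                Console.WriteLine(DateTime.MaxValue);

                // The same instant expressed from a Unix timestamp (seconds)
                Console.WriteLine(DateTimeOffset.FromUnixTimeSeconds(253402300799)); // 9999-12-31 23:59:59 UTC
            }
        }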

    Read the article

  • Why enumerator structs are a really bad idea

    - by Simon Cooper
    If you've ever poked around the .NET class libraries in Reflector, I'm sure you would have noticed that the generic collection classes all have implementations of their IEnumerator as a struct rather than a class. As you will see, this design decision has some rather unfortunate side effects... As is generally known in the .NET world, mutable structs are a Very Bad Idea; and there are several other blogs around explaining this (Eric Lippert's blog post explains the problem quite well). In the BCL, the generic collection enumerators are all mutable structs, as they need to keep track of where they are in the collection. This bit me quite hard when I was coding a wrapper around a LinkedList<int>.Enumerator. It boils down to this code:

        sealed class EnumeratorWrapper : IEnumerator<int>
        {
            private readonly LinkedList<int>.Enumerator m_Enumerator;

            public EnumeratorWrapper(LinkedList<int> linkedList)
            {
                m_Enumerator = linkedList.GetEnumerator();
            }

            public int Current
            {
                get { return m_Enumerator.Current; }
            }

            object System.Collections.IEnumerator.Current
            {
                get { return Current; }
            }

            public bool MoveNext()
            {
                return m_Enumerator.MoveNext();
            }

            public void Reset()
            {
                ((System.Collections.IEnumerator)m_Enumerator).Reset();
            }

            public void Dispose()
            {
                m_Enumerator.Dispose();
            }
        }

    The key line here is the MoveNext method. When I initially coded this, I thought that the call to m_Enumerator.MoveNext() would alter the enumerator state in the m_Enumerator class variable, and so the enumeration would proceed in an orderly fashion through the collection. However, when I ran this code it went into an infinite loop - the m_Enumerator.MoveNext() call wasn't actually changing the state in the m_Enumerator variable at all, and my code was looping forever on the first collection element. It was only after disassembling that method that I found out what was going on. The MoveNext method above results in the following IL:

        .method public hidebysig newslot virtual final instance bool MoveNext() cil managed
        {
            .maxstack 1
            .locals init (
                [0] bool CS$1$0000,
                [1] valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator CS$0$0001)
            L_0000: nop
            L_0001: ldarg.0
            L_0002: ldfld valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator EnumeratorWrapper::m_Enumerator
            L_0007: stloc.1
            L_0008: ldloca.s CS$0$0001
            L_000a: call instance bool [System]System.Collections.Generic.LinkedList`1/Enumerator::MoveNext()
            L_000f: stloc.0
            L_0010: br.s L_0012
            L_0012: ldloc.0
            L_0013: ret
        }

    Here, the important line is L_0002 - m_Enumerator is accessed using the ldfld operator, which does the following: "Finds the value of a field in the object whose reference is currently on the evaluation stack." So, what the MoveNext method is doing is the following:

        public bool MoveNext()
        {
            LinkedList<int>.Enumerator CS$0$0001 = this.m_Enumerator;
            bool CS$1$0000 = CS$0$0001.MoveNext();
            return CS$1$0000;
        }

    The enumerator instance being modified by the call to MoveNext is the one stored in the CS$0$0001 variable on the stack, and not the one in the EnumeratorWrapper class instance. Hence why the state of m_Enumerator wasn't getting updated.

    Hmm, ok. Well, why is it doing this? If you have a read of Eric Lippert's blog post about this issue, you'll notice he quotes a few sections of the C# spec. In particular, 7.5.4: "...if the field is readonly and the reference occurs outside an instance constructor of the class in which the field is declared, then the result is a value, namely the value of the field I in the object referenced by E."

    And my m_Enumerator field is readonly! Indeed, if I remove the readonly from the class variable then the problem goes away, and the code works as expected. The IL confirms this:

        .method public hidebysig newslot virtual final instance bool MoveNext() cil managed
        {
            .maxstack 1
            .locals init (
                [0] bool CS$1$0000)
            L_0000: nop
            L_0001: ldarg.0
            L_0002: ldflda valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator EnumeratorWrapper::m_Enumerator
            L_0007: call instance bool [System]System.Collections.Generic.LinkedList`1/Enumerator::MoveNext()
            L_000c: stloc.0
            L_000d: br.s L_000f
            L_000f: ldloc.0
            L_0010: ret
        }

    Notice on line L_0002, instead of the ldfld we had before, we've got a ldflda, which does this: "Finds the address of a field in the object whose reference is currently on the evaluation stack." Instead of loading the value, we're loading the address of the m_Enumerator field. So now the call to MoveNext modifies the enumerator stored in the class rather than on the stack, and everything works as expected.

    Previously, I had thought enumerator structs were an odd but interesting feature of the BCL that I had used in the past to do linked list slices. However, effects like this only underline how dangerous mutable structs are, and I'm at a loss to explain why the enumerators were implemented as structs in the first place. (Interestingly, the SortedList<TKey, TValue> enumerator is a struct but is private, which makes it even more odd - the only way it can be accessed is as a boxed IEnumerator!) I would love to hear people's theories as to why the enumerators are implemented in such a fashion. And bonus points if you can explain why LinkedList<int>.Enumerator.Reset is an explicit implementation but Dispose is implicit...

    Note to self: never ever ever code a mutable struct.
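    Not from the post, but a minimal stand-alone illustration of the same readonly-copy trap, for anyone who wants to reproduce it without Reflector:

        using System;

        struct Counter
        {
            private int count;
            public int Next() { return ++count; }   // mutates the struct
        }

        class ReadonlyHolder
        {
            private readonly Counter counter;   // readonly: every read yields a copy

            public int Tick()
            {
                // Mutates a temporary copy of 'counter', not the field itself,
                // so this always returns 1. Remove 'readonly' and it counts 1, 2, 3...
                return counter.Next();
            }
        }

        class Program
        {
            static void Main()
            {
                var holder = new ReadonlyHolder();
                Console.WriteLine(holder.Tick());   // 1
                Console.WriteLine(holder.Tick());   // 1 again
            }
        }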

    Read the article

  • Stairway to XML: Level 3 - Working with Typed XML

    You can enforce the validation of an XML data type, variable or column by associating it with an XML Schema Collection. SQL Server validates a typed XML value against the rules defined in the schema collection so that INSERT or UPDATE operations will succeed only if the value being inserted or updated is valid as per the rules defined in the Schema Collection.

    Read the article

  • From physics to Java programmer?

    - by inovaovao
    I'm a physics PhD with little actual programming experience. I've always liked programming and played around with Basic and Pascal (also VB and Delphi) as a teen, but the largest actual project I completed was an assignment for the introductory computer science class in university, where I wrote a nice little program (about 1500 lines of Pascal) to display functions of 2 variables in 3D. I've had a couple of other projects in the few-hundred-lines range, but during my PhD I didn't have (or take) the time to program more (string theory is hard, guys!), besides playing around with Ruby. Now I've decided that I'm more interested in programming than in physics and have started to learn Java (hoping to pass the certification exam next week) and OO design. Still, I have trouble deciding what to focus on next (Java EE? Web development? Algorithms and C programming?) in order to maximize my employment chances. Bear in mind that I'm aiming (mostly) at the Swedish job market and that I'm 30 years old. So for the questions: Do you think I have any chance to start and make a career in IT and programming coming from physics? What would be the best strategy to maximize my value in the field? Do you have suggestions as to where my physics background might be useful?

    Read the article

  • Informed TDD - Kata "To Roman Numerals"

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata "To Roman Numerals", and whether ITDD wasn't a violation of TDD's principle of leaving out "advanced topics like mocks". I like to respond with this article to his questions. There's more to say than fits into a commentary.

    Mocks and TDD

    I don't see how TDD avoids or opposes mocks. TDD and mocks are orthogonal. TDD is about process; mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then... if the functionality you need to implement requires "expensive" resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource.

    True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for "expensive" resources. Rather, they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as "advanced", or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

    ITDD for "To Roman Numerals"

    gustav asked for the kata "To Roman Numerals". I won't explain the requirements again; you can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.

    1. Analyse

    A demonstration of TDD should never skip the analysis phase; it should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. "Formalization" in this case to me means describing the API of the required functionality. "[D]esign a program to work with Roman numerals", as written in this "requirement document", is not enough to start software development. Coding should only begin if the interface between the "system under development" and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer, even if that's your fellow developer.

    Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities.

    In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases are given from the point of view of a customer. So I just "invent" some acceptance test cases by picking roman numerals from a wikipedia article.
    They are supposed to be just "typical examples" without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that: the domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.

    2. Solve

    The usual TDD demonstration skips a solution-finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody.

    Before you code, you'd better have a plan for what to code. This does not mean you have to do "Big Design Up-Front". It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited, so the logical solution does not need to be limited or preliminary or tentative either. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The arabic "encoding" of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The "encoding" 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman "encoding" is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The "encoding" XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman "encoding" thus is simpler than the arabic. Each "digit" can be taken at face value. No multiplication with a base is required.

    But what about IV, which looks like a contradiction to the above rule? It is not, if you accept roman "digits" not being limited to single characters only. Usually I, V, X, L, C, D, M are viewed as "digits", and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as "digits". Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V, which is more difficult because here some "digits" get subtracted. Here's the list of roman "digits" with their values:

        {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

    Since I take IV, IX etc. as "digits", translating an arabic number becomes trivial. I just need to find the values of the roman "digits" making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those "digits" factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three-step process:

    1. Find all the factors
    2. Translate the factors found
    3. Compile the roman representation

    Translation is just a look-up.
    Finding, though, needs some calculation:

    1. Find the highest remaining factor fitting in the value
    2. Remember it and subtract it from the value
    3. Repeat with the remaining value and the remaining factors

    Please note: this is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction.

    With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test:

    - Test the translation
    - Test the compilation
    - Test finding the factors

    Testing the translation primarily means checking that the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check that an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check that an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check that a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check that an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement

    First I write a test for the acceptance test cases. It's red because there's no implementation, not even of the API. That's in conformance with "TDD lore", I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests as quickly as possible, but to implement my solution in a stepwise manner. That I do by "faking" it: I just "assume" three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now, one by one.

    I start with a simple "detail function": Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes, that's a white-box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. The implementation to satisfy the test is as simple as possible, right how TDD wants me to do it: KISS.

    Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away; splitting it in two steps is just for "educational purposes" here. How small your implementation steps are is a matter of your programming competency: some "see" the final code right away before their mental eye, others need to work their way towards it. Having two tests I find more important.

    Now for the next low-hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess.
    And normally I would not even have bothered to write that one, because the implementation is so simple. I don't need to test .NET framework functionality. But again: if it serves the educational purpose...

    Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill-down with a test of the fake function.

    There are two main equivalence partitions, I guess: either the first factor is the appropriate one, or some later one is. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor, which is the case since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on.

    Now for more than a single factor: Interestingly, not just one test becomes green now, but all of them. Great! You might say, then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple. Even simpler than a recursion, of which I had thought briefly during the problem-solving phase. And by the way: the acceptance tests also went green. Mission accomplished. At least functionality-wise.

    Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness.

    At the end, though, some refactoring is required. But maybe in a different way than you would expect. That's why I rather call it "cleanup". First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single-"digit" tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman "digits". This then is my final test class, and this is the final production code. Test coverage as reported by NCrunch is 100%.

    Reflexion

    Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern, as long as I can understand the code quickly. So-called "elegant" code, however, often is not easy to understand. The same goes for KISS code, especially if left unrefactored, as is often the case.

    That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution: the leaves of the functional decomposition tree.
    Where things became fuzzy, because the design did not cover any more details (as with Find_factors()), I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages:

    - Encapsulation of the implementation details was not compromised. Private methods could naturally stay private; I did not need to make them internal or public just to be able to test them.
    - I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API.

    The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development (being a researcher, being an engineer, being a craftsman) are represented as different phases. First find what there is. Then devise a solution. Then code the solution; manifest the solution in code. Writing tests first is a good practice, but it should not be taken as dogma, and above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable, and not left to a refactoring step at the end, which is often skipped for various reasons. A sketch of the resulting solution follows below.

    PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
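    The original post shows its code only as screenshots, which did not survive here. Below is a minimal sketch of the described solution (find the factors, translate them, compile the result), reconstructed from the article's own algorithm rather than copied from the original; the member names approximate those mentioned in the text:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class ArabicRomanConversions
        {
            // The article's "digit" map: IV, IX etc. count as digits of their own.
            private static readonly (int Value, string Digit)[] Factors =
            {
                (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
                (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
                (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")
            };

            public static string ToRoman(int arabic)
            {
                return Compile(Translate(Find_factors(arabic)));
            }

            // Find the highest remaining factor fitting in the value,
            // subtract it, and repeat with the remainder.
            private static IEnumerable<int> Find_factors(int value)
            {
                foreach (var factor in Factors)
                {
                    while (value >= factor.Value)
                    {
                        value -= factor.Value;
                        yield return factor.Value;
                    }
                }
            }

            // Translation is just a look-up of each factor's roman digit.
            private static IEnumerable<string> Translate(IEnumerable<int> factors)
            {
                return factors.Select(f => Factors.First(x => x.Value == f).Digit);
            }

            private static string Compile(IEnumerable<string> digits)
            {
                return string.Concat(digits);
            }
        }

        // Console.WriteLine(ArabicRomanConversions.ToRoman(1954));  // MCMLIV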

    Read the article

  • How to handle lookup data in a C# ASP.Net MVC4 application?

    - by Jim
    I am writing an MVC4 application to track documents we have on file for our clients. I'm using Code First, and have created models for my objects (Company, Document, etc.). I am now faced with the topic of document expiration. Business logic dictates certain documents will expire a set number of days past the document date. For example, Document A might expire in 180 days, Document B in 365 days, etc. I have a class for my documents as shown below (simplified for this example). What is the best way for me to create a lookup for expiration values? I want to specify that documents of type DocumentA expire in 30 days, type DocumentB in 75 days, etc. I can think of a few ways to do this:

    - A lookup table in the database I can query
    - A new property in my class (DaysValidFor) with a custom getter that returns different values based on the DocumentType
    - A method that takes in the document type and returns the number of days

    And I'm sure there are other ways I'm not even thinking of. My main concerns are a) not violating any best practices and b) maintainability. Are there any pros/cons I need to be aware of for the above options, or is this a case of "just pick one and run with it"? One last thought: right now the number of days is a value that does not need to be stored anywhere on a per-document basis. However, it is possible that business logic will change this (i.e., DocumentA's are 30 days expiration by default, but this DocumentA associated with Company XYZ will be 60 days because we like them). In that case, is a property in the Document class the best way to go, seeing as I would need to add that field to the DB?

        namespace Models
        {
            // Types of documents to track
            public enum DocumentType
            {
                DocumentA,
                DocumentB,
                DocumentC
                // etc...
            }

            // Document model
            public class Document
            {
                public int DocumentID { get; set; }

                // Foreign key to companies
                public int CompanyID { get; set; }

                public DocumentType DocumentType { get; set; }

                // Helper to translate enum's value to an integer for DB storage
                [Column("DocumentType")]
                public int DocumentTypeInt
                {
                    get { return (int)this.DocumentType; }
                    set { this.DocumentType = (DocumentType)value; }
                }

                [DataType(DataType.Date)]
                [DisplayFormat(DataFormatString = "{0:MM-dd-yyyy}", ApplyFormatInEditMode = true)]
                public DateTime DocumentDate { get; set; }

                // Navigation properties
                public virtual Company Company { get; set; }
            }
        }
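    Not from the question, but a minimal sketch of the second option (a computed property), using hypothetical expiration windows; EF Code First ignores read-only properties, so nothing extra is stored per document. A database lookup table would replace the switch if per-company overrides became a requirement:

        // Added inside the Document class above
        public int DaysValidFor
        {
            get
            {
                switch (this.DocumentType)
                {
                    case DocumentType.DocumentA: return 180;   // assumed values
                    case DocumentType.DocumentB: return 365;
                    default:
                        throw new InvalidOperationException(
                            "No expiration configured for " + this.DocumentType);
                }
            }
        }

        // Derived on the fly from the document date; never persisted
        public DateTime ExpirationDate
        {
            get { return this.DocumentDate.AddDays(this.DaysValidFor); }
        }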

    Read the article
