Search Results

Search found 11823 results on 473 pages for 'save as'.


  • TomEE Integration in NetBeans Next

    - by Geertjan
    At JavaOne 2013, there was a lot of buzz around the TomEE server, e.g., many Tweets, a nice party, and a new TomEE consulting company. For those tracking TomEE developments, it is interesting to note that the NetBeans IDE development builds have recently had TomEE support added to them. Note: The TomEE support described here is not in NetBeans IDE 7.4, but in development builds for the next release of NetBeans IDE. For example, with NetBeans IDE development builds you're able to:

    - register TomEE as a server in the Services window (TomEE has several distributions; one can use the "with JAX-RS" one, for example)
    - create a Java EE 6 web project (e.g., Maven based) against this server
    - create JPA entities from a database
    - create JAX-RS classes from JPA entities
    - create JSF pages from JPA entities
    - create a new data source for TomEE in the IDE and deploy it to the server
    - let the IDE figure out the components that are already packaged in TomEE, and the fact that (unlike with regular Tomcat) it does not need to package any components such as the JSF implementation, persistence provider, or JAX-RS runtime, so that the resulting WAR file is very small
    - use "deploy on save" with TomEE, so that your development cycle is very fast

    Adam Bien blogged about how he set up TomEE some time ago, here. The official support in NetBeans IDE will be much more tightly integrated, simplifying the steps Adam describes. For example, the IDE does step 2 from Adam's blog for you, i.e., it sets up TomEE deployment roles. Moreover, it knows about all the technologies included in TomEE so that it can optimize the packaging; it knows about TomEE's persistence setup; it can work with TomEE data sources, etc. Below you see a Maven-based Java EE 6 PrimeFaces application (all entities and JSF pages generated from a database) deployed to TomEE in NetBeans IDE. And here's the management console for configuring and fine-tuning TomEE in NetBeans IDE. When I tried out the NetBeans IDE development build and TomEE, to see how everything fits together, I was surprised at how fast TomEE started up. Not sure what they did to it, but it seems like a server on steroids. And setting it up in NetBeans IDE was trivial. Add the simple setup of TomEE in NetBeans IDE to the many benefits that the widely praised out-of-the-box NetBeans Maven tools make possible, together with the fact that not one single plugin had to be installed to get everything you see described here up and running... and you have a really powerful combination of dev tools, all for free.

    Read the article

  • How to Modify a Signature for Use in Plain Text Emails in Outlook 2013

    - by Lori Kaufman
    If you’ve created a signature with an image, links, text formatting, or special characters, the signature will not look the same in plain text formatted emails as it does in HTML format. As the name suggests, Plain Text does not support any type of formatting. For example, if you include an image in your signature, as shown below, the plain text version will be blank. Active links in HTML signatures will be converted to just the text of the link in plain text emails. The How-To Geek link in the image below will become simply How-To Geek and will look like the rest of the text in the signature. The same thing is true in the following example. The active links are stripped from the text. The picture of the envelope that was inserted using the Wingdings font will only display as the plain text character associated with it. There are times you may need to send email in Plain Text format, but still include your signature. You can edit the plain text version of your signature to make it look good in plain text emails by manually editing the text file. To do this, click the File tab. Click Options in the menu list on the left side of the Account Information screen. On the Outlook Options dialog box, click Mail in the list of options on the left side of the dialog box. In the Compose messages section, press and hold the Ctrl key and click the Signatures button. This opens the Signatures folder containing the files used to insert signatures into emails. The .txt file version of each signature is used when inserting a signature into a plain text email. Double-click on a .txt file for the signature you want to edit to open it in Notepad, or your default text editor. Notice that the links on “How-To Geek” and “Email me” are gone and the envelope typed using the Wingdings font was converted to an “H.” Edit the text file to remove extra characters, replace images, and provide full web and email links. Save the text file. Create a new mail message and select the edited signature, if it’s not the default signature for the current email account. To convert the email to plain text, click the Format Text tab and click Plain Text in the Format section. The Microsoft Outlook Compatibility Checker displays telling you that Formatted text will become plain text. Click Continue. The HTML version of your signature is converted to the plain text version. NOTE: You should make a backup of the .txt signature file you edited, as this file will change again when you change your signature in the Signature Editor.     

    Read the article

  • Alternatives for comparing data from different databases

    - by Alex
    I have two huge tables on separate databases. One of them has the information of all the SMS that passed through the company's servers, while the other one has the information of the actual billing of those SMS. My job is to compare samples of both of these tables (for example, the records between 1 and 2 pm) to see if there are any differences: SMS that were sent but not charged to the user, for whatever reason that may be happening. The columns I will be using to compare are the sender's phone number and the exact date the SMS was sent. An issue here is that the dates are usually the same on both sides, but in many cases they differ by 1 or 2 seconds. I have, so far, two alternatives to do this:

    1. (PL/SQL) Create two tables where I'm going to temporarily store all the records of that 1-hour sample, one for each of the main tables. Then, for each distinct phone number, select the time of every SMS sent from that phone from both my temporary tables and start comparing them one by one using cursors. In this case, the procedure would be run on the server where one of the sources is, so the contents of the other one would be looked up using a dblink.

    2. (sqlplus + C++) Instead of storing the 1-hour samples in new tables, output the query to a text file. I will have two text files, one for each source. Then, open the first file and load all of its content into a hash_map (key-value) using C++, where the key will be the phone number and the value a list of times of SMS sent from that phone. Finally, open the second file, grab each line (in this format: numberX timeX), look for numberX's entry in the hash_map (which will be a list of times) and then check if timeX is in that list. If it isn't, save it somewhere to finally store it in an "uncharged" table (this would also be the final step in case 1). A rough sketch of this idea is shown below.

    My main concern is efficiency. These samples have about 2 million records on each source, so just grabbing one record on one side and looking it up on the other would not be possible. That's the reason I wanted to use hash_maps. Which do you think is a better option?
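
    To make alternative 2 concrete, here is a minimal C++ sketch of the hash_map idea, allowing for the 1-2 second clock skew between the two systems. The file names, the "number epoch-time" line format and the 2-second tolerance are assumptions for illustration, not details taken from the actual tables.

        #include <cstdlib>
        #include <fstream>
        #include <string>
        #include <unordered_map>
        #include <vector>

        int main() {
            const long tolerance = 2;  // assumed allowable skew between the two systems, in seconds

            // Load the billing extract into a hash map: phone number -> list of epoch times.
            std::unordered_map<std::string, std::vector<long>> billed;
            std::ifstream billedFile("billed.txt");   // assumed format per line: "<number> <epoch_seconds>"
            std::string number;
            long t;
            while (billedFile >> number >> t)
                billed[number].push_back(t);

            // Probe the map with the "sent" extract; anything without a billed match
            // within the tolerance goes to the uncharged list.
            std::ifstream sentFile("sent.txt");
            std::ofstream uncharged("uncharged.txt");
            while (sentFile >> number >> t) {
                bool matched = false;
                auto it = billed.find(number);
                if (it != billed.end()) {
                    for (long billedTime : it->second) {
                        if (std::labs(billedTime - t) <= tolerance) { matched = true; break; }
                    }
                }
                if (!matched)
                    uncharged << number << ' ' << t << '\n';
            }
            return 0;
        }

    With roughly 2 million rows per side this stays close to linear: each probe only touches the handful of messages sent from a single number inside the sample window, and sorting each vector would let the inner scan become a binary search if that ever matters.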

    Read the article

  • Create Outlook Appointments from PowerShell

    - by BuckWoody
    I've been toying around with a script to create a special set of calendar objects in Outlook that show when my SQL Server Agent Jobs are scheduled to run. I haven't finished yet, but I thought I would share the part that creates the Outlook Appointments. I have yet to fill a variable with the start and end times, and then loop through that to create the appointments. I'm thinking I'll make the script below into a function, and feed it those variables in a loop. The script below creates a whole new Calendar Folder in Outlook called "SQL Server Agent Jobs". I also use categories quite a bit, so you'll see that too. Caution: If you plan to play with this script, do it on an isolated workstation, not on your "regular" Outlook calendar. Otherwise, you'll have lots of appointments in there that you don't care about!

        # Add a new calendar item to a new Outlook folder called "SQL Server Agent Jobs"
        $outlook = new-object -com Outlook.Application
        $calendar = $outlook.Session.folders.Item(1).Folders.Item("SQL Server Agent Jobs")
        $appt = $calendar.Items.Add(1) # == olAppointmentItem
        $appt.Start = [datetime]"03/11/2010 11:00"
        $appt.End = [datetime]"03/11/2010 12:00"
        $appt.Subject = "JobName"
        $appt.Location = "ServerName"
        $appt.Body = "Job Details"
        $appt.Categories = "SQL server Agent Job"
        $appt.Save()

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

    Read the article

  • Principles of an extensible data proxy

    - by Wesley
    There is a growing industry now with more than 30 companies playing in the Backend-As-A-Service (BaaS) market. The principle is simple: give companies a secure way of exposing data housed on premises and behind the firewall publicly. This can include database data, as well as Legacy PC data through established connectors; SAP for example provides a connector for transacting with their legacy systems. Early attempts were fixed providers for specific systems like SAP, IBM or Oracle, but the new breed is extensible, allowing Channel Partners and Consultants to build robust integration applications that can consume whatever data sources the client wants to expose. I just happen to be close to finishing a Cloud Based HTML5 application platform that provides robust integration services, and I would like to break ground on an extensible data proxy to complete the system. From what I can gather, I need to provide either an installable web service of some kind, or a Cloud service which the client can configure with VPN for interactions. Then I can build in connectors, which can be activated with a service account, and expose those transactions via web services of some kind (JSON, SOAP, etc). I can also provide a framework that allows people to build in their own connectors, and use some kind of schema to hook those connectors into the proxy. The end result is some kind of public facing web service that could securely be consumed by applications to show data through HTML5 on any device. My gut is, this isn't as hard as it sounds. Almost all of the 30+ companies (With more popping up almost weekly) have all come into existence in the last 18 months or so, which tells me either the root technology, or the skillset to create the technology is in abundance right now. Where should I start on this? Are there some open source projects I can leverage? A specific group of developers I can hire? I'm confident someone here can set me on the right path and save me some time. You don't see this many companies spring up this rapidly if they are all starting from scratch with proprietary technology. The Register: WTF is BaaS One Minute Video from Kony on their BaaS

    Read the article

  • x axis detection issues platformer starter kit

    - by dbomb101
    I've come across a problem with the collision detection code in the platformer starter kit for XNA. It will send up the impassable flag on the x axis despite being nowhere near a wall in either direction on the x axis. Could someone tell me why this happens? Here is the collision method.

        /// <summary>
        /// Detects and resolves all collisions between the player and his neighboring
        /// tiles. When a collision is detected, the player is pushed away along one
        /// axis to prevent overlapping. There is some special logic for the Y axis to
        /// handle platforms which behave differently depending on direction of movement.
        /// </summary>
        private void HandleCollisions()
        {
            // Get the player's bounding rectangle and find neighboring tiles.
            Rectangle bounds = BoundingRectangle;
            int leftTile = (int)Math.Floor((float)bounds.Left / Tile.Width);
            int rightTile = (int)Math.Ceiling(((float)bounds.Right / Tile.Width)) - 1;
            int topTile = (int)Math.Floor((float)bounds.Top / Tile.Height);
            int bottomTile = (int)Math.Ceiling(((float)bounds.Bottom / Tile.Height)) - 1;

            // Reset flag to search for ground collision.
            isOnGround = false;

            // For each potentially colliding tile,
            for (int y = topTile; y <= bottomTile; ++y)
            {
                for (int x = leftTile; x <= rightTile; ++x)
                {
                    // If this tile is collidable,
                    TileCollision collision = Level.GetCollision(x, y);
                    if (collision != TileCollision.Passable)
                    {
                        // Determine collision depth (with direction) and magnitude.
                        Rectangle tileBounds = Level.GetBounds(x, y);
                        Vector2 depth = RectangleExtensions.GetIntersectionDepth(bounds, tileBounds);
                        if (depth != Vector2.Zero)
                        {
                            float absDepthX = Math.Abs(depth.X);
                            float absDepthY = Math.Abs(depth.Y);

                            // Resolve the collision along the shallow axis.
                            if (absDepthY < absDepthX || collision == TileCollision.Platform)
                            {
                                // If we crossed the top of a tile, we are on the ground.
                                if (previousBottom <= tileBounds.Top)
                                    isOnGround = true;

                                // Ignore platforms, unless we are on the ground.
                                if (collision == TileCollision.Impassable || IsOnGround)
                                {
                                    // Resolve the collision along the Y axis.
                                    Position = new Vector2(Position.X, Position.Y + depth.Y);

                                    // Perform further collisions with the new bounds.
                                    bounds = BoundingRectangle;
                                }
                            }
                            // This is the section which deals with collision on the x-axis
                            else if (collision == TileCollision.Impassable) // Ignore platforms.
                            {
                                // Resolve the collision along the X axis.
                                Position = new Vector2(Position.X + depth.X, Position.Y);

                                // Perform further collisions with the new bounds.
                                bounds = BoundingRectangle;
                            }
                        }
                    }
                }
            }

            // Save the new bounds bottom.
            previousBottom = bounds.Bottom;
        }
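
    For reference, the branch above is driven entirely by RectangleExtensions.GetIntersectionDepth, which the excerpt references but does not show. Below is a minimal sketch of what such a helper computes: the signed overlap between two axis-aligned rectangles on each axis, or zero when they do not intersect. It is written in C++ purely for illustration, and the struct layout and names are assumptions rather than the starter kit's actual types.

        #include <cmath>
        #include <cstdio>

        struct Vec2 { float x, y; };
        struct Rect { float left, top, width, height; };

        // Signed penetration depth between two axis-aligned rectangles,
        // or (0, 0) when they do not overlap at all.
        Vec2 GetIntersectionDepth(const Rect& a, const Rect& b)
        {
            // Half sizes and centres of both rectangles.
            float halfWidthA = a.width / 2.0f, halfHeightA = a.height / 2.0f;
            float halfWidthB = b.width / 2.0f, halfHeightB = b.height / 2.0f;
            float centerAx = a.left + halfWidthA, centerAy = a.top + halfHeightA;
            float centerBx = b.left + halfWidthB, centerBy = b.top + halfHeightB;

            // Current and minimum non-intersecting distances between centres.
            float distanceX = centerAx - centerBx;
            float distanceY = centerAy - centerBy;
            float minDistanceX = halfWidthA + halfWidthB;
            float minDistanceY = halfHeightA + halfHeightB;

            // If we are not intersecting at all, return (0, 0).
            if (std::fabs(distanceX) >= minDistanceX || std::fabs(distanceY) >= minDistanceY)
                return Vec2{0.0f, 0.0f};

            // Otherwise the depth is how far the rectangles overlap on each axis,
            // signed with the direction the first rectangle should be pushed out.
            Vec2 depth;
            depth.x = distanceX > 0 ? minDistanceX - distanceX : -minDistanceX - distanceX;
            depth.y = distanceY > 0 ? minDistanceY - distanceY : -minDistanceY - distanceY;
            return depth;
        }

        int main()
        {
            // A 32x32 player clipping the top-left corner of a 32x32 tile.
            Rect player{0.0f, 0.0f, 32.0f, 32.0f};
            Rect tile{28.0f, 28.0f, 32.0f, 32.0f};
            Vec2 d = GetIntersectionDepth(player, tile);
            std::printf("depth = (%f, %f)\n", d.x, d.y);  // both components are non-zero
            return 0;
        }

    The point to notice is that any overlap at all yields a non-zero X component, so the "else if (collision == TileCollision.Impassable)" branch in the posted method can run whenever the horizontal penetration is not the shallower one, even when no wall is visually nearby.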

    Read the article

  • Export all SSIS packages from msdb using Powershell

    - by jamiet
    Have you ever wanted to dump all the SSIS packages stored in msdb out to files? Of course you have, who wouldn’t? Right? Well, at least one person does because this was the subject of a thread (save all ssis packages to file) on the SSIS forum earlier today. Some of you may have already figured out a way of doing this but for those that haven’t here is a nifty little script that will do it for you and it uses our favourite jack-of-all tools … Powershell!!

    Imagine I have the following package folder structure on my Integration Services server (i.e. in [msdb]): There are two packages in there called “20110111 Chaining Expression components” & “Package”, I want to export those two packages into a folder structure that mirrors that in [msdb]. Here is the Powershell script that will do that:

        Param($SQLInstance = "localhost")

        #####Add all the SQL goodies (including Invoke-Sqlcmd)#####
        add-pssnapin sqlserverprovidersnapin100 -ErrorAction SilentlyContinue
        add-pssnapin sqlservercmdletsnapin100 -ErrorAction SilentlyContinue
        cls

        $Packages = Invoke-Sqlcmd -MaxCharLength 10000000 -ServerInstance $SQLInstance -Query "
        WITH cte AS (
            SELECT cast(foldername as varchar(max)) as folderpath, folderid
            FROM msdb..sysssispackagefolders
            WHERE parentfolderid = '00000000-0000-0000-0000-000000000000'
            UNION ALL
            SELECT cast(c.folderpath + '\' + f.foldername as varchar(max)), f.folderid
            FROM msdb..sysssispackagefolders f
            INNER JOIN cte c ON c.folderid = f.parentfolderid
        )
        SELECT c.folderpath, p.name, CAST(CAST(packagedata AS VARBINARY(MAX)) AS VARCHAR(MAX)) as pkg
        FROM cte c
        INNER JOIN msdb..sysssispackages p ON c.folderid = p.folderid
        WHERE c.folderpath NOT LIKE 'Data Collector%'"

        Foreach ($pkg in $Packages)
        {
            $pkgName = $Pkg.name
            $folderPath = $Pkg.folderpath
            $fullfolderPath = "c:\temp\$folderPath\"
            if(!(test-path -path $fullfolderPath))
            {
                mkdir $fullfolderPath | Out-Null
            }
            $pkg.pkg | Out-File -Force -encoding ascii -FilePath "$fullfolderPath\$pkgName.dtsx"
        }

    To run it simply change the “localhost” parameter of the server you want to connect to either by editing the script or passing it in when the script is executed. It will create the folder structure in C:\Temp (which you can also easily change if you so wish – just edit the script accordingly). Here’s the folder structure that it created for me: Notice how it is a mirror of the folder structure in [msdb]. Hope this is useful! @Jamiet

    Read the article

  • Is there any good hosting for asp.net and MySQL

    - by HAJJAJ
    Hi everyone, I have an account with one of the hosting companies, and I did my project in ASP.NET with MySQL for the database. The hosting company is not giving me the full privileges to create a new user or to create new stored procedures. This is what they said to me:

    Due to the shared nature of our environment we had to make some modifications to your procedure (namely the definer). We also had to review your procedure to determine if it would be compatible with our environment. While your procedures will work (via phpMyAdmin or some other interface), it is unlikely they will be accessible via the Connector/.NET (ADO.NET) that your application is likely using. This is due to a security restriction with how that connector works in shared environments. http://dev.mysql.com/doc/refman/5.0/en/connector-net-programming-stored.html "Note When you call a stored procedure, the command object makes an additional SELECT call to determine the parameters of the stored procedure. You must ensure that the user calling the procedure has the SELECT privilege on the mysql.proc table to enable them to verify the parameters. Failure to do this will result in an error when calling the procedure." Unfortunately, giving read privileges on the mysql.proc table will give you access to the data of our other customers and that is not an acceptable risk. If your application can only work using stored procedures, then MSSQL will probably be the better option for your site. I apologize for the inconvenience and the wait to have this ticket completed.

    So is there any good hosting that anybody has already used to publish an ASP.NET and MySQL project? This is one of my stored procedures, and I think it's simple and will not harm any other users:

        -- --------------------------------------------------------------------------------
        -- Routine DDL
        -- Note: comments before and after the routine body will not be stored by the server
        -- --------------------------------------------------------------------------------
        DELIMITER $$

        CREATE DEFINER=`root`@`localhost` PROCEDURE `SpcategoriesRead`(
            IN PaRactioncode VARCHAR(5),
            IN PaRCatID BIGINT,
            IN PaRSearchText TEXT
        )
        BEGIN
            -- CREATING TEMPORARY TABLE TO SAVE DATA FROM THE ACTIONCODE SELECTS --
            DROP TEMPORARY TABLE IF EXISTS tmp;
            CREATE TEMPORARY TABLE tmp (
                CatID BIGINT PRIMARY KEY NOT NULL,
                CatTitle TEXT,
                CatDescription TEXT,
                CatTitleAr TEXT,
                CatDescriptionAr TEXT,
                PictureID BIGINT,
                Published BOOLEAN,
                DisplayOrder BIGINT,
                CreatedOn DATE
            );

            IF PaRactioncode = 1 THEN
                -- Retrieve all data from the database --
                INSERT INTO tmp
                SELECT CatID,CatTitle,CatDescription,CatTitleAr,CatDescriptionAr,PictureID,Published,DisplayOrder,CreatedOn
                FROM tbcategories;
            ELSEIF PaRactioncode = 2 THEN
                -- Retrieve all from the database by ID --
                INSERT INTO tmp
                SELECT CatID,CatTitle,CatDescription,CatTitleAr,CatDescriptionAr,PictureID,Published,DisplayOrder,CreatedOn
                FROM tbcategories
                WHERE CatID=PaRCatID;
            ELSEIF PaRactioncode = 3 THEN
                -- NOT SET YET --
                INSERT INTO tmp
                SELECT CatID,CatTitle,CatDescription,CatTitleAr,CatDescriptionAr,PictureID,Published,DisplayOrder,CreatedOn
                FROM tbcategories
                WHERE Published=1
                ORDER BY DisplayOrder;
            END IF;

            IF PaRSearchText IS NOT NULL THEN
                SET PaRSearchText=CONCAT('%', PaRSearchText ,'%');
                SELECT CatID,CatTitle,CatDescription,CatTitleAr,CatDescriptionAr,PictureID,Published,DisplayOrder,CreatedOn
                FROM tmp
                WHERE CONCAT(CatTitle, CatDescription, CatTitleAr, CatDescriptionAr) LIKE PaRSearchText;
            ELSE
                SELECT CatID,CatTitle,CatDescription,CatTitleAr,CatDescriptionAr,PictureID,Published,DisplayOrder,CreatedOn
                FROM tmp;
            END IF;

            DROP TEMPORARY TABLE IF EXISTS tmp;
        END $$

    Read the article

  • OSB 11g & SAP – Single Channel/Program ID for Multiple IDOCs

    - by Shub Lahiri, A-Team
    Background

    This note is a supplement to the blog entry, SOA 11g & SAP – Single Channel/Program ID for Multiple IDOCs by Greg Mally. Greg has shown how a single SOA Suite composite can be used with iWay Adapters to receive multiple IDOC types via a single channel in the adapter, corresponding to a single programID on the SAP system. We will try to address the same requirements within the OSB framework here.

    Project Built - Design Time

    The basic build of an OSB project with iWay SAP Adapter, as seen in another entry in this blog, consists of working in OSB Design console and Application Explorer.

    OSB Design Time - Part 1

    We will create a placeholder project first in OSB with a proper directory structure, so that we can export the WSDL, XSD and the JCA binding information from Application Explorer directly into this project.

    Application Explorer - iWay Design Time Tool

    Receiving IDOCs is classified as an inbound event within Application Explorer. For setting up events, a channel is first defined (e.g. iDoc_Channel) using the same PROGRAMID (RFC destination), as defined within SAP for the OSB server. Next, the same channel is used to export the JCA Inbound Event artifacts for the candidate IDOC, e.g. DEBMAS06, directly to the pre-created OSB project. Note that the validation for schema has been turned off. As a result, this will allow the adapter, at runtime, to use a single channel to receive multiple IDOC types from SAP and pass them on to the OSB runtime engine without any validation. In other words, we do not have to repeat the above step for each IDOC type.

    OSB Design Time - Part 2

    Create 2 simple XML based Business Services to write to a file, e.g. SAP_DEBMAS_File and SAP_MATMAS_File. Next, generate a Proxy Service using the JCA binding file exported from Application Explorer in the previous section. In the generated proxy service, edit the message flow and add a route node. Add a routing table in the route node with the following routing function.

        fn:local-name-from-QName(fn:node-name($body/*[1]))

    This function takes advantage of the fact that the XML payload at runtime, after translation by the adapter, has the IDOC type as the top element. With the routing function in place, build the routing table to add 2 branches to route the IDOCs to the appropriate Business Service for writing the XML payload to files in separate directories. This completes the build of the OSB project.

    Testing - Run-Time

    After deployment and activation, the SAP adapter will wait to receive multiple types of IDOCs sent from the SAP system using a single channel. Upon receipt of the IDOCs, the OSB project will route them appropriately to save the corresponding XML payloads for different IDOC types in different directories.

    Read the article

  • CIO's Corner: Achieving a Balance

    - by Michelle Kimihira
    Author: Rick Beers Senior Director, Product Management, Oracle Fusion Middleware All too often, a CIO is unfairly characterized as either technology-focused or business-focused; as more concerned with either infrastructure performance or business excellence. It seems to me that this completely misses the point. I have long thought that a CIO has probably the most complex C-level position in an enterprise, one that requires an artful balance among four entirely different constituencies, often with competing values and needs. How a CIO balances these is the single largest determinant of success. I was reminded of this while reading the excellent interview of Mark Hurd by CNBC’s Maria Bartiromo in a recent issue of USATODAY (Bartiromo: Oracle's Hurd is in tech sweet spot). The interview covers topics such as Big Data, Leadership and Oracle’s growth strategy. But the topic that really got my interest, and reminded me of the need for balance, was on IT spending trends, in which Mark Hurd observed, “…budgets are tight. What most of our customers have today is both an austerity plan to save money and at the same time a plan to reapply that money to innovation. There isn't a customer we have that doesn't have an austerity plan and an innovation plan.” In an era of economic uncertainty, and an accelerating pace of business change, this is probably the toughest balance a CIO must achieve. Yet for far too many IT organizations, operating costs consume over 75% of their budgets, leaving precious little for innovation and investment in business-critical technology programs. I have found that many CIO’s are trapped by their enterprise systems platforms, which were originally architected for Standardization, Compliance and tightly integrated linear Workflows. Yes, these traits are still required for specific reasons and cannot be compromised. But they are no longer enough. New demands are emerging: the explosion in the volume and diversity of Data, the Consumerization of IT, the rise of Social Media, and the need for continual Business Process Reengineering. These were simply not the design criteria for Enterprise 1.0 and attempting to leverage them with current systems platforms results in an escalation in complexity and a resulting increase in operating costs for many IT organizations. This is the cost vs investment trap and what most constrains CIO’s from achieving the balance they need. But there is a way out of this trap. Enterprise 2.0 represents an entirely new enterprise systems architecture, one that is ‘Business-Centric’ rather than ‘ERP Centric’, which defined the architecture of Enterprise 1.0. Oracle’s best in class suite of Fusion Middleware Products enables a layered approach to enterprise systems architectures that provides the balance that an enterprise needs. The most exciting part of all this? The bottom two layers are focused upon reducing costs and the upper two layers provide business value and innovation. Finally, the Balance a CIO needs.  Additional Information Product Information on Oracle.com: Oracle Fusion Middleware Follow us on Twitter and Facebook Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Choose Custom New Tab Pages in Chrome

    - by Asian Angel
    For most people the default New Tab Page in Chrome works perfectly well for their purposes. But if you would prefer to choose what opens in a new tab for yourself then you will definitely want to have a look at the “Define your own new tab!” extension for Google Chrome.

    Before

    Unless you are using a Speed Dial (or similar) extension, each time you click on the “New Tab Button” you get the same old page. It would certainly be a lot more satisfying if you could choose custom webpage(s) to open as new tabs.

    After

    Once you have the extension installed the best thing to do is click on the “New Tab Button”. That will open up the “Options Page” where you can enter one or two custom website URLs of your choosing. Once you have your custom URLs entered click on “Save”. As soon as you click on the “New Tab Button” your new custom webpage(s) will open. If you chose two webpages the first choice will open focused on the “right side” instead of the “left”. Clicking on the “Home Button” will also open the webpage(s) that you chose. The webpage(s) that you chose will also open as your starting “Home Pages” each time that you start your browser.

    Conclusion

    If you have wanted to choose your own custom “New Tab Page” then this is the extension that you have been waiting for.

    Links

    Download the Define your own new tab! extension (Google Chrome Extensions)

    Read the article

  • New Slides - and a discussion about Dictionary Statistics

    - by Mike Dietrich
    First of all, we have just uploaded a new version of the Upgrade and Migration Workshop slides with some added information, so please feel free to download them from here. The slides contain one new interesting piece of information which led to a discussion I've had in the past days with a very large customer regarding their upgrades - and internally on the mailing list targeting an EBS database upgrade from Oracle 10.2 to Oracle 11.2.

    Why are we creating dictionary statistics during upgrade? I believe this forced dictionary statistics creation got introduced with the desupport of the Rule Based Optimizer in Oracle 10g. The goal: as the RBO is not supported anymore, we have to make sure that the data dictionary has fresh, non-stale statistics. In Oracle 9i that would actually have led to strange behaviour in some databases - so in Oracle 9i this was strongly discouraged. The upgrade scripts got hardcoded to create these stats. But during tests we had the following findings:

    It's important to create dictionary statistics the night before the upgrade. Not two weeks before, not 60 minutes before your downtime begins, but very close to the upgrade. From Oracle 10g onwards you'd just say:

        $ execute DBMS_STATS.GATHER_DICTIONARY_STATS;

    This is important to make sure you have fresh dictionary statistics during the upgrade, for performance reasons. Tests have shown that running an upgrade without valid dictionary statistics might slow down the whole upgrade by a factor of 2x-3x. And it would also be a great idea post upgrade to create fresh dictionary statistics again if you did suppress the stats creation during the upgrade process. Suppress? Yes, you could set this underscore parameter in the init.ora to suppress the forced dictionary statistics collection during an upgrade:

        _optim_dict_stats_at_db_cr_upg=FALSE

    We strongly believe that people should (a) use the default statistics creation process, which will create dictionary statistics by default, and (b) create fresh stats on the dictionary before the upgrade. Therefore we consider it safe, once you have followed our advice, to use the underscore parameter during the upgrade. And we've taken out that forced statistics collection during upgrade in the next release of the database.

    Please note: If you are using the DBUA for the upgrade it will remove underscore parameters for the upgrade run to improve performance - which is generally a good idea. So you'll have to start the DBUA with this call:

        $ dbua -initParam "_optim_dict_stats_at_db_cr_upg"=FALSE

    -Mike

    Read the article

  • Stencil buffer appears to not be decrementing values correctly

    - by Alex Ames
    I'm attempting to use the stencil buffer as a clipper for my UI system, but I'm having trouble debugging a problem I'm running into. This is what I'm doing: A widget can pass a rectangle to the stencil clipper functions, which will increment the stencil buffer values that it covers. Then it will draw its children, which will only get drawn in the stencilled area (so that if they extend outside they'll be clipped). After a widget is done drawing its children, it pops that rectangle from the stack and in the process decrements the values in the stencil buffer that it has previously incremented. The slightly simplified code is below:

        static void drawStencil(Rect& rect, unsigned int ref)
        {
            // Save previous values of the color and depth masks
            GLboolean colorMask[4];
            GLboolean depthMask;
            glGetBooleanv(GL_COLOR_WRITEMASK, colorMask);
            glGetBooleanv(GL_DEPTH_WRITEMASK, &depthMask);

            // Turn off drawing
            glColorMask(0, 0, 0, 0);
            glDepthMask(0);

            // Draw vertices here
            ...

            // Turn everything back on
            glColorMask(colorMask[0], colorMask[1], colorMask[2], colorMask[3]);
            glDepthMask(depthMask);

            // Only render pixels in areas where the stencil buffer value == ref
            glStencilFunc(GL_EQUAL, ref, 0xFF);
            glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        }

        void pushScissor(Rect rect)
        {
            // increment things only at the current stencil stack level
            glStencilFunc(GL_EQUAL, s_scissorStack.size(), 0xFF);
            glStencilOp(GL_KEEP, GL_INCR, GL_INCR);

            s_scissorStack.push_back(rect);
            drawStencil(rect, s_scissorStack.size());
        }

        void popScissor()
        {
            // undo what was done in the previous push,
            // decrement things only at the current stencil stack level
            glStencilFunc(GL_EQUAL, s_scissorStack.size(), 0xFF);
            glStencilOp(GL_KEEP, GL_DECR, GL_DECR);

            Rect rect = s_scissorStack.back();
            s_scissorStack.pop_back();
            drawStencil(rect, s_scissorStack.size());
        }

    And this is how it's being used by the widgets:

        if (m_clip)
            pushScissor(m_rect);

        drawInternal(target, states);

        for (auto child : m_children)
            target.draw(*child, states);

        if (m_clip)
            popScissor();

    This is the result of the above code: There are two things on the screen, a giant test button, and a window with some buttons and text areas on it. The text area scroll box is set to clip its children (so that the text doesn't extend outside the scroll box). The button is drawn after the window and should be on top of it completely. However, for some reason the text area is appearing on top of the button. The only reason I can think of that this would happen is if the stencil values were not getting decremented in the pop, and when it comes time to render the button, since those pixels don't have the right stencil value it doesn't draw over. But I can't figure out what's wrong with my code that would cause that to happen.

    Read the article

  • Developing for 2005 using VS2008!

    - by Vincent Grondin
    I joined a fairly large project recently and it has a particularity… Once finished, everything has to be sent to the client under VS2005 using VB.Net and can target either framework 2.0 or 3.0… A long time ago, the decision to use VS2008 and to target framework 3.0 was taken, but people knew they would need to establish a few rules to ensure that each dev would use VS2008 as if it was VS2005… Why is that so? Well, simply because the compiler in VS2005 is different from the compiler inside VS2008… I thought it might be a good idea to note the things that you cannot use in VS2008 if you plan on going back to VS2005. Who knows, this might save someone the headache of going over all their code to fix errors…

    - Do not use LINQ keywords (from, in, select, orderby…).
    - Do not use LINQ standard operators under the form of extension methods.
    - Do not use type inference (in VB.Net you can switch it OFF in each project's properties).
      o This means you cannot use XML Literals.
    - Do not use nullable types under the following declarative form: Dim myInt as Integer? But using Dim myInt as Nullable(Of Integer) is perfectly fine.
    - Do not test nullable types with Is Nothing; use myInt.HasValue instead.
    - Do not use Lambda expressions (there are no Lambda statements in VB9), so you cannot use the keyword “Function”.
    - Pay attention not to use relaxed delegates, because this one is easy to miss in VS2008.
    - Do not use Object Initializers.
    - Do not use the “ternary If operator” … not the IIf method but this one: If(condition, truepart, falsepart).

    As a side note, I talked about not using the LINQ keywords nor the extension methods, but this doesn’t mean not to use LINQ in this scenario. LINQ is perfectly accessible from inside VS2005. All you need to do is reference System.Core, use namespace System.Linq, and use class “Enumerable” as a helper class… This is one of the many classes containing various methods that VS2008 sees as extensions. The trick is you can use them too! Simply remember that the first parameter of the method is the object you want to query on and then pass in the other parameters needed… That’s pretty much all I see but I could have missed a few… If you know other things that are specific to the VS2008 compiler and which do not work under VS2005, feel free to leave a comment and I’ll modify my list accordingly (and notify our team here…)! Happy coding all!

    Read the article

  • Five hours of Task Flow Overview Recordings Available

    - by Frank Nimphius
    In addition to the ADF Controller task flow documentation in Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework 11g Release 1 http://download.oracle.com/docs/cd/E21764_01/web.1111/b31974/partpage3.htm#BABHIIAI The ADF Insider website … http://www.oracle.com/technetwork/developer-tools/adf/learnmore/adfinsider-093342.html … hosts five online videos that explain how to build and work with ADF Controller task flows in Oracle ADF. ADF Task Flow - Overview (Part 1) This 90 minute recording introduces the concept of ADF unbounded and bounded task flows, as well as other ADF Controller features. The session starts with an overview of unbounded task flows, bounded task flows and the different activities that exist for developers to build complex application flows. Exception handling and the Train navigation model is also covered in this first part of a two part series. By example of developing a sample application, the recording guides viewers through building unbounded and bounded task flows. This session is continued in a second part. http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/taskflow-overview-p1/taskflow-overview-p1.html ADF Task Flow - Overview (Part 2) This 75 minute session continues where part 1 ended and completes the sample application that guides viewers through different aspects of unbounded and bounded task flow development. In this recording, memory scopes, save for later, task flow opening in dialogs and remote task flow calls are explained and demonstrated. If you are new to ADF Task Flow, then it is recommended to first watch part 1 of this series to be able to follow the explanation guided by the sample application. http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/taskflow-overview-p2/taskflow-overview-p2.html ADF Region Interaction - An Overview This session covers most of the options that exist for communicating between regions. It briefly discusses what it takes to build regions from bounded task flows before going into details using slides and samples. The following interaction is explained: contextual events, queue action in region, input parameters and PPR, drag and drop, shared Data Controls, parent action and region navigation listener. http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/adf-region-interaction/adf-region-interaction.html ADF Region Interaction - Contextual Events Contextual event is used as a communication channel between a parent view and its contained regions, as well as between regions. By example, this session explains how to set up contextual events, how to define producers and event listeners and how to define the payload message. http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/AdfInsiderContextualEvents/AdfInsiderContextualEvents.html

    Read the article

  • Fraud and Anomaly Detection using Oracle Data Mining YouTube-like Video

    - by chberger
    I've created and recorded another YouTube-like presentation and "live" demos of Oracle Advanced Analytics Option, this time focusing on Fraud and Anomaly Detection using Oracle Data Mining.  [Note:  It is a large MP4 file that will open and play in place.  The sound quality is weak so you may need to turn up the volume.] Data is your most valuable asset. It represents the entire history of your organization and its interactions with your customers.  Predictive analytics leverages data to discover patterns, relationships and to help you even make informed predictions.   Oracle Data Mining (ODM) automatically discovers relationships hidden in data.  Predictive models and insights discovered with ODM address business problems such as:  predicting customer behavior, detecting fraud, analyzing market baskets, profiling and loyalty.  Oracle Data Mining, part of the Oracle Advanced Analytics (OAA) Option to the Oracle Database EE, embeds 12 high performance data mining algorithms in the SQL kernel of the Oracle Database. This eliminates data movement, delivers scalability and maintains security.  But, how do you find these very important needles or possibly fraudulent transactions and huge haystacks of data? Oracle Data Mining’s 1 Class Support Vector Machine algorithm is specifically designed to identify rare or anomalous records.  Oracle Data Mining's 1-Class SVM anomaly detection algorithm trains on what it believes to be considered “normal” records, build a descriptive and predictive model which can then be used to flags records that, on a multi-dimensional basis, appear to not fit in--or be different.  Combined with clustering techniques to sort transactions into more homogeneous sub-populations for more focused anomaly detection analysis and Oracle Business Intelligence, Enterprise Applications and/or real-time environments to "deploy" fraud detection, Oracle Data Mining delivers a powerful advanced analytical platform for solving important problems.  With OAA/ODM you can find suspicious expense report submissions, flag non-compliant tax submissions, fight fraud in healthcare claims and save huge amounts of money in fraudulent claims  and abuse.   This presentation and several brief demos will show Oracle Data Mining's fraud and anomaly detection capabilities.  

    Read the article

  • Are You an IT Geek? Why Not Write for How-To Geek?

    - by The Geek
    Are you a geek in the IT field that wants to share your skills with the world? We’re looking for an experienced Sysadmin / IT Admin / Webmaster geek with writing skills that wants to join our team on a part-time basis. Please apply if you have the following qualities:

    - You must be a geek at heart, willing to try and make the boring world of IT sound glamorous and sexy. If that’s not possible, at least be willing to share your wisdom and skills to help other IT geeks save time and become better at what they do.
    - You must be able to write articles that are easy to understand. Either Windows or Linux writers are welcome to apply.
    - You must be able to follow our style guide.
    - You must be creative.
    - You must generate ideas for articles on your own, and take suggestions like a pro.
    - You’re ambitious, looking to build your skills and your name, and are prepared to work hard.

    If you aren’t willing to work hard, put some dedication and pride into your work, or aren’t really interested in the topic, this job might not be for you. We’re looking for serious individuals that want to grow with us, and as we grow, you’ll grow as well.

    How To Apply

    If you think this job is a good fit for you, send an email to [email protected] and include some background information about yourself, why you’d be a good fit, some topic areas you are familiar with, and hopefully some examples of your work. Bonus points if you have a ninja costume and a keyboard strapped to your back.

    Read the article

  • How to Overcome or fix MIXED_DML_OPERATION error in Salesforce APEX without future method ?

    - by sathya
    How do you overcome or fix the MIXED_DML_OPERATION error in Salesforce Apex without a future method? MIXED_DML_OPERATION is one of the worst issues we have ever faced :) While trying to perform DML operations on a setup object and a non-setup object in a single action, you will face this error. Following are the solutions I tried, and the final one worked out:

    1. Perform the 1st object's DML in a normal Apex method, then call the 2nd object's DML through a future method. Drawback: you can't get a response from the future method, as its context is different, it executes asynchronously, and it is static.

    2. Tried the following option but it didn't work: perform the DML operation in the normal Apex method, then call the 2nd DML from a trigger, thinking that it would be in a different context. But it didn't work.

    3. Some suggestions were given in some blogs that we could try System.runAs(). Unfortunately that works only for test classes.

    4. Finally achieved it, with a synchronous response, through the following solution:

    a. Created 2 apex:commandButtons:

        <apex:commandButton value="Save and Send Activation Email" action="{!CreateContact}" rerender="junkpanel" oncomplete="callSimulateUserSave()"/>
        <apex:commandButton value="SimulateUserSave" id="SimulateUserSave" action="{!SaveUser}" style="display:none;margin-left:5px;"/>

    Note: oncomplete will not work if you don't have a rerender attribute, so just try refreshing a junk panel. Have a junk panel as well, just for rerendering:

        <apex:outputPanel id="junkpanel"></apex:outputPanel>

    b. Created this JavaScript function, which is called from the first button's oncomplete and clicks the second button:

        function callSimulateUserSave()
        {
            // Specify the id of the button that needs to be clicked. This id is based on the Apex Component Hierarchy.
            // You will not get this value if you don't have the id attribute in the button which needs to be clicked from JavaScript.
            // If you have any doubt in getting this value, just hover over the button using Chrome developer tools to get the id.
            // But it will show like theForm:SimulateUserSave and you need to replace the colon with a dot here.
            // Note: I have given display:none in the style of the second button to make sure that it is not visible to the user.
            var mybtn = document.getElementById('{!$Component.theForm.SimulateUserSave}');
            mybtn.click();
        }

    c. The Apex methods CreateContact and SaveUser are the PageReference methods which contain the code to create the contact and the user respectively. After inserting the user inside the second Apex method you can just set some public properties in the page, for example the created user id, to get the user details and display them in the page to show an acknowledgement to the users that the user is created.

    Read the article

  • Insufficient memory issue during Build Process Customization

    - by jehan
    When customizing the Build Process Template in Workflow designer, I came across OutOfMemoryException errors while performing Save as Image and Copy operations: "Insufficient memory to continue execution of program". There is a fix available on Microsoft Connect which has resolved the issue.

    Read the article

  • Single Responsibility Principle Implementation

    - by Mike S
    In my spare time, I've been designing a CMS in order to learn more about actual software design and architecture, etc. Going through the SOLID principles, I already notice that ideas like "MVC", "DRY", and "KISS" pretty much fall right into place. That said, I'm still having problems deciding if one of two implementations is the best choice when it comes to the Single Responsibility Principle.

    Implementation #1:

        class User
            getName
            getPassword
            getEmail
            // etc...

        class UserManager
            create
            read
            update
            delete

        class Session
            start
            stop

        class Login
            main

        class Logout
            main

        class Register
            main

    The idea behind this implementation is that all user-based actions are separated out into different classes (creating a possible case of the aptly-named Ravioli Code), but following the SRP to a "tee", almost literally. But then I thought that it was a bit much, and came up with this next implementation:

        class UserView extends View
            getLogin      // Returns the html for the login screen
            getShortLogin // Returns the html for an inline login bar
            getLogout     // Returns the html for a logout button
            getRegister   // Returns the html for a register page
            // etc... as needed

        class UserModel extends DataModel implements IDataModel
            // Implements no new methods yet, outside of the interface methods
            // Haven't figured out anything special to go here at the moment
            // All CRUD operations are handled by DataModel
            // through methods implemented by the interface

        class UserControl extends Control implements IControl
            login
            logout
            register
            startSession
            stopSession

        class User extends DataObject
            getName
            getPassword
            getEmail
            // etc...

    This is obviously still very organized, and still very "single responsibility". The User class is a data object that I can manipulate data on and then pass to the UserModel to save it to the database. All the user data rendering (what the user will see) is handled by UserView and its methods, and all the user actions are in one space in UserControl (plus some automated stuff required by the CMS to keep a user logged in or to ensure that they stay out.) I personally can't think of anything wrong with this implementation either. In my personal feelings I feel that both are effectively correct, but I can't decide which one would be easier to maintain and extend as life goes on (despite leaning towards Implementation #1.) So what about you guys? What are your opinions on this? Which one is better? What basics (or otherwise, nuances) of that principle have I missed in either design?

    Read the article

  • Are there any drawbacks to the Major.Minor.YMDD.Build version strategy?

    - by Chu
    I'm trying to come up with a good version strategy to fit our specific needs. We've proposed settling on this and I wanted to ask the question to see if anyone's experience would suggest avoiding this or altering it in any way. Here's our proposal: versions are released in this format: MAJOR.MINOR.YMDD.BN. Here it is broken out:

    - MAJOR & MINOR are typical; we'll increase MINOR when we feel the code and new feature sets warrant it, once every few months most likely. MAJOR will increase ~yearly.
    - YMDD: Y will be the last digit of the current year, so "1" for 2011, "2" for 2012, etc. A non-padded month will be used to keep the number smaller (9 instead of 09, for example). DD of course is the day, padded with a zero for days under 10.
    - BN: BN is the build number and increases by one anytime we make a change to a branch of the code represented by the build.

    For example: if we were to make a build today, our release would be version 5.0.1707.1. I release to QA today and 3 days from now QA finds that a change broke the save functionality on a page. Instead of me changing our current development code, I'd go back to the code that I used to create version 5.0.1707.1, make the fix there, then increase the BN portion of the version and would then re-release 5.0.1707.2 back to QA. In short, anytime a change is made to a branched version that isn't the active dev branch, we'd use the original version number and increase only the BN portion (even if the change happened 3 days, 3 weeks or 3 months from the initial release of that version). Anytime we make a new release from our active dev branch, we'd come up with a new version based on the M/D of the release using the outlined strategy. We do this once every 2-3 weeks. A small sketch of how such a version string could be assembled is shown below. Are there holes or pitfalls with this? If so, what are they? Thanks.

    EDIT: To clarify one point that I didn't get out very well - Oct/Nov/Dec will be two digits, it's only the year that won't be. So 9 for Sept, 10 for Oct, 11 for Nov, etc.
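
    As a sanity check on the format, here is a small C++ sketch that assembles a version string under the proposed scheme; the function name and the choice of inputs are just illustrative assumptions.

        #include <cstdio>
        #include <string>

        // Builds "MAJOR.MINOR.YMDD.BN": last digit of the year, non-padded month,
        // zero-padded day, then the build number for that branch.
        std::string MakeVersion(int major, int minor, int year, int month, int day, int buildNumber)
        {
            char buffer[64];
            std::snprintf(buffer, sizeof(buffer), "%d.%d.%d%d%02d.%d",
                          major, minor, year % 10, month, day, buildNumber);
            return buffer;
        }

        int main()
        {
            // The example from the proposal: a build on 7 July 2011 gives 5.0.1707.1,
            // and a later fix on the same branch only bumps the final component.
            std::printf("%s\n", MakeVersion(5, 0, 2011, 7, 7, 1).c_str());  // 5.0.1707.1
            std::printf("%s\n", MakeVersion(5, 0, 2011, 7, 7, 2).c_str());  // 5.0.1707.2
            return 0;
        }

    Note that October through December naturally come out as two digits (10, 11, 12), matching the clarification in the EDIT, while the year always contributes a single digit.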

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-07

    - by Bob Rhubart
    Oracle Technology Network Architect Day - Boston, MA - 9/12/2012 Sure, you could ask a voodoo priestess for help in improving your solution architecture skills. But there's the whole snake thing, and the zombie thing, and other complications. So why not keep it simple and register for Oracle Technology Network Architect Day in Boston, MA. There's no magic, just a full day of technical sessions covering Cloud, SOA, Engineered Systems, and more. Registration is free. Wednesday September 12, 2012 8:00 a.m. – 5:00 p.m. Boston Marriott Burlington, One Burlington Mall Road, Burlington, MA 01803 Attend OTN Architect Day in Los Angeles – by Architects, for Architects – October 25 The OTN Architect Day roadshow stops in Boston next week, then it's on to Los Angeles for another all architecture, all day event on Thursday October 25, 2012 at the Sofitel Los Angeles, 555 Beverly Boulevard, Los Angeles, CA 90048. Like all Architect Day events, this one is absolutley free, so register now. Webcast: Oracle WebCenter in Action: Hitachi Data Systems Catch this live webcast on Thursday, September 13, 2012 (10am PT / 1pm ET) to learn from speakers from Hitachi Data Systems, LingoTek, and Oracle about how Hitachi used Oracle WebCenter to improve the web experience for its international customers. Article Index: Architect Community Column in Oracle Magazine Did you know that Oracle Magazine features a regular column devoted specifically to the architect community? Every column includes insight and expertise from architects who regularly deal with the issues architects face. Click here to see a complete list of articles. ADF EMG Sunday at OOW 2012 (30. Sep 2012) - A day full of content | Frank Nimphius Frank Nimphius's shares details on Chris Muir's ADF EMG series of sessions during User Group Sunday at OOW, Sept 30, in Moscone West room 305. The Role of Oracle VM Server for SPARC in a Virtualization Strategy New OTN article from Matthias Pfützner. Countdown to Oracle OpenWorld 2012 | Oracle WebCenter Blog A helpful list of OOW sessions focused on Oracle WebCenter. Oracle Exalogic X2-2 walkthrough | Jan van Zoggel "For those of us not lucky enough to have one at home," Jan van Zoggel recommends this "very cool" video featuring "a detailed walkthrough explaining each component of a Oracle Exalogic X2-2 machine," presented by Oracle Exalogic VP Development Brad Cameron. September OTN Member Offers | OTN Blog Save big on books from top tech publishers with these discounts for OTN members. Thought for the Day "Only Robinson Crusoe had everything done by Friday." — Unknown Source: Quote Garden

    Read the article

  • Using jQuery to Make a Field Read Only in SharePoint

    - by Mark Rackley
    Okay… this will be my shortest blog post EVER. Very little rambling.. I promise, and I'm sure this has been blogged more than once, so I apologize for adding to the noise, but like I always say, I blog for myself so I have a global bookmark. So, let's say you have a field on a SharePoint form and you want to make it read only. You COULD just open it up in SPD and easily make it read only, but some people are purists and don't like to use SPD or modify the default new/edit/disp forms. Put me in the latter camp; I try to avoid modifying these forms, and it seemed like such a simple task that I didn't want to create a new un-ghosted form. So.. how do you do it? It's only one line of jQuery. All you need to do is find the id of your input field and capture the keypress on it so that it cannot be modified (you should probably capture clicks for dropdowns/checkboxes/etc., but I didn't need to). Anyway, here's the entire script:

    <script src="jquery.min.js" type="text/javascript"></script>
    <script type="text/javascript">
    jQuery(document).ready(function($){
        // capture keypress on our read-only field and return false
        $('#idOfInputField').keypress(function() {
            return false;
        });
    });
    </script>

    You can find the ID of your input field by viewing the page source; this ID stays consistent as long as you don't muck with the list or form in the wrong way. Please note, you CANNOT disable the input field as an alternative to capturing the keypress. If you do that and save the form, any data in the disabled fields will be wiped out. There are probably a dozen ways to make a field read-only, and if all you are using jQuery for is to make a field read-only, then you might want to question whether the added overhead is worth it (although it's really not that much). Hey.. it's another tool for your tool belt.
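    If you do need to cover dropdowns or checkboxes as well, a similarly small snippet could block mouse-driven changes on them. This is just a sketch: the ID is a placeholder, and how a select responds to a cancelled mousedown can vary by browser.

    <script type="text/javascript">
    jQuery(document).ready(function($){
        // sketch only: block mouse interaction on a dropdown or checkbox,
        // the same way the keypress is blocked on the text field above
        $('#idOfSelectOrCheckbox').bind('mousedown click', function() {
            return false;
        });
    });
    </script>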

    Read the article

  • Becoming the well-integrated content company (and combating AIUTLVFS)

    - by Lance Shaw
    Every single day, each of us creates more and more content. Sometimes it is brand new material and many times it is iterations of existing content, but no one would argue that information and content are growing at an almost exponential rate. With all this content being created and stored, a number of problems naturally arise. One of the most common issues that users run into is "Am I Using The Latest Version of this File Syndrome", or AIUTLVFS. This insidious syndrome is all too common and results in ineffective, poor or downright wrong business decisions being made. When content or files are unavailable or incorrect within the scope of key business processes, the chance for erroneous and costly business decisions is magnified even further. For many companies, the ideal scenario is to be able to connect multiple business systems, both old and new, into one common content repository. Not only does this reduce content duplication, it also helps guarantee that everyone in various departments is working off the proverbial "same page". Sounds simple - but for many organizations, the proliferation of file shares, SharePoint sites, and other storage silos of content keeps the dream of a more efficient business a distant one. We've created some online assets to help you in your evaluation and eventual improvement of your current content management and delivery systems. Take a few minutes to check out our Online Assessment Tool. It's quick and easy, and it just might provide you with insights into how you can improve your current content ecosystem. While you are there, check out our new Infographic that outlines common issues faced by companies today. Feel free to save our informative Infographic PDF and share it with business colleagues and your management to help them understand the business costs and impact of inaction. Together we can stop AIUTLVFS in its tracks and run our businesses more effectively than ever. Additionally, we hope you will take a few minutes to visit our new and informative webpages dedicated to the value of a well-connected, fully integrated content management system. It's a great place to learn more about how integrating WebCenter Content into your infrastructure can lower your operational costs while boosting process and worker efficiency.

    Read the article

  • ArchBeat Link-o-Rama for July 2, 2013

    - by Bob Rhubart
    One Week To Go: OTN Architect Day: Cloud Computing - July 9, 2013, Redwood Shores, CA. The first OTN Architect Day event of 2013 happens in just one week, on Tuesday July 9 at the Oracle Conference Center in Redwood Shores, CA. Registration is free and you get three sessions by three experts on cloud computing in the real world — plus a panel Q&A for answers to all of your questions. Register now! Oracle Database 12c: Flashback Moving Forward | Lucas Jellema Oracle ACE Director Lucas Jellema's latest of several recent blog posts dealing with various aspects of the recently released Oracle Database 12c. Detroit, Embracing New Auto Technologies, Seeks App Builders This story from the New York Times paints a rosy picture indeed for app developers as the internet of things continues to evolve. Advanced View Criteria Implementation in ADF BC | Andrejus Baranovskis Oracle ACE Director Andrejus Baranovskis' post focuses on advanced declarative View Criteria features. JDeveloper: Showing a Popup when Selecting an af:selectOneRadio | Timo Hahn Oracle ACE Timo Hahn illustrates a use case in which a popup is displayed each time the user clicks on one of the radio buttons of a button group. Can Technology Innovation Save The New York Times? One of the standout keynotes from the recent QCon New York event, this presentation by New York Times Sr. VP/CIO Marc Frons and CTO/VP Rajiv Pant paints a detailed portrait of the complete transformation of an organization -- not just the IT. Enterprise architects will find this particularly interesting. Video: Meet Growing IT Demand for Databases with Private DBaaS Do you understand the difference between traditional database deployment and database as a service? If not, you'll want to check out this video, which includes an overview of Oracle Enterprise Manager's capabilities for rapid deployment of DBaaS. Webcast: Zero-Downtime Migration to Oracle Exadata Using Oracle GoldenGate: A Customer Case Study Presenters Alok Pareek (VP, Product Management/Development, Oracle Data Integration) and John F. Martin (CEO of Emerging Markets and CTO of IQNavigator) discuss how IQNavigator is using Oracle GoldenGate with Oracle Exadata. Free eBook: Building a Database Cloud for Dummies This free quick-reference guide, organized into six short chapters and supplemented with helpful illustrations, provides a clear overview of the cloud and step-by-step instructions on deploying database as a service. (Registration required.) Thought for the Day "My motto is: Live every day to the fullest – in moderation." — Lindsay Lohan (Born July 2, 1986) Source: brainyquote.com

    Read the article
