Search Results

Search found 22000 results on 880 pages for 'worker process'.


  • Do You Develop Your PL/SQL Directly in the Database?

    - by thatjeffsmith
    I know this sounds like a REALLY weird question for many of you. Let me make one thing clear right away though: I am NOT talking about creating and replacing PL/SQL objects directly in a production environment. Do we really need to talk about developers in production again? No, what I am talking about is a developer doing their work from start to finish in a development database. These are generally available to a development team for building the next and greatest version of your databases and database applications. And of course you are using a third-party source control system, right? Last week I was in Tampa, FL presenting at the monthly Suncoast Oracle User’s Group meeting. Had a wonderful time, great questions and back-and-forth. My favorite heckler was there, @oraclenered, AKA Chet Justice. I was in the middle of talking about how it’s better to do your PL/SQL work in the Procedure Editor when Chet pipes up: don’t do it that way, that’s wrong; just press play to edit the PL/SQL directly in the database. Or something along those lines. I didn’t get what the heck he was talking about. I had been showing how the Procedure Editor gives you much better feedback and support when working with PL/SQL. After a few back-and-forths I got to what Chet’s main objection was, and again I’m going to paraphrase: you should develop offline in your SQL worksheet; don’t do anything in the database until it’s done. I didn’t understand. Were developers expected to be able to internalize and mentally model the PL/SQL engine, see where their errors were, etc., in these offline scripts? No, please give Chet more credit than that.
    What is the ideal Oracle Development Environment? If I were back in the ‘real world’ of database development, I would do all of my development outside of the ‘dev’ instance. My development process looks a little something like this: Do I have a program that already does something like this? Copy and paste. Has some smart person already written something like this? Copy and paste. Start typing in the white-screen-of-panic and bungle along until I get something that half-works. Tweak, debug, test until I have fooled my subconscious into thinking that it’s ‘good’. As you might understand, I don’t want my co-workers to see the evolution of my code. It would seriously freak them out and I probably wouldn’t have a job anymore (don’t remind me that I already worked myself out of development).
    So here’s what I like to do: Run a Local Instance of Oracle on My Machine and Develop My Code Privately. I take a copy of development – that’s what source control is for, after all – and run it where no one else can see it. I now get to be my own DBA. If I need a trace – no problem. If I want to run an ASH report, no worries. If I need to create a directory or run some DataPump jobs, that’s all on me. Now when I get my code ‘up to snuff,’ I will check it into source control and compile it into the official development instance. So my teammates suddenly go from seeing no program to a mostly complete program. Is this right? If not, it doesn’t seem wrong to me. And after talking to Chet in the car on the way to the local cigar bar, it seems that he’s of the same opinion.
    So what’s so wrong with coding directly into a development instance? I think ‘wrong’ is a bit strong here. But there are a few pitfalls that you might want to look out for. A few come to mind – and I’m sure Chet could add many more as my memory fails me at the moment. But here goes: the development instance isn’t properly backed up – would hate to lose that work; development is wiped once a week and copied over from Prod – don’t laugh; someone clobbers your code; you accidentally-on-purpose clobber someone else’s code; and the more developers you have in a single fish pond, the greater the chance something ‘bad’ will happen.
    This Isn’t One of Those Posts Where I Tell You What You Should Be Doing. I realize many shops won’t be open to allowing developers to stage their own local copies of Oracle. But I would at least be aware that many of your developers are probably doing this anyway – with or without your tacit approval. SQL Developer can do local file tracking, but you should be using source control too! I will say that I think it’s imperative that you control your source code outside the database, even if your development team consists of a single developer. Store your source code in a file, and control that file in something like Subversion. You would be shocked at the number of teams that do not use a source control system. I know I continue to be shocked no matter how many times I meet another team running by the seat of their pants. I’d love to hear how your development process works. And of course I want to know how SQL Developer and the rest of our tools can better support your processes. And one last thing: if you want a fun and interactive presentation experience, be sure to have Chet in the room.

    Read the article

  • MySQL my.cnf as symbolic link in Ubuntu 12.04

    - by Juan Cruz
    I am not able to use a symlink for the my.cnf file (Ubuntu 12.04 server). I added the alias to the /etc/apparmor.d/tunables/alias file (as I did on 10.04, where it worked), but I get:
    May 30 16:00:01 ip-10-242-209-203 kernel: [176926.213403] type=1400 audit(1338393601.350:244): apparmor="DENIED" operation="open" parent=1 profile="/usr/sbin/mysqld" name="/opt/data/my.cnf" pid=18128 comm="mysqld" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
    May 30 16:00:01 ip-10-242-209-203 kernel: [176926.222016] init: mysql main process (18128) terminated with status 1
    May 30 16:00:01 ip-10-242-209-203 kernel: [176926.222084] init: mysql respawning too fast, stopped
    As a workaround I added the line /etc/mysql/my.cnf r, to the /etc/apparmor.d/local/usr.sbin.mysqld file. The default configuration is /etc/mysql/*.cnf r, Is this a bug? Is it an AppArmor bug or a MySQL bug? It seems that configuration has changed since MySQL 5.1 (https://bugs.launchpad.net/ubuntu/+source/mysql-5.1/+bug/619172), but what worked for me before no longer does. Thanks!
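    For reference, a minimal sketch of the two AppArmor pieces being discussed (the /opt/data path comes from the question; the exact lines your setup needs may differ, and AppArmor has to be reloaded afterwards, e.g. with sudo service apparmor reload):

        # /etc/apparmor.d/tunables/alias -- map the packaged path to the real location
        alias /etc/mysql/ -> /opt/data/,

        # /etc/apparmor.d/local/usr.sbin.mysqld -- or grant read access on the real path directly
        /opt/data/my.cnf r,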

    Read the article

  • Determine web page draw time via a program

    - by Kevin Burke
    Google Chrome has a nice tool to determine the time the page begins drawing, in the Network tab in Developer Tools. Similarly, sites like webpagetest.org can tell you the draw time and give you the whole waterfall of page loads for a given web page. I was wondering if I could automate the process of finding the time to first page draw for all of the pages on my site, so I can share this data within my company. Obviously the page draw time will depend on the latency and throughput of your connection, but I'm more concerned with the relative data about pages on our site. Can I get this data from Selenium or another tool? Thanks, Kevin
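    One possible way to automate this (a sketch only, not from the original question): drive each page with Selenium and read the browser's Navigation Timing data via a JavaScript snippet. The URL below is a placeholder, and responseStart is only a rough proxy for when drawing can begin; true first-paint numbers need browser-specific APIs.

        from selenium import webdriver

        driver = webdriver.Firefox()  # any WebDriver-supported browser works
        try:
            # driver.get() blocks until the page's load event has fired
            driver.get("http://www.example.com/")
            # Navigation Timing API: values are epoch milliseconds
            nav_start = driver.execute_script("return window.performance.timing.navigationStart")
            resp_start = driver.execute_script("return window.performance.timing.responseStart")
            load_end = driver.execute_script("return window.performance.timing.loadEventEnd")
            print("first byte: %d ms, full load: %d ms" % (resp_start - nav_start, load_end - nav_start))
        finally:
            driver.quit()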

    Read the article

  • Which GPU with my CPU for 1080p flash?

    - by oshirowanen
    Based on the following site: http://www.adobe.com/products/flashplayer/systemreqs/ I need the following minimum spec to play 1080p Flash video via a browser: CPU: 1.8GHz Intel Core Duo, AMD Athlon 64 X2 4200+, or faster processor RAM: 512MB of RAM GPU: 64MB of graphics memory I only have a 2.8GHz Pentium 4 processor, which is nowhere near as good as the processors listed above. I don't want to upgrade my processor, as I think it will mean I have to change the motherboard etc. So, my question is: what is the cheapest PCI-E GPU I can buy which will allow me to play smooth 1080p Flash video via a browser? I think the cheapest I can get is the 8400GS, but am not sure if that will be able to handle 1080p with the processor I have. I have looked at the GT520 and was wondering if this is the cheapest GPU that will do what I need, or if there is something cheaper which will do 1080p with a 2.8GHz Pentium 4. Or will I have to get something better than a GT520?

    Read the article

  • Month in Geek: January 2011 Edition

    - by Asian Angel
    With the end of the first month of 2011 upon us, it is time to look back at our best and brightest for the month. Join us as we present the ten hottest articles from January for your reading enjoyment.

    Read the article

  • Best practices when creating/modeling databases?

    - by Oscar Mederos
    Hello, at university I learned some steps to model a database: Model the problem using the Extended Entity-Relationship Model. Extract the functional dependencies. Apply some algorithms to normalize the database (3NF or Boyce-Codd). Create the database. I'm studying Computer Science, and since I took that course I've been wondering if I always need to follow those steps when creating a complex database for a specific problem. For example, do PHP / .NET / ... programmers always do that? Or are there tools to simplify that process, maybe using another way to represent the problem instead of the EERM?
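    As a concrete (hypothetical) illustration of steps 2 and 3, suppose the modeling produces one wide relation; the functional dependencies then drive the 3NF decomposition:

        Orders(order_id, customer_id, customer_name, product_id, product_price)
        Dependencies: customer_id -> customer_name, product_id -> product_price
        3NF result:   Orders(order_id, customer_id, product_id)
                      Customers(customer_id, customer_name)
                      Products(product_id, product_price)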

    Read the article

  • Best way of Javascript web development in Netbeans (Hot deployment)

    - by marcelocbf
    I'm beginning JavaScript development, and as a beginner in JavaScript I make a lot of mistakes. The way I'm developing is very counter-productive, because for every mistake I fix I have to shut down GlassFish, re-build the app and re-deploy it. My app is a Java back end with REST services and an HTML, JavaScript and CSS front end. Everything is packed in a .ear file. Right now I'm only working with the front end, but I still have to go through this whole process just to update the files. My question is: is there a better way of doing this? Can somebody tell me how you work in a similar setup for everyday development?
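    One thing worth trying (a hedged suggestion, not from the original question; check asadmin help deploy for the exact options your GlassFish version supports): redeploy the existing archive in place rather than restarting the server, or deploy the application as an exploded directory so edited static files are picked up directly. The path below is hypothetical:

        # redeploys over the existing application without stopping GlassFish
        asadmin deploy --force=true dist/myapp.ear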

    Read the article

  • DIY Glowing Easter Eggs Ripe for After Hours Easter Egg Hunt

    - by Jason Fitzpatrick
    This DIY project mixes up LEDs, plastic Easter eggs, and candy for delicious glow-in-the-dark fun. How do you get from a plain plastic egg to a glowing one? All you need to do is craft some simple LED “throwies” and tuck them inside the eggs. Check out the video above to see the entire process from start to finish. [via Make]

    Read the article

  • The Ins and Outs of Effective Smart Grid Data Management

    - by caroline.yu
    Oracle Utilities and Accenture recently sponsored a one-hour Web cast entitled, "The Ins and Outs of Effective Smart Grid Data Management." Oracle and Accenture created this Web cast to help utilities better understand the types of data collected over smart grid networks and the issues associated with mapping out a coherent information management strategy. The Web cast also addressed important points that utilities must consider with the imminent flood of data that both present and next-generation smart grid components will generate. The three speakers, including Oracle Utilities' Brad Williams, focused on the key factors associated with taking the millions of data points captured in real time and implementing the strategies, frameworks and technologies that enable utilities to process, store, analyze, visualize, integrate, transport and transform data into the information required to deliver targeted business benefits. The Web cast replay is available here. The Web cast slides are available here.

    Read the article

  • Install ubuntu-11.10-desktop-amd64 with Logitech diNovo Edge

    - by MyGGaN
    I know there are problems (and have been for quite some time) with the diNovo Edge keyboard. The solution can usually be found easily, and often you can use the on-screen keyboard to work around it. Now I'm installing Ubuntu from scratch and the only keyboard I have is my diNovo; am I screwed? You are able to get to System Config during the install process and from there to the Accessibility menu, where you can usually start the on-screen keyboard. That doesn't work here; no keyboard in sight... What if I do some ninja copy-paste with the mouse? Nope, no pasting allowed in the fields: username, password, etc. Is there a chance I can fix this somehow within the .iso file before I create the bootable USB drive I use to install from? Or do I have to buy a keyboard to be able to install Ubuntu?

    Read the article

  • Oracle Demantra BAL Migration

    - by Annemarie Provisero
    ADVISOR WEBCAST: Oracle Demantra BAL Migration PRODUCT FAMILY: Manufacturing December 7, 2011 at 8 am PT, 9 am MT, 11 am ET This one-hour live advisor webcast provides Application Consultants, Administrative and Technical Users, and Customers with an introduction to the BAL Migration utility and details the steps involved in the migration process. The information provided trains you on leveraging the BAL Migration utility to migrate Demantra application objects from one environment to another. TOPICS WILL INCLUDE: Introduction to BAL Migration BAL Migration Steps Schema Setups BAL Repository Object Migration A short, live demonstration (only if applicable) and a question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session ------------------------------------------------------------------------------------------------------------- The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Welch's Juices-up Its Inventory Management with Oracle Supply Chain

    - by [email protected]
    Supply & Demand Chain Executive recently published a great success story about Welch's implementation of "Take Supply Chain and G.SI to work with Oracle Process Manufacturing". The company says it's been able to improve operational control, inventory accuracy, visibility and order fulfillment by automating its processes across three production/warehousing locations nationwide. Improving warehouse and inventory management operations creates efficiencies across a high-velocity nationwide supply chain. Welch's production facilities were collecting more information than ever before on the flow of materials and inventory, but the company needed an effective and accurate method to organize and manage these data. Article found at: http://www.sdcexec.com/publication/article.jsp?pubId=1&id=12256&pageNum=2

    Read the article

  • Downloads killing internet on my home network

    - by Travis
    I am currently having a problem with my wireless. Whenever I try to download anything, it kills the internet for every other application (tabs within the same browser, browsers on other computers on the same network) except the process doing the download. This occurs with everything from downloading updates to ISOs; I am not using a torrent. It happens with upgrade downloads, browser downloads, or anything else. This problem does not occur when I use Windows 7 on the same computer, and it stops killing the internet for the other computers if I stop the download or shut Ubuntu down. I am using an ASUS G74SX laptop running Ubuntu 12.10 with GNOME 3.6. My wireless card is an Intel Corporation Centrino Wireless-N + WiMAX 6150 (rev 67). Thanks!
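    Not part of the original post, but a common first thing to try with these Intel Centrino Wireless-N adapters on Ubuntu is disabling 802.11n in the iwlwifi driver and checking whether a download still stalls other traffic:

        # assumption: the iwlwifi driver is in use (check with: lsmod | grep iwlwifi)
        echo "options iwlwifi 11n_disable=1" | sudo tee /etc/modprobe.d/iwlwifi-11n.conf
        # reload the driver, or simply reboot
        sudo modprobe -r iwlwifi && sudo modprobe iwlwifi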

    Read the article

  • Fusion HCM SaaS – Integration

    - by Kiran Mundy
    A typical implementation pattern we’re seeing with Fusion Apps early adopters is implementing a few Fusion HCM applications that bring the most benefit to their company with the least disruption to existing programs and interfaces. Very often this ends up being Fusion Goals & Performance, Talent, Compensation or Benefits, often with Taleo for recruiting. The implementation picture looks like what you see below: Here, you can see that all the “downstream integrations” from the on-premise Core HR are unaffected, because the master for employee data is still your on-premise Core HR system – all updates and new hires are made here (although they may be fed in from Taleo to start with). As a second phase, when customers migrate Core HR to Fusion HCM, they have to come up with a strategy to manage integrations to all their downstream applications that require employee details. For customers coming from EBS HR, a short-term strategy that allows for minimal impact is to extract employee data from Fusion (via HCM Extract), load the shared EBS HR tables (which are part of an EBS Financials install anyway), and let your downstream integrations continue to function based on this data, as shown below. If you are not coming from EBS HR and there are license implications, you may want to consider: Creating an on-premise warehouse for extracting data from Fusion Apps. Leveraging Fusion Apps Web Services (available to SaaS customers starting R7) to directly retrieve/write data to Fusion Apps.
    Integration Tools File Based Loader This is the primary mechanism for loading HCM data (both initial load and incremental updates) into Fusion HCM. Employee and related data can be uploaded into Fusion HCM using File Based Loader. Note that the ability to schedule File Based Loader to run on a pre-defined schedule will be available as a patch on top of Rel 5. HR2HR has been deprecated in favor of File Based Loader, but for existing customers using HR2HR, here are some sample scripts that show how to get more informative error messages. They can be run by creating data model SQL queries in BI Publisher. The scripts currently have hard-coded values for request id and loader batch id, which your developer will need to update to the correct values for you. The BI Publisher training session recorded on Apr 18th is available here (under "Recordings"). This will enable a somewhat technical resource to create a data model SQL query. Links to Documentation & Training Reference documentation for File Based Loader on docs.oracle.com FBL 1.1 MOS Doc Id 1533860.1 Sample demo data files for File Based Loader HCM SaaS Integrations ppt and recording.
    EBS APIs Loading Information into EBS Full or Shared HCM This could be candidate information being loaded from Taleo into EBS, or employee information being loaded from Fusion HCM into an EBS shared HR install (for downstream applications and EBS Financials). Oracle HRMS Product Family Publicly Callable Business Process APIs (A Reference Consolidation) [ID 216838.1] This is a guide to the EBS R12 Integration Repository accessible from an EBS instance. EBS HRMS Publicly Callable Business Process APIs in Release 11i & 12 [ID 121964.1]
    Fusion HCM Extract Fusion HCM Extract is the primary mechanism used to extract employee information from Fusion HCM. Refer to the "Configure Identity Sync" doc on MOS for additional mechanisms. Additional documentation (you'll need an oracle.com account to access): HCM Extracts User Guides (Rel 4 & 5) HCM Extract Entity/Attributes (Rel 5) HCM Extract User Guide (Rel 5) If you don’t have an oracle.com account, download the zipped HCM Extract Rel 5 Docs (click on File --> Download on the next screen). View Training Recordings on Fusion HCM Extract.
    Benefits Extract To set up the benefits extract, refer to the following guide. Page 2-15 of the user documentation describes how to use the benefits extract. Benefit enrollments can also be uploaded into Fusion Benefits. Instructions are here, along with a sample upload file. However, if the defined benefits extract does not meet your requirements, you can use BI Publisher (link to the BI Publisher presentation recording from Apr 18th) to create your own version of the benefits extract. You can start with the data model query underlying the benefits extract.
    Payroll Interface Fusion Payroll Interface enables you to capture personal payroll information, such as earnings and deductions, along with other data from Oracle Fusion Human Capital Management, and send that information to a third-party payroll provider. Documentation: Payroll interface guide Sample file DBIs used for the payroll interface. Usage patterns always accessible @ http://www.finapps.com

    Read the article

  • Best way to throw exception and avoid code duplication

    - by JF Dion
    I am currently writing code and want to make sure all the params that get passed to a function/method are valid. Since I am writing in PHP, I don't have access to the facilities of other languages like C, C++ or Java for checking parameter values and types: public function inscriptionExists($sectionId, $userId) // PHP vs. public boolean inscriptionExists(int sectionId, int userId) // Java So I have to rely on exceptions if I want to make sure that my params are both integers. Since I have a lot of places where I need to check for param validity, what would be the best way to create a validation/exception machine and avoid code duplication? I was thinking of a static factory (since I don't want to pass it to all of my classes) with a signature like: public static function factory ($value, $valueType, $exceptionType = 'InvalidArgumentException'); which would then call the right sub-process to validate based on the type. Am I on the right track, or am I going completely off the road and overthinking my problem?

    Read the article

  • Detecting 404 errors after a new site design

    - by James Crowley
    We recently re-designed Developer Fusion and as part of that we needed to ensure that any external links were not broken in the process. In order to monitor this, we used the awesome LogParser tool. All you need to do is open up a command prompt, navigate to the directory with your web site's log files in, and run a query like this: "c:\program files (x86)\log parser 2.2\logparser" "SELECT top 500 cs-uri-stem,count(*) FROM u_ex*.log WHERE sc-status=404 GROUP BY cs-uri-stem order by count(*) desc" -rtp:-1 topMissingUrls.txt And you've got a text file with the top 500 requested URLs that are returning 404. Simple!
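    A useful follow-up (not from the original post, and assuming your IIS logs include the referrer field) is to group by cs(Referer) as well, so you can see which pages or external sites still link to the missing URLs:

        "c:\program files (x86)\log parser 2.2\logparser" "SELECT TOP 100 cs(Referer), cs-uri-stem, COUNT(*) FROM u_ex*.log WHERE sc-status=404 GROUP BY cs(Referer), cs-uri-stem ORDER BY COUNT(*) DESC" -rtp:-1 topMissingReferers.txt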

    Read the article

  • Use Drive Mirroring for Instant Backup in Windows 7

    - by Trevor Bekolay
    Even with the best backup solution, a hard drive crash means you’ll lose a few hours of work. By enabling drive mirroring in Windows 7, you’ll always have an up-to-date copy of your data. Windows 7’s mirroring – which is only available in Professional, Enterprise, and Ultimate editions – is a software implementation of RAID 1, which means that two or more disks are holding the exact same data. The files are constantly kept in sync, so that if one of the disks fails, you won’t lose any data. Note that mirroring is not technically a backup solution, because if you accidentally delete a file, it’s gone from both hard disks (though you may be able to recover the file). As an additional caveat, having mirrored disks requires changing them to “dynamic disks,” which can only be read within modern versions of Windows (you may have problems working with a dynamic disk in other operating systems or in older versions of Windows). See this Wikipedia page for more information. You will need at least one empty disk to set up disk mirroring. We’ll show you how to mirror an existing disk (of equal or lesser size) without losing any data on the mirrored drive, and how to set up two empty disks as mirrored copies from the get-go.
    Mirroring an Existing Drive Click on the start button and type partitions in the search box. Click on the Create and format hard disk partitions entry that shows up. Alternatively, if you’ve disabled the search box, press Win+R to open the Run window and type in: diskmgmt.msc The Disk Management window will appear. We’ve got a small disk, labeled OldData, that we want to mirror on a second disk of the same size. Note: The disk that you will use to mirror the existing disk must be unallocated. If it is not, then right-click on it and select Delete Volume… to mark it as unallocated. This will destroy any data on that drive. Right-click on the existing disk that you want to mirror. Select Add Mirror…. Select the disk that you want to use to mirror the existing disk’s data and press Add Mirror. You will be warned that this process will change the existing disk from basic to dynamic. Note that this process will not delete any data on the disk! The new disk will be marked as a mirror, and it will start copying data from the existing drive to the new one. Eventually the drives will be synced up (it can take a while), and any data added to the E: drive will exist on both physical hard drives.
    Setting Up Two New Drives as Mirrored If you have two new equal-sized drives, you can format them to be mirrored copies of each other from the get-go. Open the Disk Management window as described above. Make sure that the drives are unallocated. If they’re not, and you don’t need the data on either of them, right-click and select Delete volume…. Right-click on one of the unallocated drives and select New Mirrored Volume…. A wizard will pop up. Click Next. Click on the drives you want to hold the mirrored data and click Add. Note that you can add any number of drives. Click Next. Assign it a drive letter that makes sense, and then click Next. You’re limited to using the NTFS file system for mirrored drives, so enter a volume label, enable compression if you want, and then click Next. Click Finish to start formatting the drives. You will be warned that the new drives will be converted to dynamic disks. And that’s it! You now have two mirrored drives. Any files added to E: will reside on both physical disks, in case something happens to one of them.
    Conclusion While the switch from basic to dynamic disks can be a problem for people who dual-boot into another operating system, setting up drive mirroring is an easy way to make sure that your data can be recovered in case of a hard drive crash. Of course, even with drive mirroring, we advocate regular backups to external drives or online backup services.
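    If you prefer a command line over the Disk Management snap-in, roughly the same setup can be scripted with diskpart. The disk and volume numbers below are hypothetical, the sequence is a sketch from memory rather than a tested recipe, and converting to dynamic disks carries the same caveats described above:

        diskpart
        DISKPART> list disk
        DISKPART> select disk 1
        DISKPART> convert dynamic
        DISKPART> select disk 2
        DISKPART> convert dynamic
        DISKPART> list volume
        DISKPART> select volume 3
        DISKPART> add disk=2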

    Read the article

  • Friday Fun: Daisy in Wonderland

    - by Asian Angel
    Are you suffering the effects of another grinding week at work? Then it is time for you to relax for a little bit and have some fun! In this week’s game you get to engage in inter-dimensional travel as you help Daisy try to return home.

    Read the article

  • StreamInsight/SSIS Integration White Paper

    - by Roman Schindlauer
    This has been tweeted all over the place, but we still want to give it proper attention here in our blog: SSIS (SQL Server Integration Services) is widely used by today’s customers to transform data from different sources and load it into a SQL Server data warehouse or other targets. StreamInsight can process large amounts of real-time as well as historical data, making it easy to do temporal and incremental processing. We have put together a white paper to discuss how to bring StreamInsight and SSIS together and leverage both platforms to get crucial insights faster and easier. From the paper’s abstract: The purpose of this paper is to provide guidance for enriching data integration scenarios by integrating StreamInsight with SQL Server Integration Services. Specifically, we looked at the technical challenges and solutions for such integration, by using a case study based on a customer scenario in the telecommunications sector. Please take a look at this paper and send us your feedback! Using SQL Server Integration Services and StreamInsight Together Regards, Ping Wang

    Read the article

  • Call for Papers Ends March 21

    - by jack.flynn
    Have Something to Say? Better Say So Now. The Call for Papers for Oracle OpenWorld and the Develop Stream of JavaOne+Develop ends at midnight on Sunday, March 21. So if you want to be a part of the most influential IT events of the year, don't let this chance pass you by. This year offers opportunities to speak out about some new subjects: Oracle OpenWorld adds a whole new Server and Storage Systems stream, including Sun servers, Sun storage and tape, and Oracle Solaris operating system. And the Develop audience should be larger and more energetic than ever now that it's co-located with JavaOne. If you have something important to say, this is the time to let us know. Find all the information on the Call for Papers process, timeline, and guidelines here.

    Read the article

  • Ranking - an Introduction

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved. Ranking is quite common on the internet. Readers are asked to rank their latest reading by clicking on one of 5 (sometimes 10) stars. The number of stars is then converted to a number, and the average number of stars as selected by all the readers is proudly (or shamefully) displayed for future readers. SharePoint 2007 lacked this feature altogether. SharePoint 2010 allows the users to rank items in a list or documents in a library (the two are actually the same because a library is actually a list). But in SP2010 the computation of the average is done later on a timer rather than on-the-spot as it should be. I suspect that the reason for this shortcoming is that they did not involve a mathematician! Let me explain. Ranking is kept in a related list. When a user rates a document, an item is added to the rank-list with the item id, the user name, and the number of stars. The fact that a user already ranked an item prevents him from ranking it again. This prevents the creator of the item from asking his mother to rank it a 5 and do it 753 times, thus stuffing the ballot. Some systems will allow a user to change his rating, and this is done by updating the rank-list item. Now, when the timer kicks off, the list is spanned and for each item the rank-list items containing this id are summed up and divided by the number of votes, thus yielding the new average. This is obviously very time consuming and very server intensive. In the 18th century an early actuary named James Dodson used what the great Augustus De Morgan (of De Morgan’s law) later named Commutation tables. The labor involved in computing a life insurance premium was staggering and also very error prone. Clerks with pencil and paper would multiply and add mountains of numbers to do the task. The more steps, the greater the probability of error and the more expensive the process. Commutation tables created a “summary” of many steps and reduced the work 100-fold. So had Microsoft taken a lesson from the history of computation, they would have developed a much faster way of rating that can be done in real time and is far less CPU intensive. How do we do this? We use a form of commutation. We always keep the number of votes and the total of stars. One simple division gives us the average. So we write an event receiver. When a vote is added, we just add the stars to the total-stars and 1 to the number of votes. We then recompute the average. When a vote is updated, we reduce the total by the old vote, increase it by the new vote and leave the number of votes the same. Again we do the division to get the new average. When a vote is deleted (highly unlikely and maybe even prohibited), we reduce the total by that vote and reduce the number of votes by 1… Gone are the days of spanning lists, counting items, and tallying votes, and we have no need for a timer process to run it all. This is the first of a few treatises on ranking. Even though I discussed the math and the history thereof, here I am only going to solve the presentation issue. I wanted to create the CSS and JScript needed to display the stars, create the various effects like hovering and clicking (onmouseover, onmouseout, onclick, etc.), and I wanted to create a general solution with any number of stars. When I had it all done, I created the ranking game so that I could test it. The game is interesting in and of itself, so here it is (or go to the games page and select “rank the stars”).
    BTW, when you play it, look at the source code and see how it was all done. Next: how the 5 stars are displayed in the New and Update forms. When the whole set of articles is done, you’ll be able to create the complete solution. That’s all folks!
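    To make the arithmetic concrete, here is a minimal sketch of the running-total idea described above, written in plain Python for illustration only; the article's actual solution would live in a SharePoint event receiver and persist these two numbers on the list item.

        # Keep vote_count and star_total per item so the average is always one division away.
        class ItemRating:
            def __init__(self):
                self.vote_count = 0
                self.star_total = 0

            def add_vote(self, stars):
                self.vote_count += 1
                self.star_total += stars

            def update_vote(self, old_stars, new_stars):
                # the number of votes is unchanged; only the total moves
                self.star_total += new_stars - old_stars

            def delete_vote(self, stars):
                self.vote_count -= 1
                self.star_total -= stars

            def average(self):
                return float(self.star_total) / self.vote_count if self.vote_count else 0.0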

    Read the article

  • Autoscaling in a modern world… last chapter

    - by Steve Loethen
    As we all know as coders, things like logging are never important.  Our code will work right the first time.  So, you can understand my surprise when the first time I deployed the autoscaling worker role to the actual Azure fabric, it did not scale.  I mean, it worked on my machine.  How dare the datacenter argue with that.  So, how did I track down the problem?  (turns out, it was not so much code as lack of the right certificate)  When I ran it local in the developer fabric, I was able to see a wealth of information.  Lots of periodic status info every time the autoscaler came around to check on my rules and decide to act or not.  But that information was not making it to Azure storage.  The diagnostics were not being transferred to where I could easily see and use them to track down why things were not being cooperative.  After a bit of digging, I discovered the problem.  You need to add a bit of extra configuration code to get the correct information stored for you.  I added the following to my app.config:
        <system.diagnostics>
          <sources>
            <source name="Autoscaling General" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
              <listeners>
                <add name="AzureDiag" />
                <remove name="Default"/>
              </listeners>
            </source>
            <source name="Autoscaling Updates" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
              <listeners>
                <add name="AzureDiag" />
                <remove name="Default"/>
              </listeners>
            </source>
          </sources>
          <switches>
            <add name="SourceSwitch" value="Verbose, Information, Warning, Error, Critical" />
          </switches>
          <sharedListeners>
            <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiag"/>
          </sharedListeners>
          <trace>
            <listeners>
              <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiagnostics">
                <filter type="" />
              </add>
            </listeners>
          </trace>
        </system.diagnostics>
    Suddenly all the rich tracing info I needed was filling up my storage account.  After a few cycles of attempting to scale, I identified the cert problem, uploaded a correct certificate, and away it went.  I hope this was helpful.

    Read the article

  • The Beginner’s Guide to Customizing Your Android Home Screen

    - by Chris Hoffman
    If you’re just getting started with Android, its customizability can seem a bit daunting. We’ll walk you through customizing your Android home screen, taking advantage of widgets, and getting third-party launchers with more features. The screenshots for this article were taken on Android 4.2. If you’re using an older device, the exact process will look a little different, but you should be able to follow along anyway.

    Read the article
