Search Results

Search found 34654 results on 1387 pages for 'start job'.


  • Does seeding torrents affect the harddisk RAM caching?

    - by satuon
    I've downloaded a lot of torrent files, and while I'm seeding them I've noticed that very often when I start the browser it's slow and the hard disk activity indicator is on. Usually when I start a program it gets cached in RAM, so starting it again is very quick, and with 3 GB of RAM it usually stays cached nearly forever. But when my torrent client is seeding, it seems that after an hour the programs I ran are no longer cached in RAM. I suspect this is because the disk reads the torrent client performs are cached and eventually fill up RAM. But I don't think they need to be cached, as the data is read only once and is unlikely to be read again soon. So my questions are: is it actually working the way I think, and if so, is it possible in principle to prevent those disk reads from being cached? I can try to edit the source code of the program.
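    In principle yes: POSIX offers posix_fadvise() with POSIX_FADV_DONTNEED, which lets a program hint that data it has just read will not be needed again, which is exactly what a seeding client wants. Below is only a minimal Python sketch of that idea (the file path and chunk bounds are hypothetical), not something taken from the question:

        import os

        def read_piece_without_caching(path, offset, length):
            # Read one chunk for seeding, then hint the kernel that the
            # pages covering this range will not be needed again.
            fd = os.open(path, os.O_RDONLY)
            try:
                os.lseek(fd, offset, os.SEEK_SET)
                data = os.read(fd, length)
                os.posix_fadvise(fd, offset, length, os.POSIX_FADV_DONTNEED)
                return data
            finally:
                os.close(fd)

        # Hypothetical usage: read a 1 MB piece of a seeded file.
        # read_piece_without_caching("/data/torrents/some.iso", 0, 1024 * 1024)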

    Read the article

  • Questions about Microsoft's new Cloud certification

    - by makerofthings7
    I'm evaluating taking the cloud certification exams from Microsoft, and have a few questions How highly do you think employers will value this exam? What job roles would require this cert? In your personal experience, how would this certification be weighed against other factors such as real world experience, other certifications, and having a Bachelors degree? If you mention that other certifications are more valued, which ones are they?

    Read the article

  • New Features and Changes in OIM11gR2

    - by Abhishek Tripathi
    Web Consoles in OIM 11gR2
    In 11gR1 there were three admin web consoles: the Self Service Console, the Administration Console, and the Advanced Administration Console. In OIM 11gR2, the Self Service and Administration consoles have been combined and are now called the Identity Self Service Console (http://host:port/identity). This console has three areas: managing your own profile (My Profile), managing requests such as requesting application instances and approving requests (Requests), and general administration tasks such as creating and managing users, roles, organizations, attestation, etc. (Administration). OIM 11gR2 also adds a new sysadmin console for administrators (http://host:port/sysadmin), which includes some of the Design Console functions in addition to general administration features.

    Application Instances
    An application instance is the object that is provisioned to a user. Application instances are checked out in the catalog, and a user can request application instances via the catalog.
    · In OIM 11gR2, resources and entitlements are bundled into an application instance, which the user can select and request from the catalog.
    · An application instance is a combination of an IT Resource and a Resource Object (RO). You cannot create another application instance with the same RO and IT Resource if that pair already exists for some other application instance; at least one of the two must have a different name.
    · If you want users of a particular organization to be able to request an application instance through the catalog, the application instance must be attached to that organization.
    · An application instance can be associated with multiple organizations.
    · An application instance can also have entitlements associated with it. An entitlement can include roles/groups or a responsibility.
    · Application instances are published to the catalog by the scheduled task "Catalog Synchronization Job".
    · An application instance can have a child/parent relationship, where the child application instance inherits all attributes of the parent application instance.
    Important point to remember about application instances: if you delete an application instance in OIM 11gR2 and create a new one with the same name, OIM will not allow it. It throws an error saying an application instance already exists with the same Resource Object and IT Resource. This is because some references to the deleted application instance have not yet been removed in OIM. So to completely delete an application instance from OIM, you must:
    1. Delete the application instance from the sysadmin console.
    2. Run the Application Instance Post Delete Processing Job in Revoke/Delete mode.
    3. Run the Catalog Synchronization Job.
    Once done, you should be able to create a new application instance with the previous RO and IT Resource names.

    Catalog
    The catalog allows users to request roles, application instances, and entitlements in an application.
    Catalog items – roles, application instances, and entitlements that can be requested via the catalog are called catalog items.
    Detailed information (attributes of a catalog item):
    · Category – each catalog item is associated with one and only one category. Catalog administrators can provide a value for a catalog item.
    · Tags – search keywords helpful in searching the catalog. When users search the catalog, the search is performed against the tags. To define a tag, go to Catalog, search for the resource, select the resource, and update the Tag field with a custom search keyword.
    Tags are of three types:
    a) Auto-generated tags: the catalog synchronization process auto-tags the catalog item using the item type, item name, and item display name.
    b) User-defined tags: additional keywords entered by the catalog administrator.
    c) Arbitrary tags: if a metadata attribute is marked as searchable when it is defined, it also becomes part of the tags.

    Sandbox
    Sandbox is a new feature introduced in OIM 11gR2. It serves as a temporary development environment for UI customizations so that they don't affect other users before they are published and linked to the existing OIM UI. All UI customizations should be done inside a sandbox; this ensures that your changes/modifications don't affect other users until you have finalized the changes and the customization is complete. Once UI customization is completed, the sandbox must be published for the customizations to be merged into the existing UI and become available to other users. Creating and activating a sandbox is mandatory for customizing the UI: without an active sandbox, OIM does not allow you to customize any page.
    a) Before you perform any such activity in OIM (like creating/modifying forms or custom attributes, creating application instances, or adding roles/attributes to the catalog), you must create a sandbox and activate it.
    b) You can create multiple sandboxes in OIM, but only one sandbox can be active at any given time.
    c) You can export/import a sandbox to move changes from one environment to another.
    Creating a sandbox: to create a sandbox, log in to Identity Self Service (/identity) or System Administration (/sysadmin), click the "Sandboxes" link at the top right, and then click Create Sandbox.
    Publishing a sandbox: before you publish a sandbox, it is recommended to back up MDS. Use Enterprise Manager (/em) to back up MDS by following the steps below.
    Creating an MDS backup:
    1. Log in to Oracle Enterprise Manager as the administrator.
    2. On the landing page, click oracle.iam.console.identity.self-service.ear(V2.0).
    3. From the Application Deployment menu at the top, select MDS configuration.
    4. Under Export, select the "Export metadata documents to an archive on the machine where this web browser is running" option, and then click Export. All the metadata is exported in a ZIP file.

    Creating password policies through the admin console: in 11gR1 and previous versions, password policies could be created and applied via the OIM Design Console only. From OIM 11gR2 onwards, password policies can be created and assigned using the admin console as well.
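    Where the MDS backup needs to be scripted rather than taken through the Enterprise Manager UI, the same export can usually be driven from WLST; the sketch below is an assumption based on the generic WLST exportMetadata command (the connection details, application name, and target path are hypothetical placeholders, so check them against your own installation):

        # WLST (Jython) sketch - run inside wlst.sh, not part of the original write-up
        connect('weblogic', 'password', 't3://adminhost:7001')   # hypothetical admin server URL
        exportMetadata(application='OIMMetadata',                # assumed MDS application name for OIM
                       server='oim_server1',                     # hypothetical managed server name
                       toLocation='/tmp/mds_backup')             # directory to export the metadata into
        disconnect()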

    Read the article

  • Where are my sub templates?

    - by Tim Dexter
    This one is for standalone/BIEE uses of Publisher. All the ERP/CRM/HCM folks are already catered for and can tuck into a nut cutlet and arugula salad. Sorry, I have just watched Food Inc, and even if only half of it is true, I'm still on a crusade in my house against mass produced food. Wake up World! If you have ventured into the world of sub templating, you'll be reaping some development benefit. In terms of shared report components and calculations they are very useful. Just exporting all of your report headers and footers to a single sub template can potentially save you hours and hours of work and make you look like a star. If someone in management gets it into their head that they would like Comic Sans rather than Arial in their report headers, it's a 10-minute job rather than 100 hours! What about the rest of the report content? I hear you cry. It's coming in 11g: full master template support. Your management wants bright blue borders with yellow backgrounds for all the tables in your reports? A 5-minute job! Getting back to sub templates and my comment about all the ERP/CRM/HCM folks being catered for: in the standalone release there is no out-of-the-box directory for you to drop your sub templates into. Dropping them into the main report directory would make sense, but they are not accessible there via a URL. An oversight on our part and something that will be addressed in 11g. Sub templates are now a first class citizen in the world of BIP; you can upload them and BIP will know what to do with them. But what do you do right now? The easiest place to put them where BIP can 'see' them is to create a directory under the xmlpserver install directory in the J2EE container e.g. $J2EE_HOME/xmlpserver/xmlpserver/subtemplates You can call it whatever you want, but when the server is started up, that directory is accessible via a URL i.e. http://tdexter:9704/xmlpserver/subtemplates/mysub.rtf. You can therefore put it into the top of your main templates and call the sub template. <?import: http://tdexter:9704/xmlpserver/subtemplates/mysub.rtf?> Of course, you can drop them anywhere you want; they just need to be in a web server mountable directory. Enjoy the arugula!

    Read the article

  • Good Practices in Software Outsourcing

    When an organization has decided to outsource a job to another company outside its organizational structure, it may have to deal with risks pertaining to software outsourcing. Hiring a stranger to ge... [Author: Chirag Vyas - Web Design and Development - April 09, 2010]

    Read the article

  • Have I fixed my partition problem with os x 10.5.8? Are my GPT and MBR back to normal?

    - by David Schaap
    I'm new to Linux and I have overstepped my abilities. I tried dual booting OS X 10.5.8 with Ubuntu 11.10 using rEFIt, but I've been having problems with partitioning. Instead of enduring more headaches, I've made the decision to simply use Ubuntu on VirtualBox. I've tried to return my HDD to normal, but I am looking for confirmation that my partitions are OK. Here is the report from partition inspector:
    *** Report for internal hard disk ***
    Current GPT partition table:
    # Start LBA End LBA Type
    1 409640 233917359 Mac OS X HFS+
    Current MBR partition table:
    # A Start LBA End LBA Type
    1 1 234441647 ee EFI Protective
    MBR contents:
    Boot Code: GRUB
    Partition at LBA 409640:
    Boot Code: None
    File System: HFS Extended (HFS+)
    Listed in GPT as partition 1, type Mac OS X HFS+
    Also, my HDD's root directory has a bunch of extra folders in it, and they appear to be Ubuntu related, although Ubuntu is no longer installed: folders like bin, sbin, cores, var, user, and so on. Those folders aren't supposed to be there, right? Thanks in advance.

    Read the article

  • First 10 programs in a new scripting language

    - by pro_metedor
    When a person is learning a new scripting language like bash, Python, Perl, or Pike: what kind of simple (yet practical) problems should they work through before you could say they are comfortable enough with the language to approach the more complex, but still practical, problems encountered in an everyday job? In other words, which problems would you give that person to solve to make sure that he/she is familiar with the scripting language?
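    To make the question concrete, one example of the kind of small-but-practical exercise being asked about (an illustration added here, not the asker's own) is a word-frequency counter: it touches files, command-line arguments, hash tables, and sorting in a few lines. In Python it might look like:

        import sys
        from collections import Counter

        def word_counts(path):
            # Count how often each whitespace-separated word appears in a file.
            with open(path, encoding="utf-8") as f:
                return Counter(f.read().lower().split())

        if __name__ == "__main__":
            # Usage: python wordfreq.py somefile.txt
            for word, n in word_counts(sys.argv[1]).most_common(10):
                print("%6d  %s" % (n, word))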

    Read the article

  • How to autohide Unity 2D?

    - by ph1b
    In the regular Unity (3D) I used CompizConfig Settings Manager, where I could configure the Unity plugin to autohide the left launcher and not show it when moving my mouse to the left edge. That was perfect for using Docky. But with Unity 2D the Unity settings don't have any effect. How can I completely hide the launcher and have it shown only when pressing the Windows key? Even with the following:
    dconf write /com/canonical/unity-2d/launcher/hide-mode 1
    dconf write /com/canonical/unity-2d/launcher/use-strut false
    it still opens the launcher when I move my mouse to the left edge.

    Read the article

  • Supercharge Your Website and See Traffic Flooding at Your Door

    "Thanks to your tips and tactics, my website is supercharged and I have more visitors than I can handle" (of course, he is able to handle them), exclaimed my friend when I entered his home for the super-success party that he threw. It was not long ago that, after losing his job to the economic meltdown, he started an online business. Since he was still under the influence of stress, he wanted to give it all up, because initially his website remained almost invisible.

    Read the article

  • Friday Fun: Mummy Blaster

    - by Asian Angel
    In this week’s game your mission is to destroy all the mummies waiting for you in this long forgotten temple. All that you have is a limited amount of explosives and your strategic skills to win the day! Can you get the job done?

    Read the article

  • Does learning to develop for iOS create a lock-in?

    - by Jungle Hunter
    If I begin my career (first job) with developing on the iOS platform, does that lock me in into iOS and Mac OS X development only? By locking me in I mean will that create barriers for me to switch technologies as I would be mainly working with Objective-C. If yes, does that make my career choices limited? I'm interested in comparing this with Android development, which if pursued will leave me with Java skills (correct me if I'm wrong) which I can use elsewhere.

    Read the article

  • Problem with apt-get [install, purge, autoremove, upgrade]

    - by Ark
    I am trying to install openvpn using apt-get, but I get an error during the process [as far as I understand, the issue is with unattended-upgrades, which I could not upgrade due to the apt-get problem :/]. I cannot upgrade, install, autoremove or purge anything else [which I need to do since /boot is full; for upgrading I did run update before trying]....
    Trace [on sudo apt-get autoremove]:
    0 upgraded, 0 newly installed, 79 to remove and 421 not upgraded.
    35 not fully installed or removed.
    Need to get 24.1 kB of archives.
    After this operation, 140 MB disk space will be freed.
    Do you want to continue [Y/n]? y
    Get:1 http://us.archive.ubuntu.com/ubuntu/ oneiric/main unattended-upgrades all 0.73ubuntu1 [24.1 kB]
    Fetched 24.1 kB in 0s (31.2 kB/s)
    Preconfiguring packages ...
    (Reading database ... 287206 files and directories currently installed.)
    Removing flashplugin-downloader:i386 ...
    Removing libasound2-plugins:i386 ...
    Removing libpulse0:i386 ...
    Removing libsndfile1:i386 ...
    Removing libvorbisenc2:i386 ...
    Removing libvorbis0a:i386 ...
    Processing triggers for libc-bin ...
    ldconfig deferred processing now taking place
    (Reading database ... 287174 files and directories currently installed.)
    Preparing to replace unattended-upgrades 0.73ubuntu1 (using .../unattended-upgrades_0.73ubuntu1_all.deb) ...
    Checking for running unattended-upgrades:
    Traceback (most recent call last):
      File "/usr/share/unattended-upgrades/unattended-upgrade-shutdown", line 27, in <module>
        import apt_pkg
    ImportError: No module named apt_pkg
    invoke-rc.d: initscript unattended-upgrades, action "stop" failed.
    dpkg: warning: subprocess old pre-removal script returned error exit status 1
    dpkg - trying script from the new package instead ...
    Checking for running unattended-upgrades:
    Traceback (most recent call last):
      File "/usr/share/unattended-upgrades/unattended-upgrade-shutdown", line 27, in <module>
        import apt_pkg
    ImportError: No module named apt_pkg
    invoke-rc.d: initscript unattended-upgrades, action "stop" failed.
    dpkg: error processing /var/cache/apt/archives/unattended-upgrades_0.73ubuntu1_all.deb (--unpack):
      subprocess new pre-removal script returned error exit status 1
    update-rc.d: warning: unattended-upgrades start runlevel arguments (2 3 4 5) do not match LSB Default-Start values (none)
    update-rc.d: warning: unattended-upgrades stop runlevel arguments (0 1 6) do not match LSB Default-Stop values (0 6)
    Checking for running unattended-upgrades:
    Traceback (most recent call last):
      File "/usr/share/unattended-upgrades/unattended-upgrade-shutdown", line 27, in <module>
        import apt_pkg
    ImportError: No module named apt_pkg
    invoke-rc.d: initscript unattended-upgrades, action "start" failed.
    dpkg: error while cleaning up:
      subprocess installed post-installation script returned error exit status 1
    Errors were encountered while processing:
      /var/cache/apt/archives/unattended-upgrades_0.73ubuntu1_all.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    Maybe I missed something basic, but I would appreciate pointers on solving the issue. Thanks

    Read the article

  • Python 3.1 books still directly applicable to learning Python 2.7?

    - by jaysun
    I need to learn Python (v2.7) for my job, and I am looking for the best intro book for professional programmers. I found (via Amazon) that "The Quick Python Book" is the best, but it's for Python 3.1. I know there are a lot of similarities between 2.7 and 3.1, and I read somewhere that you can mostly use 3.1 syntax in 2.7 as good "future practice". Can someone with experience please verify that a book for learning Python 3 would still be directly applicable to 2.7? Thank you very much. edit: "The Quick Python Book" is for 3.1
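    As a quick illustration of how far 2.7 lets you write 3.x-style code (an example added here, not from the question): the __future__ imports in 2.7 switch on several of the headline Python 3 behaviors, so a small script like this runs identically on both versions.

        # Behaves the same on Python 2.7 and Python 3.x
        from __future__ import print_function, division

        print("7 / 2 =", 7 / 2)    # true division: prints 3.5, as in Python 3
        print("7 // 2 =", 7 // 2)  # floor division: prints 3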

    Read the article

  • Summit Time!

    - by Ajarn Mark Caldwell
    Boy, how time flies!  I can hardly believe that the 2011 PASS Summit is just one week away.  Maybe it snuck up on me because it’s a few weeks earlier than last year.  Whatever the cause, I am really looking forward to next week.  The PASS Summit is the largest SQL Server conference in the world and a fantastic networking opportunity thrown in for no additional charge.  Here are a few thoughts to help you maximize the week. Networking As Karen Lopez (blog | @DataChick) mentioned in her presentation for the Professional Development Virtual Chapter just a couple of weeks ago, “Don’t wait until you need a new job to start networking.”  You should always be working on your professional network.  Some people, especially technical-minded people, get confused by the term networking.  The first image that used to pop into my head was the image of some guy standing, awkwardly, off to the side of a cocktail party, trying to shmooze those around him.  That’s not what I’m talking about.  If you’re good at that sort of thing, and you can strike up a conversation with some stranger and learn all about them in 5 minutes, and walk away with your next business deal all but approved by the lawyers, then congratulations.  But if you’re not, and most of us are not, I have two suggestions for you.  First, register for Don Gabor’s 2-hour session on Tuesday at the Summit called Networking to Build Business Contacts.  Don is a master at small talk, and at teaching others, and in just those two short hours will help you with important tips about breaking the ice, remembering names, and smooth transitions into and out of conversations.  Then go put that great training to work right away at the Tuesday night Welcome Reception and meet some new people; which is really my second suggestion…just meet a few new people.  You see, “networking” is about meeting new people and being friendly without trying to “work it” to get something out of the relationship at this point.  In fact, Don will tell you that a better way to build the connection with someone is to look for some way that you can help them, not how they can help you. There are a ton of opportunities as long as you follow this one key point: Don’t stay in your hotel!  At the least, get out and go to the free events such as the Tuesday night Welcome Reception, the Wednesday night Exhibitor Reception, and the Thursday night Community Appreciation Party.  All three of these are perfect opportunities to meet other professionals with a similar job or interest as you, and you never know how that may help you out in the future.  Maybe you just meet someone to say HI to at breakfast the next day instead of eating alone.  Or maybe you cross paths several times throughout the Summit and compare notes on different sessions you attended.  And you just might make new friends that you look forward to seeing year after year at the Summit.  Who knows, it might even turn out that you have some specific experience that will help out that other person a few months’ from now when they run into the same challenge that you just overcame, or vice-versa.  But the point is, if you don’t get out and meet people, you’ll never have the chance for anything else to happen in the future. 
    One more tip for shy attendees of the Summit: if you can't bring yourself to strike up conversation with strangers at these events, then at the least, after you sit through a good session that helps you out, go up to the speaker, introduce yourself, and thank them for taking the time and effort to put together their presentation.  Ideally, when you do this, tell them WHY it was beneficial to you (e.g. "Now I have a new idea of how to tackle a problem back at the office.")  I know you think the speakers are all full of confidence and are always receiving a ton of accolades and applause, but you're wrong.  Most of them will be very happy to hear first-hand that all the work they put into getting ready for their presentation is paying off for somebody.
    Training
    With over 170 technical sessions at the Summit, training is what it's all about, and the training is fantastic!  Of course there are the big-name trainers like Paul Randal, Kimberly Tripp, Kalen Delaney, Itzik Ben-Gan and several others, but I am always impressed by the quality of the training put on by so many other "regular" members of the SQL Server community.  It is amazing how you don't have to be a published author or otherwise recognized as an "expert" in an area in order to make a big impact on others just by sharing your personal experience and lessons learned.  I would rather hear the story of, and lessons learned from, "some guy or gal" who has actually been through an issue and came out the other side, than I would a trained professor who is speaking just from theory or an intellectual understanding of a topic. In addition to the three full days of regular sessions, there are also two days of pre-conference intensive training available.  There is an extra cost to this, but it is a fantastic opportunity.  Think about it: you're already coming to this area for training, so why not extend your stay a little bit and get some in-depth training on a particular topic or two?  I did this for the first time last year.  I attended one day of extra training and it was well worth the time and money.  One of the best reasons for it is that I am extremely busy at home with my regular job and family, so it was hard to carve out the time to learn about the topic on my own.  It worked out so well last year that I am doubling up and doing two days of "pre-cons" this year. And then there are the DVDs.  I think these are another great option.  I used the online schedule builder to get ready and have an idea of which sessions I want to attend and when they are (much better than trying to figure this out at the last minute every day).  But the problem that I have run into (seems this happens every year) is that nearly every session block has two different sessions that I would like to attend.  And some of them have three!  ACK!  That won't work!  What is a guy supposed to do?  Well, one option is to purchase the DVDs, which are recordings of the audio and projected images from each session, so you can continue to attend sessions long after the Summit is officially over.  Yes, many (possibly all) of these also get posted online and attendees can access those for no extra charge, but those are not necessarily all available as quickly as the DVD recordings are, and the DVDs are often more convenient than downloading, especially if you want to share the training with someone who was not able to attend in person. Remember, I don't make any money or get any other benefit if you buy the DVDs or from anything else that I have recommended here.  
These are just my own thoughts, trying to help out based on my experiences from the 8 or so Summits I have attended.  There is nothing like the Summit.  It is an awesome experience, fantastic training, and a whole lot of fun which is just compounded if you’ll take advantage of the first part of this article and make some new friends along the way.

    Read the article

  • Creating a python android application

    - by Harry
    I need help creating an Android app in Python. I'm creating an actual Android game in Python to use on my phone. I need suggestions for what kind of app people would prefer. Anything you have always wanted on your phone but no one has made? Please post some suggestions below. I will start writing the code soon and will keep updating this post or creating new ones asking new questions, so please keep an eye out. I also need help and software recommendations for how to start writing the code and how to test it. Thanks in advance.
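    One commonly used route for this (a suggestion added here, not something from the post) is the Kivy framework: apps are written in plain Python and can later be packaged for Android (e.g. with Kivy's buildozer tool, which is a separate step). A minimal Kivy app is only a few lines:

        from kivy.app import App
        from kivy.uix.label import Label

        class HelloApp(App):
            def build(self):
                # The widget returned here becomes the root of the UI.
                return Label(text="Hello from Python on Android!")

        if __name__ == "__main__":
            HelloApp().run()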

    Read the article

  • User-Defined Customer Events & their impact (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific customer event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within them, and the associated SP Condition(s) on the template. CC&B uses the service point type, its state, and the referenced customer event to determine which field activity type to generate.
    Customer events available in the base product include:
    · Cut for Non-payment (CNP)
    · Disconnect Warning (DIWA)
    · Reconnect for Payment (REPY)
    · Reread (RERD)
    · Stop Service (STOP)
    · Start Service (STRT)
    · Start/Stop (STSP)
    Note the field values/codes defined for each event.
    CC&B also provides the flexibility to define a new set of customer events. These can be defined in the Look Up CUST_EVT_FLG; values from the Look Up are used on the Field Activity Type Profile Template page.
    So what's the use of having user-defined customer events? And how will the system detect such events in order to create field activities? Well, the system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type of Create Field Activities. This way you can create additional field activities of a specific field activity type for user-defined customer events.
    One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of field activity was generated for disconnection purposes. The field activity type was determined by the system from the Field Activity Type Profile referenced for the SP type, the SP's state, and the referenced user-defined customer event. All was working well until they realized that in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.
    Basically, the Post Cancel Algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt.
    So what exactly was happening? Now we come to the actual question: what is the impact of having a user-defined customer event?
    System-defined/base customer events are hard-coded across the entire system; there is an impact even if you remove one of those customer event entries from the Look Up. User-defined customer events, however, are not recognized by the system anywhere else except in the severance process, as described above. There are a few programs with routines that first validate the completion of disconnection field activities raised as a result of the customer event CNP - Cut for Non-payment before performing other associated actions. One such program is the Post Cancel Algorithm referenced on a Severance Process Template, generally used to reconnect services which were disconnected by another Severance Event, specifically CNP - Cut for Non-payment.
    The Post Cancel Algorithm provided by the product, SEV POST CAN, does the following (below is the algorithm's description):
    "This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter."
    Notice the phrase "completed 'disconnect' event": this algorithm implicitly checks for field activities in Completed status that were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event. Now if we look back at the customer's issue, we can see that the Post Cancel Algorithm was triggered, but it was not able to find any 'Completed' CNP - Cut for Non-payment related field activity, and hence was not able to start a reconnection severance process. This was because a field activity had been generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead.
    To conclude, if you introduce new customer events that extend or simulate base customer events (the ones included in the base product), ensure that there is no other impact, direct or indirect, on the other business functions that the application has to offer.

    Read the article

  • PASS Summit 2011 – Part III

    - by Tara Kizer
    Well, we're about a month past PASS Summit 2011, and yet I haven't finished blogging my notes! Between work and home life, I haven't been able to come up for air in a bit.  Now on to my notes…
    On Thursday of the PASS Summit 2011, I attended Klaus Aschenbrenner's (blog|twitter) "Advanced SQL Server 2008 Troubleshooting", Joe Webb's (blog|twitter) "SQL Server Locking & Blocking Made Simple", Kalen Delaney's (blog|twitter) "What Happened? Exploring the Plan Cache", and Paul Randal's (blog|twitter) "More DBA Mythbusters".  I think my head grew two times in size from the Thursday sessions.  Just WOW!
    I took a ton of notes in Klaus' session.  He took a deep dive into how to troubleshoot performance problems.  Here is how he goes about solving a performance problem:
    1. Start by checking the wait stats DMV
    2. System health
    3. Memory issues
    4. I/O issues
    I normally start with blocking and then hit the wait stats.  Here's the wait stat query (Paul Randal's) that I use when working on a performance problem.  He highlighted a few waits to be aware of, such as WRITELOG (indicates an I/O subsystem problem), SOS_SCHEDULER_YIELD (indicates a CPU problem), and PAGEIOLATCH_XX (indicates an I/O subsystem problem or a buffer pool problem).  Regarding memory issues, Klaus recommended that as a bare minimum, one should set "max server memory (MB)" in sp_configure so that 2 GB or 10% is reserved for the OS (whichever comes first).  This is just a starting point though! Regarding I/O issues, Klaus talked about disk partition alignment, which can improve SQL I/O performance by up to 100%.  You should use a 64 KB NTFS cluster size, and alignment is automatic in Windows 2008 R2.
    Joe's locking and blocking presentation was a good session to really clear up the fog in my mind about locking.  One takeaway that I had no idea could be done was that you can set a timeout in T-SQL code via SET LOCK_TIMEOUT.  If you do this via the application, you should trap error 1222.
    Kalen's session went into execution plans.  The minimum size of a plan is 24 KB.  This adds up fast, especially if you have a lot of plans that don't get reused much.  You can use sys.dm_exec_cached_plans to check how often a plan is being reused by checking the usecounts column.  She said that we can use DBCC FLUSHPROCINDB to clear out the stored procedure cache for a specific database.  I didn't know we had this available, so this was great to hear.  This will be less intrusive when an emergency comes up where I would otherwise have needed to run DBCC FREEPROCCACHE. Kalen said one should enable "optimize for ad hoc workloads" if you have an ad hoc workload.  This stores only a 300-byte stub of the first plan, and if it gets run again, it'll store the whole thing.  This helps with plan cache bloat.  I have a lot of systems that use prepared statements, and Kalen says we can simulate those calls by using sp_executesql.  Cool!
    Paul did a series of posts last year to debunk various myths and misconceptions around SQL Server.  He continues to debunk things via "DBA Mythbusters".  You can get a PDF of a bunch of these here.  One of the myths he went over is the number of tempdb data files that you should have.  Back in 2000, the recommendation was to have as many tempdb data files as there are CPU cores on your server.  This no longer holds true due to the numerous cores we have on our servers.  Paul says you should start out with 1/4 to 1/2 the number of cores and work your way up from there.  BUT!  
    Paul likes what Bob Ward (twitter) says on this topic:
    · 8 or fewer cores –> set the number of files equal to the number of cores
    · Greater than 8 cores –> start with 8 files and increase in blocks of 4
    One common myth out there is to set your MAXDOP to 1 for an OLTP workload with high CXPACKET waits.  Instead of that, dig deeper first.  Look for missing indexes and out-of-date statistics, increase the "cost threshold for parallelism" setting, and perhaps set MAXDOP at the query level.  Paul stressed that you should not plan a backup strategy but instead plan a restore strategy.  What are your recoverability requirements?  Once you know that, now plan out your backups.
    As Paul always does, he talked about DBCC CHECKDB.  He said how fabulous it is.  I didn't want to interrupt the presentation, so after his session had ended, I asked Paul about the need to run DBCC CHECKDB on your mirror systems.  You could have data corruption occur at the mirror and not at the principal server.  If you aren't checking for data corruption on your mirror systems, you could be failing over to a corrupt database in the case of a disaster or even a planned failover.  You can't run DBCC CHECKDB against the mirrored database, but you can run it against a snapshot off the mirrored database.
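    Just to restate Bob Ward's rule of thumb above in throwaway code form and make the arithmetic explicit (nothing here beyond what the two bullet points already say):

        def suggested_tempdb_files(cpu_cores):
            # 8 or fewer cores: one data file per core.
            # More than 8 cores: start with 8 files (and grow in blocks of 4 only if needed).
            return cpu_cores if cpu_cores <= 8 else 8

        for cores in (4, 8, 16, 32):
            print(cores, "cores ->", suggested_tempdb_files(cores), "tempdb data files to start with")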

    Read the article

  • When Your Favorite Video Game Characters go Trick-or-Treating [Video]

    - by Asian Angel
    Halloween has arrived and all of your favorite video game characters are out and about collecting lots of candy goodness. The question is whether or not all will be successful in collecting treats or if the tricks will be on them! Note: Video contains some language that may be considered inappropriate. Videogame Trick-or-Treating [Dorkly]

    Read the article

  • SQL University: What and why of database testing

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine, and we'll be talking about database testing and refactoring. In 3 posts we'll cover:
    SQLU part 1 - What and why of database testing
    SQLU part 2 - What and why of database refactoring
    SQLU part 3 - Tools of the trade
    With that out of the way, let us sharpen our pencils and get going.
    Why test a database
    The sad state of the industry today is that there is very little emphasis on testing in general. Test-driven development is still a small niche of the programming world, while refactoring is even smaller. The cause of this is the inability of developers to convince themselves and their managers that writing tests is beneficial. At the moment tests are mostly viewed as a waste of time. This is because the average person (let's not fool ourselves, we're all average) is unable to weigh lower future costs against a little more current work. It's orders of magnitude easier to know about the current costs in relation to the current amount of work. That's why programmers convince themselves testing is a waste of time. However, we have to ask ourselves what tests are really about. Maybe finding bugs? No, not really. If we introduce bugs, we're likely to write tests around those bugs too. But yes, we can find some bugs with tests. The main point of tests is to have reproducible repeatability in our systems. By having a code base largely covered by tests we can know with better certainty what a small code change can break in other parts of the system. By having repeatability we can make code changes with confidence, since we know we'll see what breaks in other tests. And here comes the inability to estimate future costs: by spending just a few more hours writing those tests, we'd know instantly what broke where.
    Imagine we fix a reported bug. We check in the code, deploy it, and the users are happy. Until we get a call 2 weeks later that a certain monthly process has stopped working. What we don't know is that this process was developed by a long-gone coworker and for some reason it relied on that same bug we've happily fixed. There's no way we could've known that. We say OK and go in and fix the monthly process. But what we have no clue about is that there's this ETL job that relied on data from that monthly process. Now that we've fixed the process, it's giving unexpected (yet correct, since we fixed it) data to the ETL job. So we have to fix that too. But there's this part of the app we coded that relies on data from that exact ETL job. And just like that we enter the "Loop of maintenance horror". With the loop eventually comes blame. Here's a nice tip for all developers and DBAs out there: if you make a mistake, man up and admit to it.
    All of the above is valid for any kind of software development. Keeping this in mind, the database is nothing other than just a part of the application. But a big part! One reason why testing a database is even more important than testing an application is that one database is usually accessed from multiple applications and processes. This makes it the central and vital part of the enterprise software infrastructure. Knowing all this, can we really afford not to have tests? 
    What to test in a database
    Now that we've decided we'll dive into this testing thing, we have to ask ourselves what needs to be tested. The short answer is: everything. The long answer is: read on!
    There are 2 main ways of doing tests: black box and white box testing. Black box testing means we have no idea how the system internals are built and we only have access to its inputs and outputs. With it we test that internal changes to the system haven't caused the input/output behavior of the system to change. The most important things to test here are the edge conditions; that's where most programs break. Having good edge-condition tests, we can be more confident that system changes won't break anything. White box testing has full knowledge of the system internals. With it we test the internal system changes, different states of the application, etc… White and black box tests should be complementary to each other, as they are very much interconnected.
    Testing database routines includes testing stored procedures, views, user-defined functions, and anything else you use to access the data. Database routines are your input/output interface to the database system, so they count as black box testing. We test them for 2 things: data and schema. When testing the schema we only care about the columns and the data types they're returning. After all, the schema is the contract to the outside systems; if it changes, we usually have to change the applications accessing it. One helpful T-SQL command when doing schema tests is SET FMTONLY ON. It tells SQL Server to return only empty result sets, which speeds up tests because no data is returned to the client. After we've validated the schema we have to test the returned data. There is no other way to do this than to have the expected data known before the test executes and to compare that data to the database routine's output.
    Testing authentication and authorization helps us validate who has access to the SQL Server box (authentication) and who has access to certain database objects (authorization). For desktop applications and Windows authentication this works well. But the biggest problem here are web apps: they usually connect to the database as a single user. Please ensure that that user is not SA or an account with admin privileges. That is just bad.
    Load testing ensures that our database can handle peak loads. One often-overlooked tool for load testing is Microsoft's OSTRESS tool. It's part of the RML utilities (x86, x64) for SQL Server and can help determine if our database server can handle loads like 100 simultaneous users each doing 10 requests per second. SQL Profiler can also help us here by looking at why certain queries are slow and what to do to fix them.
    One particular problem to think about is how to begin testing existing databases. The first thing we have to do is get to know those databases. We can't test something when we don't know how it works. To do this we have to talk to the users of the applications accessing the database, run SQL Profiler to see what queries are being run, use existing documentation to decipher all the object relationships, etc… The way to approach this is to choose one part of the database (say a logical grouping of tables that go together) and filter our traces accordingly. Once we've done that we move on to the next grouping, and so on, until we've covered the whole database. Then we move on to the next one. 
    Database testing is a topic that we could spend many hours discussing, but let this be a nice intro to the world of database testing. See you in the next post.
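    To make the black-box idea concrete, here is a minimal sketch of a schema-and-data test for a database routine, written with Python's unittest and pyodbc rather than a T-SQL test framework; the DSN, procedure name, and expected values are all hypothetical placeholders:

        import unittest
        import pyodbc  # assumes an ODBC driver/DSN for the test database is configured

        class GetActiveCustomersTest(unittest.TestCase):
            def setUp(self):
                self.conn = pyodbc.connect("DSN=TestDB")  # hypothetical DSN
                self.cur = self.conn.cursor()

            def tearDown(self):
                self.conn.close()

            def test_schema_contract(self):
                # Schema check: the columns returned by the routine must not drift.
                self.cur.execute("EXEC dbo.GetActiveCustomers")  # hypothetical procedure
                columns = [col[0] for col in self.cur.description]
                self.assertEqual(columns, ["CustomerId", "Name", "Status"])

            def test_expected_data(self):
                # Data check against a known, pre-seeded test row.
                self.cur.execute("EXEC dbo.GetActiveCustomers")
                rows = {tuple(r) for r in self.cur.fetchall()}
                self.assertIn((42, "Acme Corp", "Active"), rows)

        if __name__ == "__main__":
            unittest.main()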

    Read the article

  • Virtual Box - How to open a .VDI Virtual Machine

    - by [email protected]
    TUESDAY, APRIL 27, 2010
    How to open a .VDI Virtual Machine
    Sometimes someone shares a virtual machine with us as a file with the .VDI extension, and we may wonder how to open it and what to do with it. Well, the answer is: it is a VirtualBox virtual machine. If you have not downloaded VirtualBox yet, you can do so easily; just follow this post: http://listeningoracle.blogspot.com/2010/04/que-es-virtualbox.html or http://oracleoforacle.wordpress.com/2010/04/14/ques-es-virtualbox/
    OK, now with VirtualBox installed, open it and proceed with the following:
    1. Open the Virtual Media Manager.
    2. Click Actions > Add, select the .VDI file, and click "Ok".
    3. Now we can register the new virtual machine: click New, then click Next.
    4. Write down a name for the virtual machine and proceed to select an operating system and version (in this case Linux, Oracle Enterprise Linux or RedHat). Click Next.
    5. Select the base memory amount for the virtual machine (minimum 1280 MB in our case). Click Next.
    6. Select the disk 11GR2_OEL5_32GB.vdi that was added in the Virtual Media Manager in step 2. Don't forget to leave "Boot Hard Disk (Primary Master)" selected, given it is the only disk assigned to the virtual machine. Click Next.
    7. Click Finish.
    8. This step is important: once done, click on the Settings button.
    9. On the General option, click Advanced settings. Here you must change the default directory where your snapshots are saved; my recommendation is to set it to the same directory where the .VDI file is. Otherwise you can end up with the virtual machine and its snapshots in different paths.
    10. Now click on System and proceed to assign the correct memory (if you did not before). Note: enable "Enable IO APIC" if you are planning to assign more than one CPU to the virtual machine. Then define the processors for the virtual machine; if your processor is dual core, choose 2.
    11. Select the video memory amount you want to assign to the virtual machine.
    12. Associate more storage disks with the virtual machine if you have more VDI files (not our case). The disk must be selected as IDE Primary Master.
    13. Well, you can verify the other options, but with these changes you will be able to start the VM. Note: sometimes the VM owner may share some instructions; if so, follow them.
    14. Finally, start the virtual machine (click Start).

    Read the article

  • More Changes...

    - by MOSSLover
    Stuff has changed drastically for me in the past two to three years.  I moved over 1000 miles from Saint Louis.  I go outside and I get up in front of crowds with fewer issues.  Now I'm changing jobs again.  I'm not really sure what to say here.  I was obviously unhappy and I needed to do something different.  So I quit two days ago, and I guess it worked out that I end with B&R this Friday, then head to TEC and SPS Huntsville, and a week from this Monday I start my new job at Gig Werks.  I'm not sure what to expect or where I'm heading, but I think it's a step in the right direction.  I won't really know what kind of impact this will have on my life for at least another 6 months to a year.
    For some reason I can't sleep tonight and I think it's really a reflection of my last day.  Tomorrow is an ending and a beginning at the same time.  So it's both kind of sad and exciting.  I don't know why I'm really excited to go to Disney Land for the second time ever in my lifetime.  I get to ride the Teacups.  For the longest time when I was a kid I wanted to go to Disney Land.  I wanted to ride the teacups.  In 2007, at the age of 25, I rode the teacups for my first ever visit to LA.  That was the start of finally syncing up with my childhood goals.  I wanted to live near a major city.  I wanted to visit all the major cities in the world.  I wanted to see everything and meet everyone.  This job change will probably turn into something great; I just don't know it yet.  I'm walking again outside my comfort zone and stepping into uncharted territory.  In 2-3 years I'll probably write another blog post about how this week led to something great.  It just stinks when you have to leave behind something you know and love.  I will miss all my current colleagues, but I'm sure I'll gain some new ones and keep in touch with the old.  To 2010 being a great year for change, and hopefully by the end of the year I can say I went to Europe.  To reaching my goals and my dreams.  Don't let anyone stop you from getting what you want in life (unless you are an axe murderer; please don't kill anyone, that's just wrong).  Have a good weekend everyone!

    Read the article

  • Where do I have to paste an "xinput" command so that it executes it when GNOME is started?

    - by michuk
    On my Thinkpad I need to execute something like this in the terminal:
    xinput set-int-prop "TPPS/2 IBM TrackPoint" "Evdev Middle Button Emulation" 8 1
    so that the 2 buttons on my touchpad emulate the middle mouse click. Now I need this line to be executed each time I start GNOME or X or whatever, so that it "just works". I tried ~/.xsession and ~/.bashrc, but to no avail. Should I put it in the GNOME start scripts or in /etc/X somewhere? I'm using Ubuntu 11.10.

    Read the article

  • How to run node.js app on port 80? Are processes blocking my port?

    - by Lucas
    I believe the port 80 on my remote instance is blocked, and I am trying to run a node.js app using port 80. I have experimented with ports 3000 and 3002, and both ports are working fine, but I get an error when running on port 80. I suspect port 80 is blocked from my output of netstat -an below, but how can I find the process id's of the addresses that are blocking port 80 below?
    [lucas@ecoinstance]~/node/nodetest1$ netstat -an
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State
    tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:3002            0.0.0.0:*               LISTEN
    tcp        0      0 127.0.0.1:27017         127.0.0.1:51108         ESTABLISHED
    tcp        0      0 127.0.0.1:51106         127.0.0.1:27017         ESTABLISHED
    tcp        0      0 127.0.0.1:27017         127.0.0.1:51106         ESTABLISHED
    tcp        0      0 127.0.0.1:51107         127.0.0.1:27017         ESTABLISHED
    tcp        0      0 10.240.241.116:3002     174.61.171.61:36583     TIME_WAIT
    tcp        0      0 127.0.0.1:27017         127.0.0.1:51109         ESTABLISHED
    tcp        0      0 10.240.241.116:42423    169.254.169.254:80      ESTABLISHED
    tcp        0      0 127.0.0.1:51108         127.0.0.1:27017         ESTABLISHED
    tcp        0    532 10.240.241.116:22       174.61.171.61:56824     ESTABLISHED
    tcp        0      0 127.0.0.1:27017         127.0.0.1:51107         ESTABLISHED
    tcp        0      0 10.240.241.116:42412    169.254.169.254:80      ESTABLISHED
    tcp        0      0 127.0.0.1:51109         127.0.0.1:27017         ESTABLISHED
    tcp        0      0 127.0.0.1:51105         127.0.0.1:27017         ESTABLISHED
    tcp        0      0 10.240.241.116:42422    169.254.169.254:80      TIME_WAIT
    tcp        0      0 127.0.0.1:27017         127.0.0.1:51105         ESTABLISHED
    tcp6       0      0 :::22                   :::*                    LISTEN
    udp        0      0 0.0.0.0:49948           0.0.0.0:*
    udp        0      0 0.0.0.0:68              0.0.0.0:*
    udp        0      0 10.240.241.116:123      0.0.0.0:*
    udp        0      0 127.0.0.1:123           0.0.0.0:*
    udp        0      0 0.0.0.0:123             0.0.0.0:*
    udp6       0      0 :::12151                :::*
    udp6       0      0 :::123                  :::*
    Active UNIX domain sockets (servers and established)
    Proto RefCnt Flags       Type       State         I-Node   Path
    unix  2      [ ACC ]     STREAM     LISTENING     405680   /tmp/ssh-KdkxJfFLpKTC/agent.22813
    unix  2      [ ACC ]     STREAM     LISTENING     408230   /tmp/ssh-ofUeNNEwAqtP/agent.22243
    unix  2      [ ACC ]     STREAM     LISTENING     416227   /tmp/mongodb-27017.sock
    unix  2      [ ACC ]     SEQPACKET  LISTENING     3692     /run/udev/control
    unix  7      [ ]         DGRAM                    5286     /dev/log
    unix  2      [ ACC ]     STREAM     LISTENING     5318     /var/run/acpid.socket
    unix  2      [ ACC ]     STREAM     LISTENING     16170    /tmp//tmux-1000/default
    unix  2      [ ACC ]     STREAM     LISTENING     414450   /var/run/dbus/system_bus_socket
    And here is the log when trying to run on port 80 with node.js:
    [lucas@ecoinstance]~/node/nodetest1$ npm start
    > [email protected] start /home/lucas/node/nodetest1
    > node ./bin/www
    events.js:72
            throw er; // Unhandled 'error' event
                  ^
    Error: listen EACCES
        at errnoException (net.js:904:11)
        at Server._listen2 (net.js:1023:19)
        at listen (net.js:1064:10)
        at Server.listen (net.js:1138:5)
        at Function.app.listen (/home/lucas/node/nodetest1/node_modules/express/lib/application.js:532:24)
        at Object.<anonymous> (/home/lucas/node/nodetest1/bin/www:7:18)
        at Module._compile (module.js:456:26)
        at Object.Module._extensions..js (module.js:474:10)
        at Module.load (module.js:356:32)
        at Function.Module._load (module.js:312:12)
    npm ERR! [email protected] start: `node ./bin/www`
    npm ERR! Exit status 8
    npm ERR!
    npm ERR! Failed at the [email protected] start script.
    npm ERR! This is most likely a problem with the nodetest1 package,
    npm ERR! not with npm itself.
    npm ERR! Tell the author that this fails on your system:
    npm ERR!     node ./bin/www
    npm ERR! You can get their info via:
    npm ERR!     npm owner ls nodetest1
    npm ERR! There is likely additional logging output above.
    npm ERR! System Linux 3.13-0.bpo.1-amd64
    npm ERR! command "/usr/local/bin/node" "/usr/local/bin/npm" "start"
    npm ERR! cwd /home/lucas/node/nodetest1
    npm ERR! node -v v0.10.28
    npm ERR! npm -v 1.4.9
    npm ERR! code ELIFECYCLE
    npm ERR!
    npm ERR! Additional logging details can be found in:
    npm ERR!     /home/lucas/node/nodetest1/npm-debug.log
    npm ERR! not ok code 0
    And sudo netstat -lnp does not return any matching port 80's:
    [lucas@ecoinstance]~/node/nodetest1$ sudo netstat -lnp
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      29160/mongod
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1976/sshd
    tcp6       0      0 :::22                   :::*                    LISTEN      1976/sshd
    udp        0      0 0.0.0.0:49948           0.0.0.0:*                           1604/dhclient
    udp        0      0 0.0.0.0:68              0.0.0.0:*                           1604/dhclient
    udp        0      0 10.240.241.116:123      0.0.0.0:*                           2076/ntpd
    udp        0      0 127.0.0.1:123           0.0.0.0:*                           2076/ntpd
    udp        0      0 0.0.0.0:123             0.0.0.0:*                           2076/ntpd
    udp6       0      0 :::12151                :::*                                1604/dhclient
    udp6       0      0 :::123                  :::*                                2076/ntpd
    Active UNIX domain sockets (only servers)
    Proto RefCnt Flags       Type       State         I-Node   PID/Program name    Path
    unix  2      [ ACC ]     STREAM     LISTENING     405680   22814/ssh-agent     /tmp/ssh-KdkxJfFLpKTC/agent.22813
    unix  2      [ ACC ]     STREAM     LISTENING     408230   24049/ssh-agent     /tmp/ssh-ofUeNNEwAqtP/agent.22243
    unix  2      [ ACC ]     STREAM     LISTENING     416227   29160/mongod        /tmp/mongodb-27017.sock
    unix  2      [ ACC ]     SEQPACKET  LISTENING     3692     284/udevd           /run/udev/control
    unix  2      [ ACC ]     STREAM     LISTENING     5318     1798/acpid          /var/run/acpid.socket
    unix  2      [ ACC ]     STREAM     LISTENING     16170    5177/tmux           /tmp//tmux-1000/default
    unix  2      [ ACC ]     STREAM     LISTENING     414450   28213/dbus-daemon   /var/run/dbus/system_bus_socket
    unix  2      [ ACC ]     STREAM     LISTENING     404225   22324/1             /tmp/ssh-9TlDmu4bjl/agent.22324

    Read the article
