Search Results

Search found 3171 results on 127 pages for 'manual manuel'.


  • Explaining Explain Plan Notes for Auto DOP

    - by jean-pierre.dijcks
    I've recently gotten some questions around "why do I not see a parallel plan while Auto DOP is on (I think)?"... It is probably worthwhile to quickly go over some of the ways to find out what Auto DOP was thinking. In general, there is no need to go tracing sessions and look under the hood. The thing to start with is to do an explain plan on your statement and to look at the parameter settings on the system.

    Parameter Settings to Look At

    First and foremost, make sure that parallel_degree_policy = AUTO. If you have that parameter set to LIMITED you will not have queuing, and we will only do the auto magic if your objects are set to default parallel (so no degree specified).

    Next you want to look at the value of parallel_degree_limit. It is typically set to CPU, which in default settings equates to the Default DOP of the system. If you are testing Auto DOP itself and the impact it has on performance, you may want to leave it at this CPU setting. If you are running concurrent statements you may want to give this some more thought. See here for more information. In general, do stick with either CPU or with a specific number. For now, avoid the IO setting as I've seen some mixed results with that...

    In 11.2.0.2 you should also check that IO Calibrate has been run. Best to simply do a:

      SQL> select * from V$IO_CALIBRATION_STATUS;
      STATUS        CALIBRATION_TIME
      ------------- ----------------------------------------------------------------
      READY         04-JAN-11 10.04.13.104 AM

    You should see that your IO Calibrate is READY and therefore Auto DOP is ready. In any case, if you did not run the IO Calibrate step you will get the following note in the explain plan:

      Note
      -----
         - automatic DOP: skipped because of IO calibrate statistics are missing

    One more note on calibrate_io: if you do not have asynchronous IO enabled you will see:

      ERROR at line 1:
      ORA-56708: Could not find any datafiles with asynchronous i/o capability
      ORA-06512: at "SYS.DBMS_RMIN", line 463
      ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 1296
      ORA-06512: at line 7

    While this is changed in some fixes to the calibrate procedure, you should really consider switching asynchronous IO on for your data warehouse.

    Explain Plan Explanation

    To see the notes that are shown and explained here (and the little snippet above) you can use a simple explain plan mechanism. There should be no need to add +parallel etc.:

      explain plan for <statement>;
      SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());

    Auto DOP

    The note structure displaying why Auto DOP did not work (with the exception noted above on IO Calibrate) is like this: Automatic degree of parallelism is disabled: <reason>. These are the reason codes:

    - Parameter - parallel_degree_policy = manual, which will not allow Auto DOP to kick in
    - Hint - One of the following hints is used: NOPARALLEL, PARALLEL(1), PARALLEL(MANUAL)
    - Outline - A SQL outline of an older version (before 11.2) is used
    - SQL property restriction - The statement type does not allow for parallel processing
    - Rule-based mode - Instead of the Cost Based Optimizer the system is using the RBO
    - Recursive SQL statement - The statement type does not allow for parallel processing
    - pq disabled/pdml disabled/pddl disabled - For some reason (alter session?) parallelism is disabled
    - Limited mode but no parallel objects referenced - Your parallel_degree_policy = LIMITED and no objects in the statement are decorated with the default PARALLEL degree.
    In most cases all objects have a specific degree, in which case Auto DOP will honor that degree.

    Parallel Degree Limited

    When Auto DOP does its work you may see the cap you imposed with parallel_degree_limit showing up in the note section of the explain plan:

      Note
      -----
         - automatic DOP: Computed Degree of Parallelism is 16 because of degree limit

    This is an obvious indication that you are being capped for this statement. There is one quite interesting case that happens when you are being capped at DOP = 1. First off, you get a serial plan, and the note changes slightly in that it does not indicate it is being capped (we hope to update the note at some point in time to be more specific). It right now looks like this:

      Note
      -----
         - automatic DOP: Computed Degree of Parallelism is 1

    Dynamic Sampling

    With 11.2.0.2 you will start seeing another interesting change in parallel plans, and since we are talking about the note section here, I figured we'd throw this in for good measure. If we deem the parallel (!) statement complex enough, we will enact dynamic sampling on your query. This happens as long as you did not change the default for dynamic sampling on the system. The note looks like this:

      Note
      -----
      - dynamic sampling used for this statement (level=5)
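    As a quick sanity check before reading the plan notes, you can pull the relevant parameter settings and the calibration status in one go. This is a minimal sketch using the standard dictionary views (the values returned are of course system-specific):

      -- Check the Auto DOP related parameter settings
      SELECT name, value
      FROM   v$parameter
      WHERE  name IN ('parallel_degree_policy',
                      'parallel_degree_limit',
                      'parallel_max_servers');

      -- Verify that IO calibration statistics exist (11.2.0.2)
      SELECT status, calibration_time
      FROM   v$io_calibration_status;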

    Read the article

  • A little primer on using TFS with a small team

    - by johndoucette
    The scenario: a small team of 3 developers, mostly in maintenance mode, with traditional ASP.NET, classic ASP, .NET integration services and utilities for the company's third-party packages, and a bunch of Java-based ColdFusion web applications, all under Visual SourceSafe (VSS). They are about to embark on a huge SharePoint 2010 new construction project and wanted to use Subversion instead of VSS. TFS was a foreign word and smelled of "high cost" and an "overcomplicated process". Since they had no preconceptions from the old TFS versions ('05 & '08), it was fun explaining how simple it was to install a TFS server and get the ball rolling, with or without all the heavy stuff one sometimes associates with such a huge and powerful application lifecycle management product. So, how does a small team begin using TFS?

    1. Start by using source control and migrate current VSS source trees into TFS. You can take the latest version or migrate the entire version history. It's up to you whether you want a clean start or need quick access to all the version notes and history of the bits.
    2. Since most shops are mainly in maintenance mode with existing applications, begin using bug workitems for everything. When you receive an issue/bug from your current tracking system, manually enter the workitem in TFS right through Visual Studio. You can automate the integration to the current tracking system later or replace it entirely. Believe me, this thing is powerful and can handle even the largest of help desks.
    3. With new construction, begin work with requirement and task workitems and follow the traditional sprint-based development lifecycle. Obviously, some minor training will be needed, but don't fear, this is very intuitive and MSDN has a ton of lesson-based labs and videos.
    4. For the Java developers, use the new Team Explorer Everywhere 2010 plugin (formerly known as Teamprise). There is a seamless interface in Eclipse, but also a good command-line utility for other environments such as Dreamweaver (see the sketch after this list).
    5. Wait to fully integrate the whole workitem/project management/testing process until your team is familiar with the integrated workitems for bugs and code. After a while, you will see the team wanting more transparency into the work they are all doing, and naturally, everyone will want workitems to help them organize the chaos!
    6. Management will be limited in the value of the reports until you have a fully blown implementation of project planning, construction, build, deployment and testing. However, there are some basic "bug rate" reports and current backlog listings that can provide good information.

    Some notable explanations of TFS:

    Work Item Tracking and Project Management - A workitem represents the unit of work within the system, which enables tracking of all activities produced by a user, whether it is a developer, business user, project manager or tester. The properties of a workitem, such as linked changesets (checked-in code), who updated the data and when, and the states and reasons for change, are all transitioned to a data warehouse within TFS for reporting purposes. A workitem can be defined as a "bug", "requirement", "test case", or a "change request". They drive the work effort by the individual assigned to them and also play a key role in defining what needs to be done. Workitems are the things the team needs to do to accomplish a goal.
    Test Case Management - Starting with a workitem known as a "test case", a tester (or developer) can now author and manage test cases within a formal test plan subsystem. Although TFS supports the test case workitem type, there is a new product known as Visual Studio Test Professional 2010 which allows a tester to facilitate manual tests, including fast-forwarding steps in the process to arrive at the assertion point quickly. This repeatable process provides quick regression tests and can be conducted by the business user to ensure completeness during UAT. In addition, developers can no longer respond to a bug with the line "cannot reproduce". With every test run, attachments are available, including the recorded session, captured environment configurations and settings, screen shots, IntelliTrace (debugging history), and, in some cases if the lab manager is being used, a snapshot of the tested environment.

    Version Control - A modern system allowing shared check-in/check-out, excellent merge conflict resolution, shelvesets (personal check-ins), branching/merging visualization, public workspaces, gated check-ins, security hierarchy capabilities, and changeset/workitem tracking. Knowing what was done with the code by any developer has become much easier to picture, which helps resolve issues.

    Team Build - Automate the compilation process, whether you need it to run whenever a developer checks in code, periodically (such as nightly builds for testers in the morning), or as manual builds to be deployed into production. Each build can run through pre-determined tests, perform code analysis to see if the developer conforms to the team standards, and reject the build if either fails.

    Project Portal & Reporting - Provide management with a dashboard with insight into the project(s): "where are we" in each step of the way, including past iterations and the current burndown rate. Enabling this feature is easy, as it seamlessly interfaces with existing SharePoint implementations.
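    As a rough illustration of point 4 above, the Team Explorer Everywhere command-line client lets the Java/ColdFusion folks work outside Eclipse. This is a sketch only; the server URL, workspace name and paths are hypothetical, and exact flag spellings vary by TEE version (check tf help):

      # Create a workspace against the team project collection
      tf workspace -new MyWorkspace -collection:http://tfsserver:8080/tfs/DefaultCollection

      # Map a server folder to a local folder and fetch the code
      tf workfold -map $/WebApps/ColdFusion ~/work/coldfusion -workspace:MyWorkspace
      tf get ~/work/coldfusion -recursive

      # Check in changes with a comment
      tf checkin -comment:"Fix login redirect" ~/work/coldfusion -recursive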

    Read the article

  • Data maintenance/migrations in image-based systems

    - by User
    Web applications usually have a database, and the code and the database work hand in hand. That is why frameworks like Ruby on Rails and Django create migration files. There are also servers written in Self or Smalltalk or other image-based systems that face the same problem, except code is not written on the server but in the programmer's separate image. How do these systems deal with a changing schema and changing classes/prototypes? Which way do the migrations go? Example: what is the process of a new attribute going from the programmer's idea to the server code and all objects? I found the GemStone/S manual, chapter 8, but it does not really talk about the process of shipping code to the server.
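    For contrast with the image-based flow, this is roughly what the file-based approach mentioned above looks like: a minimal sketch in modern Django migration syntax, where the app, model and field names are all hypothetical. Note that the change has to be expressed as a file precisely because it must reach both the class definition and the stored rows:

      # A hypothetical Django migration: add a "nickname" attribute to Person
      # and backfill existing rows. In an image-based system the same change
      # has to reach both the class definition and all live instances.
      from django.db import migrations, models

      def backfill_nicknames(apps, schema_editor):
          # Use the historical model, not the current class in the developer's image
          Person = apps.get_model("people", "Person")
          Person.objects.filter(nickname="").update(nickname="(none)")

      class Migration(migrations.Migration):
          dependencies = [("people", "0007_previous_migration")]

          operations = [
              migrations.AddField(
                  model_name="person",
                  name="nickname",
                  field=models.CharField(max_length=50, default=""),
              ),
              migrations.RunPython(backfill_nicknames, migrations.RunPython.noop),
          ]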

    Read the article

  • Help needed in customizing Ubuntu 13 distribution

    - by Bilal Wajid
    I have been using Ubuntu for over 3 years. I want to build a custom Ubuntu distribution, and I am trying to do so using Ubuntu 13.04. I have tried the following:
    - Remastersys - failed.
    - Relinux - failed.
    - Ubuntu Customization Kit - failed.
    - LiveCDCustomizationFromScratch (help.ubuntu.com) - failed.
    Kindly guide me on how to do this, given the failed attempts above. I think a manual on how to do it would be very helpful. Thanks and best wishes, B. Wajid

    Read the article

  • OBIEE 11.1.1 - Tips for In-place Upgrade from 11.1.1.6 to 11.1.1.7.x

    - by Ahmed Awan
    Tips:
    - Use the Test to Production (T2P) / cloning process (movement scripts). For example:
      - Clone the existing 11.1.1.6 environment.
      - Move the cloned copy to the new location / host (same 11.1.1.6.0 version at this point).
      - Patch the new location / host (11.1.1.6) to the 11.1.1.7 level.
      - Switch to production.
    - How to use the movement scripts for OBIEE:
      - 20.1 Introduction to the Movement Scripts; for details refer to: http://docs.oracle.com/cd/E29542_01/core.1111/e10105/clone.htm#CACHFECE
      - 21.4.7.1 Moving Oracle Business Intelligence to a New Target Environment; for details refer to: http://docs.oracle.com/cd/E29542_01/core.1111/e10105/testprod.htm#CHDIAEFA and http://docs.oracle.com/cd/E29542_01/core.1111/e10105/testprod.htm#BABGJGCF
    - Perform the in-place upgrade to 11.1.1.7.0 using the manual steps / Upgrade wizard; refer to: http://docs.oracle.com/cd/E28280_01/upgrade.1111/e16452/bi_plan.htm#BABECJJH
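    To make the cloning step concrete, the movement scripts referenced above are invoked roughly as follows. This is a sketch only - all paths are placeholders, and the full argument lists (especially for copyConfig/pasteConfig) are in the documentation linked above:

      # On the source host: archive the Middleware home binaries
      ./copyBinary.sh -javaHome $JAVA_HOME \
                      -archiveLoc /tmp/obiee_binaries.jar \
                      -sourceMWHomeLoc /u01/app/oracle/middleware

      # On the target host: restore the binaries to the new location
      ./pasteBinary.sh -javaHome $JAVA_HOME \
                       -archiveLoc /tmp/obiee_binaries.jar \
                       -targetMWHomeLoc /u01/app/oracle/middleware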

    Read the article

  • Ubuntu One Windows Client 3.0.1 - Sync not connecting

    - by Tweezak
    I've got sync working at home on Natty and I need to access files at work. I've just installed client 3.0.1 at work on Windows 7. It's finding my account and sees what devices are already set up for sync but it just repeatedly tries to sync (File sync starting...) and after a while fails (File Sync is disconnected.). This cycle loops endlessly. I'm suspicious that it's a problem with my proxy setup. My employer doesn't use a manual proxy configuration but instead uses an automatic configuration script at a specific http address. Does U1 recognize that kind of setup or is it only looking for a proxy server in the form of an address/port?

    Read the article

  • Installing Ubuntu on a Windows CE 6.0 EzBook PC (ez-72a)

    - by nmacholl
    I recently inherited a cheap EzBook PC and decided it might be fun to throw Ubuntu on it. I have a USB drive all set up, but for the life of me I can't get it to boot from USB! I've looked online and can't seem to find the motherboard details anywhere to access the BIOS, and most applications won't work on Windows CE, so I'm stuck. Is there an obscure Windows CE installer out there? It's really frustrating when I can't get to the BIOS - and I don't have a manual. If anyone knows anything I'd appreciate it!

    Read the article

  • The Virtues and Challenges of Implementing Basel III: What Every CFO and CRO Needs To Know

    - by Jenna Danko
    The Basel Committee on Banking Supervision (BCBS) is a group tasked with providing thought leadership to the global banking industry. Over the years, the BCBS has released volumes of guidance in an effort to promote stability within the financial sector. By effectively communicating best practices, the Basel Committee has influenced financial regulations worldwide. Basel regulations are intended to help banks:
    - More easily absorb shocks due to various forms of financial-economic stress
    - Improve risk management and governance
    - Enhance regulatory reporting and transparency
    In June 2011, the BCBS released Basel III: A global regulatory framework for more resilient banks and banking systems. This new set of regulations included many enhancements to previous rules and will have both short and long term impacts on the banking industry. Some of the key features of Basel III include:
    - A stronger capital base
      - More stringent capital standards and higher capital requirements
      - Introduction of capital buffers
    - Additional risk coverage
      - Enhanced quantification of counterparty credit risk
      - Credit valuation adjustments
      - Wrong-way risk
      - Asset Value Correlation Multiplier for large financial institutions
    - Liquidity management and monitoring
    - Introduction of a leverage ratio
    - Even more rigorous data requirements
    To implement these features banks need to embark on a journey replete with challenges. These can be categorized into three key areas: data, models and compliance.
    Data Challenges
    - Data quality - All standard dimensions of Data Quality (DQ) have to be demonstrated. Manual approaches are now considered too cumbersome and automation has become the norm.
    - Data lineage - Data lineage has to be documented and demonstrated. The PPT/Excel approach to documentation is being replaced by metadata tools. Data lineage has become dynamic due to a variety of factors, making static documentation outdated quickly.
    - Data dictionaries - A strong and clean business glossary is needed, with proper identification of business owners for the data.
    - Data integrity - A strong, scalable architecture with workflow tools helps demonstrate data integrity. Manual touch points have to be minimized.
    - Data relevance/coverage - Data must be relevant to all portfolios, and storage devices must allow for sufficient data retention. Coverage of both on and off balance sheet exposures is critical.
    Model Challenges
    - Model development - Requires highly trained resources with both quantitative and subject matter expertise.
    - Model validation - All Basel models need to be validated. This requires additional resources with skills that may not be readily available in the marketplace.
    - Model documentation - All models need to be adequately documented. Creation of document templates and model development processes/procedures is key.
    - Risk and finance integration - This integration is necessary for Basel, as the Allowance for Loan and Lease Losses (ALLL) is calculated by Finance, yet Expected Loss (EL) is calculated by Risk Management - and they need to somehow be equal. This is tricky at best from an implementation perspective.
    Compliance Challenges
    - Rules interpretation - Some Basel III requirements leave room for interpretation. A misinterpretation of regulations can lead to delays in Basel compliance and undesired reprimands from supervisory authorities.
    - Gap identification and remediation - Internal identification and remediation of gaps ensures smoother Basel compliance and audit processes.
      However, business lines are challenged by the competing priorities which arise from regulatory compliance and business-as-usual work.
    - Qualification readiness - Providing internal and external auditors with robust evidence of a thorough examination of the readiness to proceed to parallel run and Basel qualification.
    In light of new regulations like Basel III and local variations such as the Dodd-Frank Act (DFA) and the Comprehensive Capital Analysis and Review (CCAR) in the US, banks are now forced to ask themselves many difficult questions. For example, executives must consider:
    - How will Basel III play into their risk appetite?
    - How will they create project plans for Basel III when they haven't yet finished implementing Basel II?
    - How will new regulations impact capital structure, including profitability and capital distributions to shareholders?
    After all, new regulations often lead to diminished profitability as well as an assortment of implementation problems, as we discussed earlier in this note. However, by requiring banks to focus on premium growth, regulators increase the potential for long-term profitability and sustainability. And a more stable banking system:
    - Increases consumer confidence, which in turn supports banking activity
    - Ensures that adequate funding is available for individuals and companies
    - Puts regulators at ease, allowing bankers to focus on banking
    Stability is intended to bring long-term profitability to banks. Therefore, it is important that every banking institution takes the steps necessary to properly manage, monitor and disclose its risks. This can be done with the assistance and oversight of an independent regulatory authority. A spectrum of banks exists today wherein some continue to debate and negotiate with regulators over the implementation of new requirements, while others are simply choosing to embrace them for the benefits I highlighted above. Do share with me how your institution is coping with and embracing these new regulations.
    Dr. Varun Agarwal is a Principal in the Banking Practice for Capgemini Financial Services. He has over 19 years of experience in areas that span enterprise risk management; credit, market, and country risk management; financial modeling and valuation; and international financial markets research and analyses.

    Read the article

  • How can I switch memory modules to 1600 MHz?

    - by Salvador
    Some months ago I bought 4 memory modules of 4GB DDR3 1600 Kingston HyperX. The official Kingston manual says: "This module has been tested to run at DDR3-1600 at a low latency timing of 9-9-9-27 at 1.65V. The SPD is programmed to JEDEC standard latency DDR3-1333 timing of 9-9-9." I cannot find out what the real speed of my memory modules is. Several tools report that the speed is 1333 MHz:

      srs@ubuntu:~$ sudo dmidecode -t memory
      # dmidecode 2.9
      SMBIOS 2.6 present.

      Handle 0x005D, DMI type 16, 15 bytes
      Physical Memory Array
              Location: System Board Or Motherboard
              Use: System Memory
              Error Correction Type: None
              Maximum Capacity: 32 GB
              Error Information Handle: 0x005F
              Number Of Devices: 4

      Handle 0x005C, DMI type 17, 28 bytes
      Memory Device
              Array Handle: 0x005D
              Error Information Handle: 0x0060
              Total Width: 64 bits
              Data Width: 64 bits
              Size: 4096 MB
              Form Factor: DIMM
              Set: None
              Locator: ChannelA-DIMM0
              Bank Locator: BANK 0
              Type: <OUT OF SPEC>
              Type Detail: Synchronous
              Speed: 1333 MHz (0.8 ns)
              Manufacturer: Kingston
              Serial Number: 07288F23
              Asset Tag: 9876543210
              Part Number: 9905403-439.A00LF

    How can I switch the memory modules to 1600 MHz?
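    Note that since the SPD is programmed to DDR3-1333, boards will default to 1333 MHz; the 1600 MHz timings normally have to be enabled in the BIOS (via an XMP profile or manual timings). To inspect the SPD data itself from Linux, one option is the decode-dimms script from i2c-tools - a sketch, assuming the eeprom driver supports your chipset:

      # Install the tools and load the SPD EEPROM driver (chipset support varies)
      sudo apt-get install i2c-tools
      sudo modprobe eeprom

      # Dump the decoded SPD contents of all DIMMs
      decode-dimms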

    Read the article

  • What is really happening when we change encoding in a string?

    - by Jim Thio
    http://php.net/manual/en/function.mb-convert-encoding.php Say I do:

      $encoded = mb_convert_encoding($original, 'UTF-8');

    That looks simple enough. What I am imagining is the following: $original has a pointer to the way the string is actually encoded, something like a char * kind of thing. And then there is what the characters actually encode; it's probably something along the lines of UTF-32, where each glyph is indeed a single character. Now when we do

      $encoded = mb_convert_encoding($original, 'UTF-8');

    several things can happen:
    1. The original internal representation doesn't change; however, it is REINTERPRETED so that the characters that show up differ.
    2. The original string that it represents doesn't change; however, the ENCODING changes.
    Which one is right?
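    For illustration, the second interpretation is the one that matches PHP's behavior: a PHP string is just a byte sequence, and mb_convert_encoding returns a new byte sequence representing the same characters in the target encoding. A minimal sketch (the sample character and encodings are arbitrary):

      <?php
      // "é" in UTF-8 is the two bytes 0xC3 0xA9
      $original = "\xC3\xA9";

      // Convert from UTF-8 to ISO-8859-1: same character, new bytes
      $encoded = mb_convert_encoding($original, 'ISO-8859-1', 'UTF-8');

      echo bin2hex($original); // c3a9
      echo "\n";
      echo bin2hex($encoded);  // e9 - a single byte now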

    Read the article

  • Is there an easy and automatic way of converting a Windows XNA project into a MonoTouch MonoGame project?

    - by Krumelur
    I have just started with XNA development on Windows. But as I'm a fan of iOS, I had to try porting my test code over to MonoTouch on the Mac. I used these instructions: http://www.facepuncher.com/blogs/10parameters/?p=42 But this is so much (manual) work! And it really doesn't answer open questions like: why would I copy all the XNB files and, in addition, all the resources, like PNGs? Is there maybe a tool that automatically converts a Windows XNA project into a MonoTouch iOS project, or at least creates the correct folder structure?

    Read the article

  • Got back Hibernation option, but cannot resume from Hibernate

    - by harisibrahimkv
    In my Ubuntu 12.04, the hibernation option was working fine. However, I installed Debian on another partition recently, and when I tried to boot into Ubuntu again I got a message on the boot splash screen saying: The disk drive for / is not ready yet or not present. Continue to wait; or press S to skip mounting or M for manual recovery. After logging into Ubuntu, I found that my hibernation option had gone missing. Is there any way to recover the hibernation option? EDIT: I solved the disk drive problem and I got the hibernation option back. When I did "sudo pm-hibernate", my system went into hibernation. However, when powering on again, it booted up normally, so the hibernation had no effect. How can this be rectified? EDIT1: System - Lenovo IdeaPad S10-2. EDIT2: /etc/fstab
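    A common cause for this symptom is that the initramfs no longer points at the right swap partition for resume (installing another distro can renumber or reformat swap). A hedged checklist, assuming the standard Ubuntu paths:

      # Swap must exist and be at least as large as RAM for hibernation
      free -m && swapon -s

      # The resume device recorded for the initramfs should match the swap UUID
      cat /etc/initramfs-tools/conf.d/resume
      sudo blkid | grep swap

      # If they differ, update the file with the correct UUID and rebuild
      echo "RESUME=UUID=<swap-uuid-here>" | sudo tee /etc/initramfs-tools/conf.d/resume
      sudo update-initramfs -u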

    Read the article

  • Using JPA 2.0 with WebLogic Server 10.3.4 and Eclipse

    - by greg.stachnick
    Beginning in Oracle Enterprise Pack for Eclipse (OEPE) 11.1.1.6.1, we introduced a new feature for WebLogic Server configuration called Server Extensions. Similar in concept to project facets, Server Extensions allow us to install additional technologies, libraries, configurations, etc. into an existing server runtime. WebLogic Server 10.3.4 introduces new support for Java Persistence 2.0, the new Java EE 6 standard for entity access. In order to start developing JPA 2.0 applications with WebLogic Server 10.3.4, a SmartUpdate patch must be applied to add and configure the EclipseLink libraries. More information on the manual EclipseLink installation and configuration can be found here. OEPE provides a Server Extension for JPA 2.0, making the addition and configuration of JPA 2.0 and EclipseLink much easier. When defining a new WebLogic Server 10.3.4 configuration, simply click the Install link for Java Persistence 2.0 and OEPE will take care of the WebLogic Server enablement for JPA 2.0.
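    Once the extension is installed, a JPA 2.0 persistence unit can pin EclipseLink as its provider. A minimal persistence.xml sketch - the unit name and data source JNDI name are hypothetical:

      <?xml version="1.0" encoding="UTF-8"?>
      <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
        <persistence-unit name="examplePU" transaction-type="JTA">
          <!-- Explicitly select EclipseLink as the JPA 2.0 provider -->
          <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
          <!-- JNDI name of a data source configured in WebLogic (hypothetical) -->
          <jta-data-source>jdbc/ExampleDS</jta-data-source>
        </persistence-unit>
      </persistence>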

    Read the article

  • Configuring Eclipse on Xubuntu 12.04

    - by kyng
    Just installed Eclipse 3.7.1 on my Xubuntu 12.04. I used to have Eclipse installed on my 10.04, where you could choose New Java Project, Java source files, etc., but this version doesn't have these options. If I make a .java file, it's just plain text, no highlighting and no chance to compile. I have installed eclipse-jdt. I looked at this manual: https://help.ubuntu.com/community/EclipseIDE It says to modify the /etc/eclipse/java_home file but there is no such file on my system, just eclipse.ini. Am I missing a step here or have I encountered some sort of a bug?

    Read the article

  • What is the need for 'discoverability' in a REST API when the clients are not advanced enough to make use of it anyway?

    - by aditya menon
    The various talks I have watched and tutorials I scanned on REST seem to stress something called 'discoverability'. To my limited understanding, the term seems to mean that a client should be able to go to http://URL - and automatically get a list of things it can do. What I am having trouble understanding is this: 'software clients' are not human beings. They are just programs that do not have the intuitive knowledge to understand what exactly to do with the links provided. Only people can go to a website, make sense of the text and links presented, and act on them. So what is the point of discoverability, when the client code that accesses such discoverable URLs cannot actually do anything with it, unless the human developer of the client actually experiments with the resources presented? This looks like the exact same thing as defining the set of available functions in a documentation manual, just from a different direction, and it actually involves more work for the developer. Why is this second approach, pre-defining what can be done in a document external to the actual REST resources, considered inferior?
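    For reference, discoverability usually means responses that embed the available actions as links. A minimal sketch of such a response (the resource and link relations here are illustrative, in the style of HAL):

      {
        "id": 42,
        "status": "pending",
        "_links": {
          "self":    { "href": "/orders/42" },
          "cancel":  { "href": "/orders/42/cancel" },
          "payment": { "href": "/orders/42/payment" }
        }
      }

    A client still has to be programmed against the meaning of "cancel" or "payment", but it no longer hard-codes the URLs themselves, which is the part the server remains free to change.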

    Read the article

  • Set proxy for VPN server on Ubuntu Server 12.04

    - by Morteza Soltanabadiyan
    I have a VPN server with HTTPS, L2TP, OpenVPN and PPTP. I want to set a proxy on the server so that all connections coming from VPN clients use the proxy I set on the server. I made a bash script for it, but the proxy is not working:

      gsettings set org.gnome.system.proxy mode 'manual'
      gsettings set org.gnome.system.proxy.http enabled true
      gsettings set org.gnome.system.proxy.http host 'cproxy.anadolu.edu.tr'
      gsettings set org.gnome.system.proxy.http port 8080
      gsettings set org.gnome.system.proxy.http authentication-user 'admin'
      gsettings set org.gnome.system.proxy.http authentication-password 'admin'
      gsettings set org.gnome.system.proxy use-same-proxy true
      export http_proxy=http://admin:admin@cproxy.anadolu.edu.tr:8080
      export https_proxy=http://admin:admin@cproxy.anadolu.edu.tr:8080
      export HTTP_PROXY=http://admin:admin@cproxy.anadolu.edu.tr:8080
      export HTTPS_PROXY=http://admin:admin@cproxy.anadolu.edu.tr:8080

    Now I don't know what to do to make a global proxy for the server that all VPN clients use automatically.
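    Worth noting: gsettings only configures the GNOME desktop session on the server itself, and exported environment variables only affect that shell, so neither reaches VPN clients. What comes closer to "global" is redirecting client traffic at the network layer. A hedged sketch, assuming VPN clients arrive on ppp+/tun+ interfaces and a local squid-style proxy listens on port 3128 in transparent mode (an upstream proxy that requires authentication cannot be used this way):

      # Allow forwarding for VPN clients
      echo 1 > /proc/sys/net/ipv4/ip_forward

      # Redirect plain HTTP from the VPN interfaces to the transparent proxy port
      iptables -t nat -A PREROUTING -i ppp+ -p tcp --dport 80 -j REDIRECT --to-port 3128
      iptables -t nat -A PREROUTING -i tun+ -p tcp --dport 80 -j REDIRECT --to-port 3128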

    Read the article

  • Rel = translation

    - by Tom Gullen
    I can't find much online about rel="translation". We have tutorials and manual entries which we are going to get users to translate. If the original page in English is: http://www.scirra.com/tutorial/start and there are two translations: http://www.scirra.com/tutorial/es/start (Spanish) and http://www.scirra.com/tutorial/de/start (German), how would I correctly link all these up? I'm aware that at the top of the page I would need to specify the correct ISO 639-1 code: <html lang="de"> But I'm more interested in letting Google know they are not duplicates but are translated versions.
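    For what it's worth, the mechanism Google documents for this is rel="alternate" with an hreflang attribute, rather than rel="translation". A sketch of the <head> markup each version of the page would carry (the same block on all three URLs):

      <link rel="alternate" hreflang="en" href="http://www.scirra.com/tutorial/start" />
      <link rel="alternate" hreflang="es" href="http://www.scirra.com/tutorial/es/start" />
      <link rel="alternate" hreflang="de" href="http://www.scirra.com/tutorial/de/start" />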

    Read the article

  • Configure mounting timeout at boot

    - by René Pieters
    On remotely rebooting a 12.04 machine I found it hanging at the "unable to mount soandso: Skip, Manual, Abort?" prompt (that's pretty much how I remember the message). The machine basically stopped there until I hooked up a keyboard and pressed "s". I can see the rationale for the question, but I'd really like to know where to configure it or turn it off altogether. A mandatory question like this makes sense in a desktop environment, but for servers I'd like more flexibility. So where do I fiddle and tweak this?
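    On 12.04 this prompt comes from mountall, and it can be avoided per filesystem in /etc/fstab. A sketch with a hypothetical data mount - nobootwait lets the boot continue without waiting, while nofail additionally suppresses the error if the device is absent:

      # /etc/fstab entry (device and mount point are placeholders)
      UUID=0000-0000  /srv/data  ext4  defaults,nobootwait  0  2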

    Read the article

  • What You Said: How You Customize Your Computer

    - by Jason Fitzpatrick
    Earlier this week we asked you to share the ways you customize your computing experience. You sounded off in the comments and we rounded up your tips and tricks to share. Read on to see how your fellow readers personalize their computers. It would seem the first stop on just about everyone's customization route is stripping away the bloat/crapware. Lisa Wang writes: Depending on how much time I have when I receive my new machine, I might do the following in a few batches, starting with the simplest one. Usually, my list goes like this:
    1. Remove all bloatware and pretty much all unneeded stuff.
    2. Change my wallpaper, login screen, themes, and sounds.
    3. Install my 'must-have' software, starting with Fences and RocketDock + Stacks plugin.
    4. Set the taskbar to autohide, pinning some apps there.
    5. Install additional languages.
    6. Tweak all settings and keyboard shortcuts to my preference.
    7. Change the icons (either manually or with TuneUp Styler).
    Interface tweaks like the aforementioned Fences and RocketDock made quite a few appearances, as did Rainmeter.

    Read the article

  • What label of tests are BizUnit tests?

    - by charlie.mott
    BizUnit is defined as a "Framework for Automated Testing of Distributed Systems." However, I've never seen a catchy label to describe what sort of tests we create using this framework. They are not really "Unit Tests", that's for sure. "Integration Tests" might be a good definition, but I want a label that clearly separates it from the manual "System Integration Testing" phase of a project where real instances of the integrated systems are used. Among some colleagues, we brainstormed some suggestions:
    - Automated Integration Tests
    - Stubbed Integration Tests
    - Sandbox Integration Tests
    - Localised Integration Tests
    All give a good view of the sorts of tests that are being done. I think "Stubbed Integration Tests" is the most catchy and descriptive, so I will use that until someone comes up with a better idea.

    Read the article

  • Unable to Install GRUB in /dev/sda on raid drives

    - by Henry
    I'm trying to install 12.04 LTS on a server but keep running into the "Unable to install GRUB in /dev/sda" error. The drives are in RAID 1 and I'm using fakeraid on a Supermicro motherboard, which according to the manual is fully supported. I've tried installing both from USB and CD-R but still no luck. I'm not dual booting with any other OS, just using 2x 320 GB drives, and I have been choosing to install using the entire disk. Any ideas what I'm doing wrong or can do to fix this? Thanks
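    One thing worth checking, offered as a guess rather than a confirmed fix: with fakeraid the array is assembled by dmraid, so GRUB generally has to be installed to the mapped array device rather than to /dev/sda. From a live session, something along these lines (the array name is system-specific):

      # List the dmraid-assembled arrays
      sudo dmraid -r
      ls /dev/mapper/

      # Install GRUB to the array device reported above (name is a placeholder)
      sudo grub-install /dev/mapper/isw_xxxxxxxx_Volume0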

    Read the article

  • How to get working cups command line tools on Server 14.04

    - by Nick
    It looks like some of the commands like lpr and lprm have broken versions that don't work with CUPS. These commands worked properly on 10.04. The CUPS lpr has an -o option, but no lpr is installed when cups is installed, and the lpr installed with apt-get install lpr does not have the -o option and does not appear to be the CUPS version of lpr. man lpr shows "BSD General Commands Manual" at the top, where man lpr on the Ubuntu 10.04 server said "Apple, Inc." in the same spot, which leads me to believe the "wrong" lpr is in the lpr package or package names got moved around. There is also a lprng package, but trying to install it wants to remove cups and cups-client. lprm also returns lprm: PrinterName: unknown printer when PrinterName is in fact a valid printer installed with CUPS and does appear in lpstat -t. How do I get the proper CUPS versions of lpr working on Server 14.04?
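    If memory serves, the CUPS builds of the BSD printing commands live in their own package, so something like this should swap them in (the plain lpr package conflicts with it and gets removed):

      # Remove the plain BSD lpr and install the CUPS-aware lpr/lprm/lpq
      sudo apt-get remove lpr
      sudo apt-get install cups-bsd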

    Read the article

  • 12.04 Dual Monitor Configuration changes to "Mirrored" automatically

    - by Josef
    After installing 12.04 on my Samsung X120 notebook (with Intel GMA 4500 internal graphics), I have a problem with the configuration of my 2nd monitor. It's a 22" display with 1680x1050 pixels connected via HDMI. I use the display configuration and set internal display = off and external display = main display. That works fine. After rebooting I still have that configuration, but after a while (often when I press a button on the keyboard), the configuration automatically changes to "Mirror Screen" and the internal display switches on. Both displays then have a bad resolution. When I do the manual display configuration again, everything is fine for the duration of that session. What could be the problem, that this display configuration gets lost / resets?
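    As a stopgap while hunting for the cause, the layout can be re-applied from a script instead of the settings dialog. A sketch using xrandr - the output names below are guesses and should be taken from a plain xrandr call first:

      # List connected outputs and their names
      xrandr

      # Turn the internal panel off and make the HDMI monitor primary (names vary)
      xrandr --output LVDS1 --off --output HDMI1 --mode 1680x1050 --primary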

    Read the article

  • Strategy for building an application to replace a large spreadsheet

    - by Dan Walmsley
    I'm working on an application that is going to replace a rather large spreadsheet. The spreadsheet is used to budget purchases and things like that. It is the largest spreadsheet I have ever seen, and it requires a lot of manual data entry, so this application is going to automate much of that. But as I'm working on this I've noticed it's slow going. And I got to thinking this must be a common situation: many companies will start with something like a spreadsheet, then when they get too big to maintain it, they will get a custom application built. So is there anything out there (a framework or similar) that does this sort of thing, migrating a spreadsheet to a custom application? I've had a quick Google but haven't really seen the kind of thing that I'm looking for. It's too late for this project, but I thought it would be worth having a look for next time. How do you guys tackle this problem?

    Read the article

  • Cannot mount a CIFS network share on Ubuntu over VPN

    - by Aron Rotteveel
    I have set up a VPN connection to our Windows 2008 server at the office and it seems to work fine. For some reason, however, I am still not able to access the network shares over the VPN connection using my standard fstab entries. When I am physically connected to the network it works fine, but when trying this over VPN I get the following error:

      mount error(110): Connection timed out
      Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    My /etc/fstab looks like this:

      //server2008/share /mnt/share cifs iocharset=utf8,credentials=/home/aron/.smbcredentials,uid=1000 0 0

    As said, it works fine when physically connected, but over VPN it just won't work. Any help is appreciated.
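    A frequent culprit here is NetBIOS name resolution, which usually breaks over a routed VPN even when the share itself is reachable. A sketch of the usual workaround - the IP address below is a placeholder for the server's real address on the office network:

      # Test whether the server resolves and responds over the VPN
      ping server2008
      ping 192.168.1.10

      # If only the IP works, pin the name in /etc/hosts...
      echo "192.168.1.10  server2008" | sudo tee -a /etc/hosts

      # ...or mount by IP directly in /etc/fstab
      //192.168.1.10/share /mnt/share cifs iocharset=utf8,credentials=/home/aron/.smbcredentials,uid=1000 0 0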

    Read the article
