Search Results

Search found 9124 results on 365 pages for 'general dba'.


  • Can I use Twig and Doctrine in my project, which is licensed under the GPL?

    - by aRagnis
    Can I license my open-source CMS under the GPL v2/v3 if it uses Twig (BSD License) and Doctrine (LGPL)? I also want to know whether I have to put this text at the beginning of all my source files:
     * This file is part of Foobar.
     *
     * Foobar is free software: you can redistribute it and/or modify
     * it under the terms of the GNU General Public License as published by
     * the Free Software Foundation, either version 3 of the License, or
     * (at your option) any later version.
     *
     * Foobar is distributed in the hope that it will be useful,
     * but WITHOUT ANY WARRANTY; without even the implied warranty of
     * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
     * GNU General Public License for more details.
     *
     * You should have received a copy of the GNU General Public License
     * along with Foobar. If not, see <http://www.gnu.org/licenses/>.
    ...or can I do it like phpBB does?
     /**
      * @package mcp
      * @version $Id$
      * @copyright (c) 2005 phpBB Group
      * @license http://opensource.org/licenses/gpl-license.php GNU Public License
      */

    Read the article

  • How big can I make an Android application's canvas in terms of pixels?

    - by user279112
    I've determined an estimate of the size of my Android emulator's screen in pixels, although I think its resolution can be changed to other values. Frankly, though, that doesn't eliminate the general problem of not knowing how many pixels I have to work with on each axis in my Android applications. The main problem I'm trying to solve is this: how do I make sure I don't use a faulty resolution in an Android application if I want to keep object sizes constant (so that if the application screen shrinks, for instance, objects still show up just as big - there just won't be as many of them visible), using a single universal resolution for each program? Failing that, how do I make sure everything is alright if I do everything the same way with a few different pre-set resolutions? The question that seems to need answering before I can tackle the general problem is: how big can I always make my application in pixels, regardless of whether a user later resizes the application's screen to something smaller than the maximum size permitted by the phone and its operating system? I really want to keep this simple. If I were doing this for a modern desktop, for instance, I know that if I design the application with an 800x600 canvas, the user can still shrink the application to the point where they're not doing themselves any favors, but at least I can count on it working and not being too big for the monitor. Is there such a magic resolution for Android, assuming that I'm designing for API levels 3+ (Android 1.5+)? Thanks
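
    For reference, the approach Android's own documentation settled on for keeping on-screen sizes constant across devices is to express dimensions in density-independent pixels (dp) and convert to physical pixels with px = dp * (dpi / 160). A rough sketch of that arithmetic follows; the density buckets are the standard mdpi/hdpi/xhdpi values, and the 48dp figure is just an illustrative example, not something from the question:

      def dp_to_px(dp: float, dpi: float) -> int:
          # Android's formula: 160 dpi is the baseline (mdpi) density.
          return round(dp * dpi / 160.0)

      if __name__ == "__main__":
          button_dp = 48  # a 48dp element keeps roughly the same physical size everywhere
          for name, dpi in [("mdpi", 160), ("hdpi", 240), ("xhdpi", 320)]:
              print(f"{name}: {button_dp}dp -> {dp_to_px(button_dp, dpi)}px")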

    Read the article

  • Good functions and techniques for dealing with Haskell tuples?

    - by toofarsideways
    I've been doing a lot of work with tuples and lists of tuples recently, and I've been wondering if I'm being sensible. Things feel awkward and clunky, which for me signals that I'm doing something wrong. For example, I've written three convenience functions for getting the first, second and third value in a tuple of 3 values. Is there a better way I'm missing? Are there more general functions that allow you to compose and manipulate tuple data? Here are some things I am trying to do that feel like they should be generalisable. Extracting values: do I need to create a version of fst, snd, etc. for tuples of size two, three, four, five, etc.?
    fst3 (x,_,_) = x
    fst4 (x,_,_,_) = x
    Manipulating values: can you increment the last value in a list of pairs and then use that same function to increment the last value in a list of triples? Zipping and unzipping values: there is a zip and a zip3. Do I also need a zip4, or is there some way of creating a general zip function? Sorry if this seems subjective; I honestly don't know if this is even possible or if I'm wasting my time writing 3 extra functions every time I need a general solution. Thank you for any help you can give!

    Read the article

  • Is this a bad indexing strategy for a table?

    - by llamaoo7
    The table in question is part of a database that a vendor's software uses on our network. The table contains metadata about files. The schema of the table is as follows:
    Metadata
      ResultID (PK, int, not null)
      MappedFieldname (char(50), not null)
      Fieldname (PK, char(50), not null)
      Fieldvalue (text, null)
    There is a clustered index on ResultID and Fieldname. This table typically contains millions of rows (in one case, it contains 500 million). The table is populated by 24 workers running 4 threads each while data is being "processed". This results in many non-sequential inserts. Later, after processing, more data is inserted into this table by some of our in-house software. The fragmentation for a given table is at least 50%. In the case of the largest table, it is at 90%. We do not have a DBA. I am aware we desperately need a DB maintenance strategy. As far as my background, I'm a college student working part time at this company. My question is this: is a clustered index the best way to go about this? Should another index be considered? Are there any good references for this type and similar ad-hoc DBA tasks?

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World
    In the previous post we discussed a "Hello World" example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle's DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now want to ramp up the size of the loads.
    Using OSCH External Tables for Access and Loading
    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:
    SELECT * FROM my_hdfs_external_table;
    or the same SQL access can be used to load a table in Oracle:
    INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;
    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints:
    ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
    ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
    INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table;
    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively, you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively:
    /*+ parallel(my_oracle_table,8) */
    /*+ parallel(my_hdfs_external_table,8) */
    Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses a Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with the "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.
    Determine Your DOP
    It might feel natural to build your datasets in Hadoop and then afterwards figure out how to tune the OSCH external table definition, but you should work backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH it is something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
    The maximum value that can be assigned is an integer typically equal to the number of Oracle instances, times the number of CPUs per instance, times the number of cores per CPU. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using the system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously, on a production system where resources need to be shared 24x7, this can't be allowed to happen. The use cases for running OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be when you are first seeding tables in a new Oracle database, or when normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with InfiniBand), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.
    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.
    Determining the Number of Location Files
    Let's assume that the DBA told you that your maximum DOP is 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember, location files in OSCH are metadata lists of HDFS files and are created using OSCH's External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of that location file).
    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.
    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH's External Table tool.
    Rule 3: The number of location files chosen should be a small multiple of the DOP.
    Each location file represents one workload for one PQ slave, so the goal is to keep all slaves busy and to give them equivalent workloads. Obviously, if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do; the other three will have nothing to do and will quietly exit. If you run with 9 location files, the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish at about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
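
    To make Rule 3 concrete, here is a small arithmetic sketch (illustrative only, not part of OSCH): with balanced workloads, the elapsed load time is roughly proportional to the number of scheduling "waves", i.e. ceil(location_files / DOP).

      import math

      def load_waves(num_location_files: int, dop: int) -> int:
          # Each wave runs up to DOP PQ slaves, one location file per slave.
          return math.ceil(num_location_files / dop)

      if __name__ == "__main__":
          dop = 8
          for n in (5, 8, 9, 16, 32):
              print(f"{n:>3} location files, DOP {dop}: {load_waves(n, dop)} wave(s)")
          # 9 location files need 2 waves -- nearly double the elapsed time of 8,
          # which is why a small multiple of the DOP (8, 16, 32) is recommended.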
    Determining the Number of HDFS Files
    Let's start with the next rule and then explain it:
    Rule 4: The number of HDFS files should be a multiple of the number of location files, and the files should be roughly the same size.
    In our running example the DOP is 8, which means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is the workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if the HDFS file sizes are grossly out of balance or if there are too few of them.
    For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the single 900GB file into one location file and put each of the 100GB files into the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime while the slacker PQ slaves are off enjoying happy hour. If, however, the total payload (1600GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where the workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that are approximately 10GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file).
    As a rule, when the OSCH External Table tool has to deal with more and smaller files, it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with InfiniBand), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8GB. I then did the same load with 12800 files, where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time.
    What happens if you break rules 3 or 4 above? Nothing draconian; everything will still function. You just won't be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files of roughly the same size, derived from the DOP that you expect to use for loading.
    Next Steps
    So far we have talked about OLH and OSCH as alternative models for loading. That's not quite the whole story.
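
    The greedy balancing idea described above is simple enough to sketch outside the tool. The following is an illustration of the concept only, not the OSCH External Table tool's actual code: repeatedly hand the largest remaining HDFS file to the location file with the smallest workload so far.

      import heapq

      def balance_location_files(hdfs_file_sizes, num_location_files):
          # Greedy sketch: give the largest remaining file to the location file
          # (bucket) with the smallest current workload.
          heap = [(0, i) for i in range(num_location_files)]  # (workload, bucket index)
          heapq.heapify(heap)
          buckets = [[] for _ in range(num_location_files)]
          for size in sorted(hdfs_file_sizes, reverse=True):
              workload, i = heapq.heappop(heap)
              buckets[i].append(size)
              heapq.heappush(heap, (workload + size, i))
          return [(sum(b), b) for b in buckets]

      if __name__ == "__main__":
          # The skewed example from the post: one 900GB file plus seven 100GB files
          # across 8 location files is forced into a 9-to-1 skew ...
          print([total for total, _ in balance_location_files([900] + [100] * 7, 8)])
          # ... while the same 1600GB payload split into 160 files of ~10GB each
          # balances to about 200GB per location file.
          print([total for total, _ in balance_location_files([10] * 160, 8)])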
    They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.

    Read the article

  • Installing trunk Mono on Ubuntu

    - by kalvi
    I am quite new to Linux. I have to install Mono on a Linux machine from source code. I know the general method: read the instructions, install dependencies, ./configure, make, make install. However, this approach doesn't fit into the general Ubuntu package management routine. Other programs I install from .debs won't be able to detect the version of Mono, and I can't remove Mono using the standard Ubuntu package management tools. Is there an easy solution? I have seen that Ubuntu actually has several separate packages for the Mono project. Should I build packages from the Mono source? How can I follow the same conventions as the Ubuntu packagers? Where should I look for information on packaging? Can you give step by step instructions? Thanks!

    Read the article

  • How do you fix the Performance Dashboard datetime overflow error

    - by Mike L
    I'm a programmer/DBA by accident, and we're running SQL Server 2005 with Performance Dashboard for basic monitoring. The server has been up for a few weeks and now we can't drill into certain reports. Is there any way to reset these reports without a complete reboot? Edit: I bet the error message would help. I get this when I drill into the CPU graph: Error: Difference of two datetime columns caused overflow at runtime.

    Read the article

  • Overhead of Perfmon -> direct to SQL Database

    - by StuartC
    Hi all, first up, I'm a total newb at performance monitoring. I'm looking to set up central performance monitoring of some boxes:
    2K3 TS (monitor general OS perf and session-specific counters)
    2K8 R2 (XenApp 6 - monitor general OS perf and session-specific counters)
    File Server (standard file I/O)
    My ultimate aim is to collect as many counters as possible, including counters specific to users' sessions, without impacting the client session experience at all. I was thinking of logging directly to a SQL Server on another server, instead of the two-part process of writing a .blg file and then relogging it to SQL. Would that work OK? Does anyone know the overhead of going straight to SQL from the client? I've searched around a bit, but haven't found a clear answer; there's so much information it can be overwhelming. Thanks

    Read the article

  • Active Directory Password Policy Problem

    - by Will
    To clarify: my question is why my password policy isn't applying to people in the domain. Hey guys, I'm having trouble with our password policy in Active Directory; sometimes it just helps me to type out what I'm seeing. It appears to not be applying properly across the board. I am new to this environment and to AD in general, but I think I have a general grasp of what should be going on. It's a pretty simple AD setup without too many Group Policies being applied. It looks something like this:
    DOMAIN
      Default Domain Policy (link enabled)
      Password Policy (link enabled and enforced)
      Personal OU
        Force Password Change (completely empty; nothing in this GPO)
      IT OU
        Lockout Policy (link enabled and enforced)
      CS OU
        Lockout Policy
      Accounting OU
        Lockout Policy
    The Password Policy and Default Domain Policy both define the same things under Computer Configuration > Windows Settings > Security Settings:
    Account Policies / Password Policy
      Enforce password history: 24 passwords remembered
      Maximum password age: 180 days
      Minimum password age: 14 days
      Minimum password length: 6 characters
      Password must meet complexity requirements: Enabled
      Store passwords using reversible encryption: Disabled
    Account Policies / Account Lockout Policy
      Account lockout duration: 10080 minutes
      Account lockout threshold: 5 invalid login attempts
      Reset account lockout counter after: 30 minutes
    IT-Lockout just sets the screen saver settings to lock computers when the user is idle. After running Group Policy Modeling it seems like the Password Policy and Default Domain Policy are getting applied to everyone. Here are the results of Group Policy Modeling on MO-BLANCKM using the mblanck account; as you can see, both policies are being applied, with nothing important being denied.
    Group Policy Results: NCLGS\mblanck on NCLGS\MO-BLANCKM (data collected on 12/29/2010 11:29:44 AM)
    Computer Configuration Summary
      General: Computer name NCLGS\MO-BLANCKM; Domain NCLGS.local; Site Default-First-Site-Name; Last time Group Policy was processed 12/29/2010 10:17:58 AM
      Applied GPOs (Name / Link Location / Revision):
        Default Domain Policy / NCLGS.local / AD (15), Sysvol (15)
        WSUS-52010 / NCLGS.local/WSUS/Clients / AD (54), Sysvol (54)
        Password Policy / NCLGS.local / AD (58), Sysvol (58)
      Denied GPOs (Name / Link Location / Reason Denied):
        Local Group Policy / Local / Empty
      Security group membership when Group Policy was applied: BUILTIN\Administrators, Everyone, S-1-5-21-507921405-1326574676-682003330-1003, BUILTIN\Users, NT AUTHORITY\NETWORK, NT AUTHORITY\Authenticated Users, NCLGS\MO-BLANCKM$, NCLGS\Admin-ComputerAccounts-GP, NCLGS\Domain Computers
      WMI filters: none
      Component status (Component / Status / Last Process Time):
        Group Policy Infrastructure / Success / 12/29/2010 10:17:59 AM
        EFS recovery / Success (no data) / 10/28/2010 9:10:34 AM
        Registry / Success / 10/28/2010 9:10:32 AM
        Security / Success / 10/28/2010 9:10:34 AM
    User Configuration Summary
      General: User name NCLGS\mblanck; Domain NCLGS.local; Last time Group Policy was processed 12/29/2010 11:28:56 AM
      Applied GPOs (Name / Link Location / Revision):
        Default Domain Policy / NCLGS.local / AD (7), Sysvol (7)
        IT-Lockout / NCLGS.local/Personal/CS / AD (11), Sysvol (11)
        Password Policy / NCLGS.local / AD (5), Sysvol (5)
      Denied GPOs (Name / Link Location / Reason Denied):
        Local Group Policy / Local / Empty
        Force Password Change / NCLGS.local/Personal / Empty
      Security group membership when Group Policy was applied: NCLGS\Domain Users, Everyone, BUILTIN\Administrators, BUILTIN\Users, NT AUTHORITY\INTERACTIVE, NT AUTHORITY\Authenticated Users, LOCAL, NCLGS\MissingSkidEmail, NCLGS\Customer_Service, NCLGS\Email_Archive, NCLGS\Job Ticket Users, NCLGS\Office Staff, NCLGS\CUSTOMER SERVI-1, NCLGS\Prestige_Jobs_Email, NCLGS\Telecommuters, NCLGS\Everyone - NCL
      WMI filters: none
      Component status (Component / Status / Last Process Time):
        Group Policy Infrastructure / Success / 12/29/2010 11:28:56 AM
        Registry / Success / 12/20/2010 12:05:51 PM
        Scripts / Success / 10/13/2010 10:38:40 AM
    Computer Configuration > Windows Settings > Security Settings
      Account Policies/Password Policy (Policy / Setting / Winning GPO):
        Enforce password history / 24 passwords remembered / Password Policy
        Maximum password age / 180 days / Password Policy
        Minimum password age / 14 days / Password Policy
        Minimum password length / 6 characters / Password Policy
        Password must meet complexity requirements / Enabled / Password Policy
        Store passwords using reversible encryption / Disabled / Password Policy
      Account Policies/Account Lockout Policy (Policy / Setting / Winning GPO):
        Account lockout duration / 10080 minutes / Password Policy
        Account lockout threshold / 5 invalid logon attempts / Password Policy
        Reset account lockout counter after / 30 minutes / Password Policy
      Local Policies/Security Options - Network Security (Policy / Setting / Winning GPO):
        Network security: Force logoff when logon hours expire / Enabled / Default Domain Policy
      Public Key Policies/Autoenrollment Settings (Policy / Setting; Winning GPO: [Default setting]):
        Enroll certificates automatically / Enabled
        Renew expired certificates, update pending certificates, and remove revoked certificates / Disabled
        Update certificates that use certificate templates / Disabled
      Public Key Policies/Encrypting File System (Winning GPO: [Default setting]):
        Allow users to encrypt files using Encrypting File System (EFS) / Enabled
        Certificates (Issued To / Issued By / Expiration Date / Intended Purposes / Winning GPO): SBurns / SBurns / 12/13/2007 5:24:30 PM / File Recovery / Default Domain Policy
        For additional information about individual settings, launch Group Policy Object Editor.
      Public Key Policies/Trusted Root Certification Authorities (Winning GPO: [Default setting]):
        Allow users to select new root certification authorities (CAs) to trust / Enabled
        Client computers can trust the following certificate stores / Third-Party Root Certification Authorities and Enterprise Root Certification Authorities
        To perform certificate-based authentication of users and computers, CAs must meet the following criteria / Registered in Active Directory only
      Administrative Templates - Windows Components/Windows Update (Policy / Setting / Winning GPO):
        Allow Automatic Updates immediate installation / Enabled / WSUS-52010
        Allow non-administrators to receive update notifications / Enabled / WSUS-52010
        Automatic Updates detection frequency / Enabled / WSUS-52010 (check for updates at the following interval (hours): 1)
        Configure Automatic Updates / Enabled / WSUS-52010 (configure automatic updating: 4 - Auto download and schedule the install; the following settings are only required and applicable if 4 is selected: scheduled install day: 0 - Every day; scheduled install time: 03:00)
        No auto-restart with logged on users for scheduled automatic updates installations / Disabled / WSUS-52010
        Re-prompt for restart with scheduled installations / Enabled / WSUS-52010 (wait the following period before prompting again with a scheduled restart (minutes): 30)
        Reschedule Automatic Updates scheduled installations / Enabled / WSUS-52010 (wait after system startup (minutes): 1)
        Specify intranet Microsoft update service location / Enabled / WSUS-52010 (set the intranet update service for detecting updates: http://lavender; set the intranet statistics server: http://lavender (example: http://IntranetUpd01))
    User Configuration > Administrative Templates
      Control Panel/Display (Policy / Setting / Winning GPO):
        Hide Screen Saver tab / Enabled / IT-Lockout
        Password protect the screen saver / Enabled / IT-Lockout
        Screen Saver / Enabled / IT-Lockout
        Screen Saver executable name / Enabled / IT-Lockout (Screen Saver executable name: sstext3d.scr)
        Screen Saver timeout / Enabled / IT-Lockout (number of seconds to wait to enable the Screen Saver: 1800)
      System/Power Management (Policy / Setting / Winning GPO):
        Prompt for password on resume from hibernate / suspend / Enabled / IT-Lockout

    Read the article

  • Basics of Hosting [closed]

    - by Bala
    Assume we know nothing about web hosting but need to get a site online. What questions do we need to ask potential web hosting companies? What are the pitfalls and places where things can go terribly wrong? Are there any general good or bad things to be on the lookout for? Site could be anything from basic HTML up to e-commerce. We're looking for general thoughts that could apply to any web hosting. Thanks!

    Read the article

  • Why would image thumbnails not be showing in all search engines in all browsers?

    - by Edward Tanguay
    For over a week now, when I search for a word at Google, it correctly gives me some preview image thumbnails on the general search results page, but when I click on "Images", it doesn't show me any thumbnails. The same thing happens at Bing: when I search Bing itself it gives me some thumbnails on the general search results page, but when I click on "Images", it doesn't show me any thumbnails. The same thing happens at Yahoo. If I click on one of the broken thumbnails, it shows me the picture fine. Thumbnails at YouTube also work fine. Each search engine serves its thumbnails from a different URL: t1.gstatic.com, ts3.mm.bing.net, thm-a02.yimg.com - so it doesn't seem to be a problem with one specific URL not sending thumbnails; it is a problem with search engine image thumbnails in general. Also, this happens in every browser I try: Internet Explorer, Firefox, Chrome. What could be the problem? Is it my computer, some setting somewhere, my router, my Internet provider (T-Online, Germany)? Has anyone ever had this problem and solved it?

    Read the article

  • VMWare Workstation Performance

    - by tekiegreg
    Hi there, a while ago I upgraded my laptop to Windows 7 x64 from Windows XP 32-bit, though not before virtualizing the physical installation, which I continue to run under VMware Workstation today. The performance of the resulting VM is absolutely atrocious! Since the machine is now virtual, I've uninstalled a lot of stuff that's no longer needed in an effort to reduce RAM usage, but in general the responsiveness still seems sluggish. I also run the virtual machine on its own separate HD that is seldom used by the host OS. I'm just hoping for some general tips on increasing VMware Workstation performance anywhere - thoughts? EDIT: Both of the answers below were excellent starting points for me. However, I particularly liked the selected answer's strategies on disk management. I am running the virtual machine on a separate external hard disk, so I will likely have to reconfigure somehow. Thanks all!

    Read the article

  • Linux (Kubuntu 9.10): Strange DNS problem [seems to be IPv6 issue]

    - by Homer J. Simpson
    Hi, I'm experiencing strange problems with my Kubuntu 9.10 when doing DNS requests from various applications. The requests are extremely slow, so loading pages in Firefox or Konqueror, doing package installations in KPackageManager, and other apps is really painful, while, for example, Opera doesn't have any problems, and ping resolves DNS names quickly as well. I checked the proxy settings of the applications concerned as well as of the general system and there are none, so it doesn't seem as though there is something in between. Does anybody have an idea of what to check for possible problem sources, or how to solve this? I'm behind a DSL home router which does the DHCP (and works well with my other computer). Any kind of advice would be really helpful. Edit: It seems to be some kind of IPv6 problem, as I could get it to work by disabling IPv6 explicitly in Firefox. Is there a general solution to this?

    Read the article

  • Install Sybase SQL Anywhere 11 as a Windows service

    - by student
    We are using Sybase SQL Anywhere 11. I am using the command line to initialize the database (dbinit -dba %username%,%pwd% -p 4k %dbLocation%) and to start the database server (dbsrv11 %dbLocation%) from a batch file. What I really want is to install my database as a Windows service so that it starts automatically when the machine reboots, but I want to keep using a batch file so it is easy to install, uninstall, and change. Any Sybase experts here?

    Read the article

  • How do you cache web pages with a personalized header using a caching reverse proxy such as Squid, Varnish, or Nginx?

    - by Continuation
    Pretty much every page of my website is dynamically generated, but the pages don't change that frequently (kind of like a forum page). So I'd like to cache them using a caching reverse proxy such as Squid, Varnish, or Nginx. The problem is that each of my logged-in users sees a personalized header saying "Welcome John Doe. Logout" in the upper right corner of the page (just like Server Fault), while users who aren't logged in see a header that says "Login" instead. So even though every user sees essentially the same page, they all see a slightly different version because of that personalized header. Is there any way I can cache the "main" part of the page and serve it from the cache, while generating the personalized header dynamically for each individual user? This must be a very common problem. How is it solved in general?
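
    One common answer to this question is to split the personalized fragment out of the cacheable page: the shared page body gets a public Cache-Control header the proxy can cache, and the header fragment is fetched separately (via ESI on proxies that support it, or a small client-side request) with caching disabled. Below is a minimal sketch of the idea, assuming Flask purely for illustration; the routes, markup, and placeholder user are made up, not taken from the question:

      from flask import Flask, make_response

      app = Flask(__name__)

      @app.route("/article/<int:article_id>")
      def article(article_id):
          # Shared content: identical for every user, safe for the proxy to cache.
          resp = make_response(
              f"<html><body><div id='hdr'></div>article {article_id} body</body></html>")
          resp.headers["Cache-Control"] = "public, max-age=300"
          return resp

      @app.route("/header-fragment")
      def header_fragment():
          # Personalized fragment: rendered per user, never cached by the proxy.
          user = "John Doe"  # placeholder; a real app would look up the session
          resp = make_response(f"Welcome {user}. Logout")
          resp.headers["Cache-Control"] = "private, no-store"
          return resp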

    Read the article

  • Strange DNS problem [seems to be IPv6 issue]

    - by Homer J. Simpson
    Hi, I'm experiencing strange problems with my Kubuntu 9.10 when doing DNS requests from various applications. The requests are extremely slow, so loading pages in Firefox or Konqueror, doing package installations in KPackageManager, and other apps is really painful, while, for example, Opera doesn't have any problems, and ping resolves DNS names quickly as well. I checked the proxy settings of the applications concerned as well as of the general system and there are none, so it doesn't seem as though there is something in between. Does anybody have an idea of what to check for possible problem sources, or how to solve this? I'm behind a DSL home router which does the DHCP (and works well with my other computer). Any kind of advice would be really helpful. Edit: It seems to be some kind of IPv6 problem, as I could get it to work by disabling IPv6 explicitly in Firefox. Is there a general solution to this?

    Read the article

  • CPU load, USB connection vs. NIC

    - by T.J. Crowder
    In general, and understanding that the answer may vary by manufacturer and model (and driver, and...), in consumer-grade workstations with integrated NICs, does the NIC rely on the CPU for a lot of help (as is typically the case with a USB controller, for instance), or is it fairly intelligent and capable on its own (like, say, the typical FireWire controller)? Or is the question too general to answer? (If it matters, you can assume Linux.) Background: I'm looking at connecting a device (digital television capture) that will be delivering ~20-50 Mbit/sec of data to a somewhat under-powered workstation. I can get a USB 2 High-Speed device or a network-attached device, and am interested in avoiding impacting the CPU where possible. Obviously, if it's a 100Mbit NIC, that's roughly half its theoretical inbound bandwidth, whereas it's only roughly a tenth of the 480 Mbit/second of the USB 2 "High Speed" interface. But if the latter requires a lot of CPU support and the former doesn't...

    Read the article

  • How to integrate Windows Server 2008 R2's NPS with Cisco switches?

    - by Massimo
    I need to evaluate in a lab environment the use of Windows Server 2008 R2's NPS for 802.1x authentication with Cisco Catalyst 3750 switches; the general idea is to only let clients connect to the company network if they can provide valid domain logon credentials, placing them in a restricted VLAN instead if they can't. NAP would also be a bonus, but it can be evaluated later; the main point now is only 802.1x authentication. Although I have very good knowledge of Windows and Active Directory (on the Microsoft side) and quite good knowledge of Catalyst switches (on the Cisco side), I'm totally new to 802.1x; I'd really like some general guidelines and help here, and some sort of implementation guide would also be very useful.

    Read the article

  • Cannot search my company how-to blog site any longer in SharePoint

    - by Worldunix
    I have a company how-to blog site that I post to so my clients can access it for help. For some reason it has stopped letting anyone search on it. I can search for My Sites or users, but when you drop down the search scope to This Site: "blog site name", you get the following reply:
    No results matching your search were found. Check your spelling. Are the words in your query spelled correctly? Try using synonyms. Maybe what you're looking for uses slightly different words. Make your search more general. Try more general terms in place of specific ones. Try your search in a different scope. Different scopes can have different results.
    I have tried the following commands from the index server:
    net stop osearch
    net start osearch
    iisreset /noforce
    But I am still not able to search the local blog site; I can only search for users and sites. Please help. Don

    Read the article

  • Puppet, Nagios, Munin on cPanel based hosts

    - by WinkyWolly
    I've been managing 20-30 cPanel-based hosts over the past year with Puppet, Nagios and Munin for general monitoring/trending; however, a lot of the methods I've had to use to deploy and manage things such as configurations are a pain. For those of you who aren't familiar with cPanel - it adds a few things to the yum exclude list, such as perl*, ruby* and so forth. This causes issues with bootstrapping monitoring on a new server via Puppet (well, via the Package type) due to a bunch of conflicts when installing via yum. Now, I could create a custom RPM for everything and remove certain dependencies from the spec file; however, I would like to avoid this if possible. Does anyone have any proposed functional ways to manage this sort of environment? Currently I install Puppet, Facter and Munin via RPMs and force the install using --nodeps and such (since the dependencies are installed, just not the ones yum wants). Nagios I installed manually from source at this time (I will likely create RPMs, however I want to tackle this general issue first).

    Read the article

  • SQL Server Management Studio unable to connect to local instance.

    - by Ben Collins
    I'm not a DBA, I'm a developer - and I'm having trouble with SQL Server Management Studio. I installed SQL Server 2008 Standard on Windows Server 2008 R2, and according to SQL Server Configuration Manager I've got two instances: OFFICESERVERS (for SharePoint) and MSSQLSERVER. When I open SQL Server Management Studio I can only discover OFFICESERVERS. I've checked the protocol configuration for both instances and didn't see anything that indicates why this would be. Any hints?

    Read the article
