Search Results

Search found 13895 results on 556 pages for 'options'.


  • Which CPU for SQL Server machine (Xeon, i5, i7, AMD Phenom)?

    - by Tony_Henrich
    I am going to build a full-height server machine to be used for SQL Server 2008 64-bit. I have $400 to spend on a CPU. Which CPU should I get among the i5, i7, Xeon, and Phenom in terms of performance? There are so many options, and I am out of touch with the latest stuff. All I know is that I want something fast that works with fast DDR3 memory and some kind of fast system bus. I don't care about overclocking or 3D & gfx benchmarks; the machine is not used for games or gfx apps. Any recommendations?

    Read the article

  • How to add multiple programs under one context menu entry on the desktop?

    - by tonni
    Hello, I'm searching for an answer on how to put several programs (not just one program) in the desktop context menu under one entry name. Example: I want to create a new context menu entry that can be extended to hold more programs inside it, like "New" or "View", which show more options when you use them. Here is what I tried (and it works when you want to put one program in the desktop menu):
    1. In the registry, inside HKEY_CLASSES_ROOT\Directory\Background\shell\, I created a new key named after a program (e.g. "notepad").
    2. Inside the newly created key ("notepad") I created another key and named it "command" (it must have that name).
    3. In "command", I set the (Default) string value to the program's location ("C:\Windows\system32\notepad.exe"). Now, when I right-click on the desktop, I see a new context menu entry named "notepad", which of course opens Notepad when I use it.
    What I'm searching for is a way to make a context menu entry that offers more than one program. Do you have any solution? OS: Win 7
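    For what it's worth, on Windows 7 the usual way to get a cascading entry like "New" is a SubCommands value plus Explorer's CommandStore. A hedged .reg sketch, not a tested recipe: the key name "MyTools", the sub-command IDs, and calc.exe are all illustrative.

        Windows Registry Editor Version 5.00

        ; The flyout entry itself: MUIVerb is the caption, SubCommands lists sub-items
        [HKEY_CLASSES_ROOT\Directory\Background\shell\MyTools]
        "MUIVerb"="My Tools"
        "SubCommands"="MyTools.Notepad;MyTools.Calc"

        ; Each sub-item lives in Explorer's CommandStore (note: HKLM, needs admin rights)
        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\MyTools.Notepad]
        @="Notepad"

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\MyTools.Notepad\command]
        @="C:\\Windows\\system32\\notepad.exe"

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\MyTools.Calc]
        @="Calculator"

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\MyTools.Calc\command]
        @="C:\\Windows\\system32\\calc.exe"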

    Read the article

  • Is there a Unix/Linux platform equivalent to Telligent Community?

    - by Scott A. Lawrence
    Telligent Community combines blogs, wikis, forums, and file-sharing capabilities into a single product with single sign-on, using all Microsoft technologies. Is there an equivalent offering that runs on Unix/Linux? Or would I have to pick and choose individual product offerings and figure out another option for single sign-on across them? Are there plug-ins for something like WordPress or MovableType that might add the necessary functionality? A friend of mine is looking to add a "members-only" area to her company's website, and since they're hosted on Dreamhost (and can't afford StackExchange pricing yet), I'm trying to find other options for them.

    Read the article

  • Into Orbit (OBIEE 11g Launch)

    - by Darryn.Hinett
    After much anticipation, it appears that OBIEE 11g is about to hit the streets. Join Charles Phillips, President, and Thomas Kurian, Executive Vice President, Product Development, for the launch of the latest release of Oracle's business intelligence software. Be the first to hear about Oracle Business Intelligence Enterprise Edition 11g, the new, industry-leading technology platform for business intelligence, which offers:
    - A powerful end-user experience with rich visualisation, search, and actionable collaboration
    - Advancements in analytics, OLAP, and enterprise reporting, with unmatched performance and scalability
    - Simplified system configuration, life-cycle management, and performance optimisation
    As well as the keynote and technical general session, breakout sessions will cover the following topics:
    Business Intelligence: From Insight to Action
    In this session, you will learn about an exciting, industry-first innovation that connects business intelligence directly to your business processes. You can spot an opportunity or issue, and immediately initiate appropriate action directly from your dashboard.
    Oracle Business Intelligence Enterprise Edition 11g Systems Management and Deployment
    Learn how you can streamline the process of configuring your system, provisioning users, and monitoring and optimising query performance. Attend this session to hear how new integration with Oracle Enterprise Manager provides unique systems management, superior scalability, and high availability and security benefits, while making upgrades effortless.
    Extending Business Intelligence Analytics with Online Analytical Processing (OLAP)
    Learn how you can enhance the analytical power and business value of your BI solution with a unified environment for navigating and querying both OLAP and relational data sources. This session will focus on how Oracle Business Intelligence Enterprise Edition 11g, used with Oracle Essbase, can deliver insight at the speed of thought.
    Integrated Performance Management
    If your organisation is using or considering performance management applications such as Oracle's Hyperion Planning and Hyperion Financial Management, you will not want to miss this session. See how you can leverage Oracle's BI solution for accessing performance management applications and performing extended financial reporting and analysis.
    Visualisation and End-user Experience
    The latest release of Oracle Business Intelligence provides an unrivalled end-user experience, including rich interactive dashboards, a vast range of animated charting options, integrated search, and more. This session will also include a close look at how you can leverage location data to visualise geo-spatial information.

    Read the article

  • SQL SERVER – Repair a SQL Server Database Using a Transaction Log Explorer

    - by Pinal Dave
    In this blog, I'll show how to use ApexSQL Log, a SQL Server transaction log viewer. You can download it for free, install it, and play along. But first, let's describe some disaster recovery scenarios where it's useful.

    About SQL Server disaster recovery
    Along with database development and administration, you must work on a good recovery plan. Disasters do happen and no one's immune. What you can do is take all the actions needed to be ready for a disaster and get through it with minimal data loss and downtime. Besides creating a recovery plan, it's necessary to have a list of steps that will be executed when a disaster occurs, and to test them before a disaster. This way, you'll know that the plan is good and viable. Testing can also be used as training for all team members, so they can all understand and execute it when the time comes. It will show how much time is needed to have your servers fully functional again and how much data you can lose in a real-life situation. If these don't meet recovery-time and recovery-point objectives, the plan needs to be improved. Keep in mind that all major changes in environment configuration, business strategy, and recovery objectives require new recovery plan testing, as these changes most probably require changing and tweaking the plan itself.

    What is a good SQL Server disaster recovery plan?
    A good SQL Server disaster recovery strategy starts with planning SQL Server database backups. An efficient strategy is to create a full database backup periodically. Between two successive full database backups, you can create differential database backups. It is essential to create transaction log backups regularly between full database backups. Keep in mind that transaction log backups can be created only on databases using the full recovery model. In other words, a simple but efficient backup strategy would be a full database backup every night and a transaction log backup every hour, or every 15 minutes. The frequency depends on how much data you can afford to lose and how busy the database is. Another option, instead of creating a full database backup every night, is to create a full database backup once a week (e.g. on Friday at midnight) and a differential database backup every night until the next Friday, when you will create a full database backup again. Once you create your SQL Server database backup strategy, schedule the backups. You can do that easily using SQL Server maintenance plans.

    Why are transaction logs important?
    Transaction log backups contain the transactions executed on a SQL Server database. They provide enough information to undo and redo the transactions and roll the database back or forward to a point in time. In SQL Server disaster recovery situations, transaction logs enable you to repair a SQL Server database and bring it to the state it was in before the disaster. Be aware that even with regular backups, there will be some data missing: the transactions made between the last transaction log backup and the time of the disaster. In some situations, it's not necessary to re-create the database from its last backup to repair it. The database might still be online, and all you need to do is roll back several transactions, such as a wrong update, insert, or delete. The restore-to-a-point-in-time feature is available in SQL Server, but for large databases it is very time-consuming, as SQL Server first restores a full database backup and then restores the transaction log backups, one after another, up to the recovery point. During that time, the database is unavailable. This is where a SQL Server transaction log viewer can help. For optimal recovery, besides having a database in the full recovery model, it's important that you haven't manually truncated the online transaction log. This ensures that all transactions made after the last transaction log backup are still in the online transaction log. All you have to do is read and replay them.

    How to read a SQL Server transaction log?
    SQL Server doesn't provide an option to read transaction logs. There are several SQL Server commands and functions that read the content of a transaction log file (fn_dblog, fn_dump_dblog, and DBCC PAGE), but they are undocumented. They require T-SQL knowledge and return a large number of columns that are not easy to read and understand, sometimes in binary or hexadecimal format. Another challenge is reading UPDATE statements, as it's necessary to match them to values in the MDF file. And when you have finally read the executed transactions, you still have to create a script to replay them.

    How to easily repair a SQL database?
    The easiest solution is to use a transaction log reader that will not only read the transactions in the transaction log files, but also automatically create scripts for the transactions it reads. In the following example, I will show how to use ApexSQL Log to repair a SQL database after a crash. If a database has crashed and both the MDF and LDF files are lost, you have to rely on the full database backup and all subsequent transaction log backups. In another scenario, the MDF file is lost but the LDF file is available.

    First, restore the last full database backup on SQL Server using SQL Server Management Studio. I'll name it Restored_AW2014. Then, start ApexSQL Log. It will automatically detect all local servers. If not, click the icon to the right of the Server drop-down list, or just type in the SQL Server instance name. Select the Windows or SQL Server authentication type and select the Restored_AW2014 database from the database drop-down list. When all options are set, click Next. ApexSQL Log will show the online transaction log file. Now, click Add and add all transaction log backups created after the full database backup I used to restore the database. In case you don't have transaction log backups, but the LDF file hasn't been lost during the SQL Server disaster, add it using Add.

    To repair a SQL database to a point in time, ApexSQL Log needs to read and replay all the transactions in the transaction log backups (or the LDF file saved after the disaster). That's why I selected the Whole transaction log option in the Filter setup. ApexSQL Log offers a range of filters, which are useful when you need to read just specific transactions. You can filter transactions by the time of the transaction, operation type (e.g. to read only data inserts), table name, the SQL Server login that made the transaction, etc. In this scenario, to repair a SQL database, I'll check all filters and make sure that all transactions are included. In the Operations tab, select all schema operations (DDL). If you omit these, only the data changes will be read, so if there were any schema changes, such as a new function created or an existing table modified, they will be ignored and the database will not be properly repaired; the data repair for modified tables will fail. In the Tables tab, I'll make sure all tables are selected. I will uncheck the Show operations on dropped tables option to reduce the number of transactions. Click Next.

    ApexSQL Log offers three options. Select Open results in grid to get a user-friendly presentation of the transactions. As you can see, details are shown for every transaction, including the old and new values for updated columns, which are clearly highlighted. Now, select them all and then create a redo script by clicking the Create redo script icon in the menu. For a large number of transactions, and in a critical situation when acting fast is a must, I recommend using the Export results to file option. It will save some time, as the transactions will be scripted directly into a redo file, without showing them in the grid first. Select Generate reconstruction (REDO) script, change the output path if you want, and click Finish. After the redo T-SQL script is created, ApexSQL Log shows the redo script summary. The third option will create a command-line statement for a batch file that you can use to schedule execution, which is not really applicable when you repair a SQL database, but quite useful in daily auditing scenarios.

    To repair your SQL database, all you have to do is execute the generated redo script, using an integrated development environment tool such as SQL Server Management Studio or any other, against the restored database. You can find more information about how to read SQL Server transaction logs and repair a SQL database on the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered or restored, or transactions rolled back.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
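    As a footnote to the backup-strategy section above, here is a minimal T-SQL sketch of the schedule it describes. The database name and file paths are hypothetical; in practice you would wire these commands into SQL Server Agent jobs or maintenance plans rather than run them by hand:

        -- Full database backup (e.g. scheduled nightly; SalesDB and D:\Backup are hypothetical)
        BACKUP DATABASE SalesDB
        TO DISK = N'D:\Backup\SalesDB_full.bak'
        WITH INIT, CHECKSUM;

        -- Optional differential backup (e.g. nightly, if full backups are weekly)
        BACKUP DATABASE SalesDB
        TO DISK = N'D:\Backup\SalesDB_diff.bak'
        WITH DIFFERENTIAL, INIT, CHECKSUM;

        -- Transaction log backup (e.g. every 15 minutes; requires the FULL recovery model)
        BACKUP LOG SalesDB
        TO DISK = N'D:\Backup\SalesDB_log.trn'
        WITH INIT, CHECKSUM;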

    Read the article

  • GlassFish will not start when SNMP is enabled

    - by edarc
    I have a GlassFish v3 app server running on 64-bit Debian Lenny. Everything is running fine, except I would like to monitor GF's JVM instance with SNMP. However, every time I try to enable it by adding the following <jvm-options> in domain.xml:
    -Dcom.sun.management.snmp.port=10161
    -Dcom.sun.management.snmp.acl.file=/path/to/snmp.acl
    -Dcom.sun.management.snmp.interface=127.0.0.1
    GlassFish refuses to start:
    $ asadmin start-domain
    Waiting for DAS to start .Error starting domain: default.
    The server exited prematurely with exit code 1.
    Command start-domain failed.
    $
    There is also nothing illuminating (well, really nothing at all) in jvm.log or server.log. The snmp.acl file contains:
    acl = {
      {
        communities = public
        access = read-only
        managers = localhost
      }
    }
    and is chmod 600 (I know this is not the problem because it will actually fail with an error about the permissions if it is set to anything other than 600).
    $ java -version
    java version "1.6.0_0"
    OpenJDK Runtime Environment (build 1.6.0_0-b11)
    OpenJDK 64-Bit Server VM (build 1.6.0_0-b11, mixed mode)
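    For reference, a hedged sketch of where those flags sit in domain.xml (inside the java-config element of the active config; surrounding attributes and options omitted here). This shows the stock GlassFish placement, not a fix for the startup failure itself:

        <!-- inside <config name="server-config"> in domain.xml; other jvm-options
             and java-config attributes omitted. Equivalently, from the CLI:
             asadmin create-jvm-options "-Dcom.sun.management.snmp.port=10161" -->
        <java-config>
            <jvm-options>-Dcom.sun.management.snmp.port=10161</jvm-options>
            <jvm-options>-Dcom.sun.management.snmp.acl.file=/path/to/snmp.acl</jvm-options>
            <jvm-options>-Dcom.sun.management.snmp.interface=127.0.0.1</jvm-options>
        </java-config>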

    Read the article

  • How to consistently enable screen sharing with iChat

    - by Joel
    I am unable to consistently get screen sharing in iChat to work. When I select an online buddy, the options "Share my screen with Bob" and "Ask to Share Bob's Screen" under the Buddies menu are disabled. Sometimes starting a chat with that person will enable the screen sharing, but often not. Once it's enabled it works fine, but I have no idea what the key to getting it enabled is. It seems fairly random when it works. This is over the public internet using Google Talk. Both ends are running OS X 10.5.

    Read the article

  • Trying to configure DNS on a Godaddy Virtual Dedicated host, Mediatemple Domain Registration

    - by dclowd9901
    A client of mine purchased VD hosting with Godaddy and a domain name with Mediatemple. I've never configured DNS from scratch, and I'm finding it very difficult to find any sort of explanation on how to go about it. As of right now, Mediatemple is pointing to Godaddy's ns1.domaincontrol.com and ns2.domaincontrol.com nameservers. The VD hosting on Godaddy (via their Simple Control Panel) has an option to "Add a new domain", which brings you through a wizard of sorts that asks you if the domain has already been registered (yes), what it is (dclowd9901.com for this example), has you create a system username and password for it (with checkboxes for SSH and FTP access), which level of user can administer it, and whether a mail account should be set up. When complete, it also creates a zone file. In this zone file, the primary nameserver is ns1.dclowd9901.com; the records are as follows (where 12.23.12.34 is the presumed host):
    @      A    12.23.12.34
    @      NS   ns1
    @      NS   ns2
    ns1    A    12.23.12.34
    ns2    A    12.23.12.34
    @      MX   mail
    www    A    12.23.12.34
    ftp    A    12.23.12.34
    ssh    A    12.23.12.34
    mail   A    12.23.12.34
    If anyone can shed any light on this for me, and explain the interactions between the registrar and the host and so on, I'd be very grateful. Thanks in advance for the help.
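    While sorting out a delegation like this, it helps to see what the outside world currently believes before changing anything. A quick sketch with dig (using the example domain from the question). Note that pointing a domain at nameservers inside that same domain (ns1.dclowd9901.com) also requires glue records at the registrar:

        # Which nameservers does the world currently see for the domain?
        dig NS dclowd9901.com +short

        # Ask the candidate nameserver directly whether it answers for the zone:
        dig @ns1.dclowd9901.com www.dclowd9901.com A +short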

    Read the article

  • Network issues with DNS not being found

    - by Anriëtte Combrink
    Hi there. This is exactly what our network looks like: a single server with a network router. Everything is set up, but I cannot connect our Macs to this server under Login Options - Join... Our server's name is Toolbox and I have tried Toolbox.local and Toolbox.private, and prepended the afp:// protocol to the name, but nothing; our Macs just don't want to connect this way. Our router has DHCP and gives out all the IP addresses naturally. Would I have to add Toolbox.local to the DNS on the router and link it via a static internal IP to the server? Our Macs keep giving the following error while trying to join the Network Account Server:
    Unable to add server
    Could not resolve the address (2200)
    What am I doing wrong?
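    When chasing a "could not resolve" error like this, it can help to confirm what the Mac can actually resolve before touching the router. A hedged sketch from a client Mac's Terminal (the IP address is a hypothetical static address for the server):

        # Does the name resolve at all (checks the system resolver, incl. Bonjour/mDNS)?
        dscacheutil -q host -a name Toolbox.local

        # Can we reach the server by IP, taking name resolution out of the picture?
        ping -c 3 192.168.1.10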

    Read the article

  • How do you research while pair programming?

    - by traffichazard
    I've recently started at a new job and pairing has helped me become effective there very quickly. I am, however, having a hard time when we must do brief joint research during our workflow, covering API features, code examples, or command options. My team lead urges us to do all research on our pairing station, rather than on our individual laptops, and to synchronize our research by verbally negotiating the steps between different web resources. I research, read, and absorb information differently from my pairing partner, and I feel much more productive when I can follow a thread of research to the next web page exactly when I want to, rather than trying to keep exact pace and place with what my partner's reading. We're both smart and fast, but we can't help moving in different ways and at different instantaneous velocities when we're figuring stuff out. It seems so much easier to poke around individually for a minute until one of us says "I've got it," then get back together and code. When you pair program, how do you handle short research tasks? What works best for you, and how do you keep in sync with your partner?

    Read the article

  • SQL Server 2012 Memory Manager KB articles

    - by SQLOS Team
    Since the release of SQL Server 2012 with a redesigned memory manager, a steady stream of KB articles has been produced by CSS to provide guidance on the new or changed options, as well as on fixes that have been published.
    How has memory sizing changed in SQL 2012?
    2663912 Memory configuration and sizing considerations in SQL Server 2012 - http://support.microsoft.com/default.aspx?scid=kb;EN-US;2663912
    Setting "locked pages" to avoid SQL Server memory pages getting swapped has been simplified, particularly for Standard Edition; the details can be found here:
    2659143 How to enable the "locked pages" feature in SQL Server 2012 - http://support.microsoft.com/default.aspx?scid=kb;EN-US;2659143
    Note the following deprecation (particularly relevant for 32-bit installations):
    2644592 The "AWE enabled" SQL Server feature is deprecated - http://support.microsoft.com/default.aspx?scid=kb;EN-US;2644592
    Note the following fixes available:
    2708594 FIX: Locked page allocations are enabled without any warning after you upgrade to SQL Server 2012 - http://support.microsoft.com/kb/2708594/EN-US
    2688697 FIX: Out-of-memory error when you run an instance of SQL Server 2012 on a computer that uses NUMA - http://support.microsoft.com/kb/2688697/EN-US
    Originally posted at http://blogs.msdn.com/b/sqlosteam/
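    Related to the sizing guidance above: the memory cap is still set the same way from T-SQL in 2012. A minimal sketch (the 4096 MB figure is purely illustrative):

        -- Enable advanced options so 'max server memory (MB)' is visible
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;

        -- Cap the memory manager's target (value is illustrative)
        EXEC sp_configure 'max server memory (MB)', 4096;
        RECONFIGURE;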

    Read the article

  • Sitemaps - do I need to submit each sitemap in sitemap_index.xml to Google Webmaster tools?

    - by iSumitG
    I have a WordPress blog on my CentOS server. There is no sitemap.xml in the root directory, but there is a sitemap_index.xml file in the root directory which contains the following code:
    <?xml-stylesheet type="text/xsl" href="http://mywebsite.com/wp-content/plugins/wordpress-seo/css/xml-sitemap-xsl.php"?>
    <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <sitemap>
        <loc>http://mywebsite.com/post-sitemap.xml</loc>
        <lastmod>2012-12-18T19:47:47+00:00</lastmod>
      </sitemap>
      <sitemap>
        <loc>http://mywebsite.com/page-sitemap.xml</loc>
        <lastmod>2012-12-18T17:32:49+00:00</lastmod>
      </sitemap>
    </sitemapindex>
    My question: which sitemap should I submit to Google Webmaster Tools? The options are:
    1. Only sitemap_index.xml
    2. Only post-sitemap.xml and page-sitemap.xml
    3. All 3 (sitemap_index.xml, post-sitemap.xml and page-sitemap.xml)
    If there is any other option, please let me know.
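    For context on how index files are meant to be consumed: the sitemaps protocol treats an index file as a single submission target, and you can also advertise it in robots.txt so crawlers find it without any console submission. A minimal sketch:

        # robots.txt at http://mywebsite.com/robots.txt
        Sitemap: http://mywebsite.com/sitemap_index.xml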

    Read the article

  • In Which We Demystify A Few Docupresentment Settings And Learn the Ethos of the Author

    - by Andy Little
    It's no secret that Docupresentment (part of the Oracle Documaker suite) is a powerful tool for integrating on-demand and interactive publishing applications with the Oracle Documaker framework. It's also no secret that there are many details with respect to the configuration of Docupresentment that can elude even the most erudite of techies. To be sure, Docupresentment will work for you right out of the box, and in most cases will suit your needs without toying with a configuration file. But where's the adventure in that?

    With this inaugural post to That's The Way, I'm going to introduce myself and what my aim is with this blog. If you didn't figure it out already by checking out my profile, my name is Andy and I've been with Oracle (nee Skywire Software nee Docucorp nee Formmaker) since the formative years of 1998. Strangely, it doesn't seem that long ago, but it's certainly a lifetime in the age of technology. I recall running a BBS from my parents' basement on a 1200 baud modem, and the trepidation and sweaty-palmed excitement of upgrading to the power and speed of 2400 baud! Fine, I'll admit that perhaps I'm inflating the experience a bit, but I was a kid! This is the stuff of War Games and King's Quest I and the demise of the TI-99/4A. Exciting times. So fast-forward a bit and I'm 12 years into a career in the world of document automation and publishing, working for the best (IMHO) software company on the planet. With That's The Way I hope to shed a little light on, and peek under the covers of, some of the more interesting aspects of implementations involving the tech space within the Oracle Insurance Global Business Unit (IGBU), which includes Oracle Documaker, Rating & Underwriting, and Policy Administration, to name a few. I may delve off course a bit, and you'll likely get a dose of humor (at least in my mind), but I hope you'll glean at least a tidbit of usefulness from each post. Feel free to comment, as I'm a fairly conversant guy and happy to talk -- it's stopping the talking that's the hard part...

    So, back to our regularly-scheduled post, already in progress. By this time you've visited Oracle's E-Delivery site and acquired your properly-licensed version of Oracle Documaker. Wait -- you didn't find it? Understandable -- navigating the voluminous download library within Oracle can be a daunting task. It's pretty simple once you've done it a few times. Log in to the e-delivery site and accept the license terms and restrictions. Then you'll be able to select the Oracle Insurance Applications product pack and your appropriate platform. Click Go and you'll see a list of applicable products, and you'll click on Oracle Documaker Media Pack (as I went to press with this article the version is 11.4). Finally, click the Download button next to Docupresentment (again, the version at press time is 2.2 p5). This should give you a ZIP file that contains the installation packages for the Docupresentment Server and Client, cryptically named IDSServer22P05W32.exe and IDSClient22P05W32.exe.

    At this time, I'd like to take a little detour and explain that the world of Oracle, like most technical companies, is rife with acronyms. One of the reasons Skywire Software was appealing to Oracle was our use of many acronyms, including the occasional use of multiple acronyms with the same meaning. I apologize in advance and will try to point these out along the way. Here's your first sticky note to go along with that: IDS = Internet Document Server = Docupresentment.

    Once you've completed the installation, you'll have a shiny new Docupresentment server and client, and if you installed to the default location it will be living in c:\docserv. Unix users, I'm one of you! You'll find it by default in ~/docupresentment/docserv. Forging onward, the meat of this post is learning about some special configuration options. By now you've read the documentation included with the download (specifically ids_book.pdf), which goes into some detail on the rubric of the configuration file; in fact, there's even a handy utility that provides an interface to the configuration file (see Running IDSConfig in the documentation). But who wants to deal with a configuration utility when we have the tools and technology to edit the file <gasp> by hand!

    I shall now proceed with the standard Information Technology Under the Hood Disclaimer: please remember to back up any files before you make changes. I am not responsible for any havoc you may wreak! Go to your installation directory and locate your docserv.xml file. Open it in your favorite XML editor. I happen to be fond of Notepad++ with the XML Tools plugin. Almost immediately you will behold the splendor of the configuration file. Just take a moment and let that sink in. Ok -- moving on. If you reviewed the documentation, you know that inside the root <configuration> node there are multiple <section> nodes, each containing a specific group of settings. Let's take a look at <section name="DocumentServer">.

    There are a few entries I'd like to discuss. First, <entry name="StartCommand">. This should be pretty self-explanatory; it's the name of the executable that's run when you fire up Docupresentment. Immediately following that is <entry name="StartArguments">, and as you might imagine these are the arguments passed to the executable. A few things to point out: The -Dids.configuration=docserv.xml parameter specifies the name of your configuration file. The -Dlogging.configuration=logconf.xml parameter specifies the name of your logging configuration file (this uses log4j, so bone up on that before you delve here). The -Djava.endorsed.dirs=lib/endorsed parameter specifies the path where 3rd-party Java libraries can be located for use with Docupresentment. More on that in another post.

    The <entry name="Instances"> setting allows you to specify the number of instances of Docupresentment that will be started. By default this is two, and generally two instances per CPU is adequate; however, you will always need to perform load testing to determine the sweet spot based on your hardware and types of transactions. You may have many, many more instances than 2. Time for a sidebar on instances: an instance is nothing more than a separate process of Docupresentment. The Docupresentment service that you fire up with docserver.bat or docserver.sh actually starts a watchdog process, which is then responsible for starting up the actual Docupresentment processes. Each of these acts independently from the others, so if one crashes, it does not affect any others. In the case of a crashed process, the watchdog will start up another instance so the configured number of instances is always running. Bottom line: instance = Docupresentment process.

    And now, finally, to the settings which gave me pause on a not-too-long-ago implementation! Docupresentment includes a feature that watches configuration files (such as docserv.xml and logconf.xml) and will automatically restart its instances to load the changes. You can configure the time that Docupresentment waits between checks of these files using the setting <entry name="FileWatchTimeMillis">. By default the number is 12000 ms, or 12 seconds. You can save yourself a few CPU cycles by extending this time, or disable the check altogether by setting the value to 0. This may or may not be appropriate for your environment; if you have 100% uptime requirements then you probably don't want to bring down an entire set of processes just to accept a new configuration value, so it's best to leave this somewhere between 12 seconds and a few minutes. Another point to keep in mind: if you are using Documaker real-time processing under Docupresentment, the Master Resource Library (MRL) files and INI options are cached, and if you need to effect a change, you'll have to "restart" Docupresentment. Touching the docserv.xml file is an easy way to do this (other methods include using the RSS request, but that's another post).

    The next item up: <entry name="FilePurgeTimeSeconds">. You may already know that the Docupresentment system can generate many temporary files based on certain request types that are processed through the system. What you may not know is how those files are cleaned up. There are many rules in Docupresentment that cause the creation of temporary files. When these files are created, Docupresentment writes an entry into a properties file called the file cache. This file contains the name, creation date, and expiration time of each temporary file created by each instance of Docupresentment. Periodically, Docupresentment will check the file cache to determine if there are files that are past the expiration time, not unlike that block of cheese festering away in the back of my refrigerator. However, unlike my 'fridge-cleaning tendencies, Docupresentment is quick to remove files that are past their expiration time. You, my friend, have the power to control how often Docupresentment inspects the file cache. Simply set the value for <entry name="FilePurgeTimeSeconds"> to the number of seconds appropriate for your requirements and you're set. Note that file purging happens on a separate thread from normal request processing, so this shouldn't interfere with response times unless the CPU happens to be really taxed at the point of cache processing.

    Finally, after all of this, we get to the last setting I'm going to address in this post: <entry name="FilePurgeList">. The default is "filecache.properties". This establishes the root name for the Docupresentment file cache that I mentioned previously. Docupresentment creates a separate cache file for each instance based on this setting. If you have two instances, you'll see two files created: filecache.properties.1 and filecache.properties.2. Feel free to open these up and check them out.

    I hope you've enjoyed this first foray into the configuration file of Docupresentment. If you did enjoy it, feel free to drop a comment; I welcome feedback. If you have ideas for other posts you'd like to see, please do let me know. You can reach me at [email protected]. 'Til next time! ###
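    To tie the settings above together, here is a hedged sketch of how that DocumentServer section might look in docserv.xml. The entry names are the ones discussed in the post; the values shown (executable, arguments, counts) are illustrative only, so check your own installation's file rather than copying these:

        <section name="DocumentServer">
          <entry name="StartCommand">java</entry>  <!-- value illustrative -->
          <entry name="StartArguments">-Dids.configuration=docserv.xml -Dlogging.configuration=logconf.xml -Djava.endorsed.dirs=lib/endorsed</entry>
          <entry name="Instances">2</entry>                  <!-- default: 2 -->
          <entry name="FileWatchTimeMillis">12000</entry>    <!-- default: 12 s; 0 disables the check -->
          <entry name="FilePurgeTimeSeconds">60</entry>      <!-- illustrative -->
          <entry name="FilePurgeList">filecache.properties</entry>  <!-- default -->
        </section>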

    Read the article

  • API Management Solutions

    - by Mike
    I'm currently building an API and am looking for a tool that will allow me to monitor (in a GUI) and rate-limit usage. I've come across a few enterprise solutions, including:
    http://apigee.com/
    http://mashery.com/
    http://www.layer7tech.com/
    http://www.3scale.net/
    The Apigee enterprise plan is exactly what I'm looking for, but plans start at $3000/month, which is out of my price range. The other solutions are all either too expensive or do not provide the solution I'm looking for. This led me to look at some open source options, including:
    http://apiaxle.com/
    https://code.google.com/p/varnish-apikey/wiki/UsageManual
    Varnish seems like a fairly complete solution; however, I would need to build a GUI to visualise the data. My final option would be to build a solution from scratch using EventMachine and Ruby. Any advice?
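    If the from-scratch route wins out, the core of rate limiting is small. A minimal sketch in Ruby (in-memory and single-process, purely illustrative; a real deployment would typically keep the counters in a shared store such as Redis):

        # Sliding-window rate limiter: allows `limit` requests per `window` seconds per key.
        class RateLimiter
          def initialize(limit, window_seconds)
            @limit = limit
            @window = window_seconds
            @hits = Hash.new { |h, k| h[k] = [] }   # api_key => request timestamps
          end

          def allow?(api_key)
            now = Time.now.to_f
            # Drop timestamps that have aged out of the window
            @hits[api_key].delete_if { |t| t < now - @window }
            return false if @hits[api_key].size >= @limit
            @hits[api_key] << now
            true
          end
        end

        limiter = RateLimiter.new(100, 60)      # 100 requests per minute
        puts limiter.allow?("some-api-key")     # => true until the quota is spent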

    Read the article

  • BizTalk 2010 upgrade - Sunset Development/Deployment Modes

    - by Ahsan Alam
    Those who are familiar with BizTalk 2006 should know about the Development and Deployment modes in Visual Studio. Personally, I never questioned why they aren't Debug and Release, just like everything else in Visual Studio. Then everything changed in BizTalk 2010. BizTalk and Visual Studio 2010 now use Debug and Release modes by default. When we upgraded our BizTalk 2006 solution to 2010, Development and Deployment modes remained unchanged for all the projects, and the code compiled without any issues. Soon, I realized that any new projects added to the converted solutions started using Debug and Release modes. This also didn't cause any problem compiling the solution from Visual Studio; however, it broke our custom build/deployment scripts, since the scripts were trying to build in Deployment mode. So, I decided to change all projects from Development and Deployment modes to Debug and Release modes to keep them consistent. During this process I realized that Debug and Release modes are the defaults, but the configuration names are completely customizable. In the end, I figured that switching to the default Debug and Release modes during the BizTalk 2010 upgrade process is the best option.
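    This is exactly the kind of mismatch that surfaces in scripted builds. A hedged sketch of both sides (the solution file name is hypothetical): a script pinned to the old configuration name fails against projects that only define Debug/Release, and standardizing the solution fixes it:

        rem Old script: breaks for projects that only define Debug/Release
        msbuild BizTalkSolution.sln /t:Rebuild /p:Configuration=Deployment

        rem After standardizing the solution on the VS2010 defaults:
        msbuild BizTalkSolution.sln /t:Rebuild /p:Configuration=Release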

    Read the article

  • How do I repeat a texture with GLKit?

    - by Synopfab
    I am using GLKit in order to show textures in my project. The code is like this:
    -(void)setTextureImage:(UIImage *)image {
        NSError *error;
        texture = [GLKTextureLoader textureWithCGImage:image.CGImage options:nil error:&error];
        if (error) {
            NSLog(@"Error loading texture from image: %@", error);
        }
    }
    effect.texture2d0.envMode = GLKTextureEnvModeReplace;
    effect.texture2d0.target = GLKTextureTarget2D;
    effect.texture2d0.name = texture.name;
    glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
    glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, self.textureCoordinates);
    Now I want to repeat this texture on a rectangle. Is there any way to use GLKit for this behavior? I've tried to use OpenGL functions in addition to the GLKit ones, but it raises errors:
    glEnable(GL_TEXTURE_2D);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glBindTexture(GL_TEXTURE_2D, texture.name);
    2011-11-09 20:10:28.614 **[16309:207] GL ERROR: 0x0500
    2011-11-09 20:10:30.840 **[16309:207] Error loading texture from image: Error Domain=GLKTextureLoaderErrorDomain Code=8 "The operation couldn’t be completed. (GLKTextureLoaderErrorDomain error 8.)" UserInfo=0x68545c0 {GLKTextureLoaderGLErrorKey=1280, GLKTextureLoaderErrorKey=OpenGL error}
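    One plausible reading of that log (hedged, not a confirmed fix for this exact project): glEnable(GL_TEXTURE_2D) is a fixed-function call that is invalid under OpenGL ES 2.0 and raises GL_INVALID_ENUM, which is 0x0500, i.e. the 1280 that GLKTextureLoader then reports as a pre-existing GL error. Repeating is normally done with wrap parameters plus texture coordinates greater than 1.0 instead, along these lines:

        // Bind the already-loaded GLKit texture, then set wrap modes.
        // Note: OpenGL ES 2.0 only honors GL_REPEAT on power-of-two textures.
        glBindTexture(GL_TEXTURE_2D, texture.name);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        // With texture coordinates running 0.0-4.0 instead of 0.0-1.0, the
        // rectangle tiles the image 4x4. No glEnable(GL_TEXTURE_2D) in ES 2.0.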

    Read the article

  • What is the fastest MD5 sum calculator?

    - by netvope
    I tested the speed of md5sum on a few Ubuntu 8.04 servers:
    Pentium III 700 MHz: 52 MB/s
    Atom 1.6 GHz, 32-bit: 119 MB/s
    Core 2 (Yorkfield) 2.5 GHz, 32-bit: 194 MB/s
    Core 2 (Yorkfield) 2.5 GHz, 64-bit: 222 MB/s
    Then I downloaded a tool (via apt-get install) called md5deep and found that it's roughly 20% faster (as tested on the 32-bit Core 2 server). This makes me feel that the "vanilla" md5sum included in Ubuntu isn't the fastest one. Questions: Other than md5deep, are you aware of any MD5 calculators that are potentially faster than md5sum? (Answers for software from other OSes are also welcome.) If I want to compile md5sum myself, where can I download the source? What compiler options would you suggest for the Core 2 server? (Note: gcc 4.2.4 in Ubuntu 8.04 does not seem to support -march=core2.)
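    For the self-compile route, md5sum ships as part of GNU coreutils. A hedged sketch (the version number is illustrative; since -march=core2 requires gcc >= 4.3, the stock 8.04 compiler gets an -mtune fallback instead):

        # Fetch and build GNU coreutils, which provides md5sum
        wget http://ftp.gnu.org/gnu/coreutils/coreutils-6.12.tar.gz   # version illustrative
        tar xzf coreutils-6.12.tar.gz && cd coreutils-6.12
        ./configure CFLAGS="-O3 -mtune=nocona"   # gcc 4.2 lacks -march=core2
        make
        src/md5sum --version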

    Read the article

  • Using the Juniper EX3300 Switch as a router?

    - by Richard Whitman
    I have a Juniper EX3300 switch in a data center, and I have connected one of the uplink ports (ge-0/1/0) to my ISP's router. I want to configure it so that all the devices connected to ports in the same VLAN as ge-0/1/0 can access the Internet. I have done some research, but I haven't gotten anywhere really. I have configured the interface as follows:
    ge-0/1/0 {
        ether-options {
            no-auto-negotiation;
            link-mode full-duplex;
            speed {
                1g;
            }
        }
        unit 0 {
            family inet {
                address xx.xx.xx.xx/32;
            }
        }
    }
    where xx.xx.xx.xx is the "Customer Router Port IP" assigned by my ISP. When I try to commit, I get the following error:
    Interface ge-0/1/0.0 not enabled for switching
    Can someone tell me what is the right way to configure it?
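    A hedged sketch of the usual shape of this setup on EX-series switches, assuming the uplink is meant to be a routed (Layer 3) port rather than a switched one, and that the downstream hosts reach it through an L3 VLAN interface (RVI). Addresses, prefix lengths, the VLAN name, and the unit number are placeholders:

        # Make the uplink a routed port: remove the L2 family, add family inet
        delete interfaces ge-0/1/0 unit 0 family ethernet-switching
        set interfaces ge-0/1/0 unit 0 family inet address xx.xx.xx.xx/30
        # Default route toward the ISP's side of the link (placeholder address)
        set routing-options static route 0.0.0.0/0 next-hop yy.yy.yy.yy

        # Give the hosts' VLAN an L3 (RVI) interface so the switch can route for them
        set interfaces vlan unit 10 family inet address 192.168.10.1/24
        set vlans my-vlan l3-interface vlan.10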

    Read the article

  • Asus Sonic Master on Asus N53SV

    - by David Winchester
    I have read that there's a problem getting the subwoofer working on these laptops. I tried this solution: No sound from external subwoofer. But I don't know how to prove that the subwoofer is functioning properly. I use the PulseAudio equalizer and the bass seems to work fine, but when I go to the Sound Settings I can't move the bar where it says 'Subwoofer' for my sound card, so I don't know if everything is alright. If someone has a solution, I would like to know, because there isn't much information regarding this. By the way, I'm using Ubuntu 12.04, 64 bits. Thanks beforehand, Dave
    EDIT -- Possible Solution
    Well, I will post a solution that worked for me and I think it will help a lot of users. I finally got the subwoofer working. Besides adding the line
    options snd-hda-intel model=asus-mode4
    in /etc/modprobe.d/alsa-base.conf, I deleted the lines with load-module module-combine and module-combine-sink in /etc/pulse/default.pa (in the home folder there's also a ~/.pulse/default.pa file; I don't know if it has the lines too). To check that all the channels are working, I think this command does the job:
    speaker-test -c6 -l1 -twav
    I use pulseaudio-equalizer and the bass sounds very good when properly adjusted. Also, all the channels seem to work fine and the sound is even better than in Windows (where I don't have an equalizer). I pointed out the module-combine and module-combine-sink problem because one day I turned on my laptop and PulseAudio didn't work, so I deleted the lines with those names (I don't know if they came by default; maybe I added them sometime when I was trying to fix my speakers). After all this, I can now move the Subwoofer bar in the Sound configuration. Anyway, the equalizer does a great job and it improves the sound a lot.

    Read the article

  • Cannot configure NAP DCOM security.

    - by mattdwen
    I've just added a new 2K8 domain controller to an existing domain as part of a transition from 2K3. I am getting a lot of DCOM 10016 errors, indicating launch security permission problems on a specific CLSID, which turns out to be the NAP Agent service. I've dealt with this before by granting the Network Service account local launch and local activation permissions, but the security options are all disabled for this component in the Component Services snap-in. The NAP Agent service is not running, and its startup is set to Manual. Any ideas on how to remove the errors for the unneeded NAP agent?

    Read the article

  • ECMP Load Balancing in JUNOS

    - by SpacemanSpiff
    I'm trying to figure out how to use ECMP load balancing in JUNOS. I know this isn't the best way to load balance, but it's quick and dirty and gets done what I need to. In ScreenOS this was pretty easy.
    Device: SRX220
    JunOS: 10.3R2.11
    Here's what I've got so far:
    routing-options {
        static {
            route 0.0.0.0/0 {
                next-hop [ 1.1.1.1 1.1.1.2 ];
                metric 10;
            }
        }
        maximum-paths 2;
    }
    Will that do it? Tom
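    Probably not by itself, in the usual case: a static route with two next-hops installs both in the routing table, but JUNOS forwards over a single next-hop unless a load-balancing policy is exported to the forwarding table. A sketch of that standard extra step (the policy name is arbitrary; despite the per-packet keyword, modern JUNOS balances per flow):

        policy-options {
            policy-statement ecmp-lb {
                then {
                    load-balance per-packet;
                }
            }
        }
        routing-options {
            forwarding-table {
                export ecmp-lb;
            }
        }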

    Read the article

  • Unable to use pbcopy while in tmux session

    - by user62139
    Running tmux 1.4, installed from ports, on Snow Leopard, I am unable to use the built-in OS X pbcopy command. Outside of tmux:
    > echo "abc" | pbcopy
    > pbpaste   # or using ^v
    abc
    But inside of tmux:
    > echo "123" | pbcopy
    > pbpaste
    abc
    I've scoured the man page but can't find any options that might relate to this behavior. I also can't understand why tmux would mess with shell redirection. Anybody have any clues?
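    For what it's worth, the commonly cited explanation is that the tmux server ends up detached from the user's bootstrap namespace on OS X, which is what pbcopy/pbpaste talk to; it isn't shell redirection at all. The usual workaround is Chris Johnsen's reattach-to-user-namespace wrapper. A hedged sketch (the shell name is an assumption):

        # ~/.tmux.conf -- run the shell re-attached to the user namespace
        set-option -g default-command "reattach-to-user-namespace -l bash"

        # Or wrap individual invocations:
        # echo "abc" | reattach-to-user-namespace pbcopy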

    Read the article

  • How to penetrate the QA industry after layoffs, next steps...

    - by Erik
    Briefly, my background is in manual black-box testing of websites and applications within the Agile/waterfall context. Over the past four years I was a member of two web development firms' small QA teams, dedicated to testing the deployment of websites for national/international nonprofits, governmental organizations, and for-profit businesses, to name a few:
    - Brookings Institution
    - Senate
    - Tyco Electronics
    - Blue Cross/Blue Shield
    - National Geographic
    - Discovery Channel
    I have a very strong understanding of:
    - the SDLC
    - the STLC of bugs and website deployment/development
    - Use Case & Test Case development
    In March of this year my last firm downsized, and I lost my job as a QA tester. I have been networking and doing a very detailed job search, but have had a very difficult time getting my next job within the QA industry, even with my background as a manual black-box QA tester in the website development context. My direct question to all of you: what are some ways I can be more competitive and get hired? Options that could get me competitive:
    Should I go back to school and learn some more 'hard' skills in website development and client-side technologies, e.g.:
    - HTML
    - CSS
    - JavaScript
    Learn programming:
    - PHP
    - C#
    - Ruby
    - SQL
    - Python
    - Perl
    - ??
    Get certified as a QA tester? There are countless programs to become a Certified Tester. Most, if not all, jobs being advertised now require automated testing experience, in:
    - QTP
    - LoadRunner
    - Selenium
    - etc.
    Should I learn automated testing skills via a paid course, or teach myself? Learn scripting languages to understand the automated testing process better? Become a certified "Project Management Professional" (PMP) to prove to hiring managers that I 'get' the project development life cycle? At the end of the day I need to be competitive and get hired as a QA tester, and I want to build upon my skills within the QA web development field. How should I do this without reinventing the wheel? Any help in this regard would be fabulous. Thanks! .erik

    Read the article

  • bind (hardlink) one directory to many places

    - by PoltoS
    I need to "bind" one directory to many chrooted places. I know that I can do "mount -o bind", but this requires special processing on startup each time (run the mount). Is there a way to do it on the filesystem directly? My fs is ext4 and it seems not to support hardlinks to directories. Hardlinking all files inside is not an option too. Is thee a way to enable hardlinks to directories in ext4? Or any other options are avilable?

    Read the article

  • Avoid Memory Leaks in SharePoint2010 Development

    - by ybbest
    When you develop a SharePoint solution using code, you need to dispose of SPWeb objects appropriately to avoid memory leaks. The general guidelines for this are:
    Dispose: OpenWeb; Enumerating Webs or AllWebs
    Do not dispose: ParentWeb; RootWeb; SPWeb from SPContext
    There are more rules than the ones listed above, and as a smart SharePoint developer you do not have to memorize all the rules. There is a tool called SharePoint Dispose Checker which can help you find potential memory leaks. To use SPDisposeCheck in your solution, you need to download the tool from the MSDN Code Gallery and install it on your development machine, as follows:
    1. Run the installer with elevated privileges.
    2. Accept the agreement and click Next.
    3. Select those two options and click Next.
    4. Select Everyone and click Next.
    5. Go to Tools -> SharePoint Dispose Check to configure SPDisposeCheck.
    6. You can change Treat problems as Errors to Warnings.
    7. After clicking Save, you are all set to use the tool. Recompiling my project, I get the result below.
    References: SharePoint 2007/2010 "Do Not Dispose Guidance" + SPDisposeCheck
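    A short C# sketch of both sides of that guideline (the site URL is hypothetical): objects you open yourself get disposed, ideally via using, while objects the framework owns are left alone:

        using System;
        using Microsoft.SharePoint;

        class DisposeDemo
        {
            static void Main()
            {
                using (SPSite site = new SPSite("http://server/sites/demo")) // URL hypothetical
                using (SPWeb web = site.OpenWeb())   // you opened it, so you dispose it
                {
                    Console.WriteLine(web.Title);

                    // Webs obtained by enumerating Webs/AllWebs must each be disposed:
                    foreach (SPWeb sub in web.Webs)
                    {
                        try { Console.WriteLine(sub.Title); }
                        finally { sub.Dispose(); }
                    }

                    SPWeb rootWeb = site.RootWeb;   // do NOT dispose: owned by the SPSite
                }

                // Inside SharePoint page/web-part code, the context web is also not yours:
                // SPWeb contextWeb = SPContext.Current.Web;   // do NOT dispose
            }
        }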

    Read the article
