Search Results

Search found 12780 results on 512 pages for 'webmaster tools'.

  • Compiling and Running Handbrake in Ubuntu

    Packt Publishing: "Handbrake is considered the Swiss Army knife of video conversion tools. Running on the three major operating system platforms, Handbrake can open a huge variety of formats, including common ones that others can't handle (like the titles in the MPEG TS structure of a DVD)."
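
    As a point of reference, converting a DVD's main title from the command line looks roughly like the sketch below. This is a hedged illustration only: it assumes the HandBrakeCLI binary built in the article is on your PATH, and the input path, output file, and preset name are placeholders rather than values from the article.

        # Illustrative only: convert the first title of a DVD structure to an MP4 file.
        # Input path, output path, and preset name are placeholders.
        HandBrakeCLI -i /path/to/VIDEO_TS -o ~/Videos/movie.mp4 --preset "Normal"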

    Read the article

  • Why is Mac OS X referred to as the developer's OS? [closed]

    - by dbramhall
    Possible Duplicate: Why do programmers use or recommend Mac OS X? I have heard people referring to Mac OS X as the 'developer's operating system' and I was wondering why. I have been using Mac OS X for years, but I only see it as a developer's OS if the developer tools are installed; without them it's not really a developer's OS. Also, the Terminal is obviously a huge plus for developers, but is that it?

    Read the article

  • APress Deal of the Day 14/August/2014 - Software Exorcism

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/08/14/apress-deal-of-the-day-14august2014---software-exorcism.aspx Today's $10 Deal of the Day from APress at http://www.apress.com/9781430251071 is Software Exorcism! "Software Exorcism discusses tools and techniques for effective and aggressive debugging, gives optimization strategies that appeal to all levels of programmers, and presents in-depth treatments of technical issues with honest assessments."

    Read the article

  • Improving Manageability of Virtual Environments

    - by Jeff Victor
    Boot Environments for Solaris 10 Branded Zones

    Until recently, Solaris 10 Branded Zones on Solaris 11 suffered one notable regression: Live Upgrade did not work. The individual packaging and patching tools work correctly, but the ability to upgrade Solaris while the production workload continued running did not exist. A recent Solaris 11 SRU (Solaris 11.1 SRU 6.4) restored most of that functionality, although with a slightly different concept, different commands, and without all of the feature details. This new method gives you the ability to create and manage multiple boot environments (BEs) for a Solaris 10 Branded Zone, to modify the active or any inactive BE, and to do so while the production workload continues to run.

    Background

    In case you are new to Solaris: Solaris includes a set of features that enables you to create a bootable Solaris image, called a Boot Environment (BE). This newly created image can be modified while the original BE is still running your workload(s). There are many benefits, including improved uptime and the ability to reboot into (or downgrade to) an older BE if a newer one has a problem. In Solaris 10 this set of features was named Live Upgrade. Solaris 11 applies the same basic concepts to the new packaging system (IPS), but there isn't a specific name for the feature set; the features are simply part of IPS. Solaris 11 Boot Environments are not discussed in this blog entry.

    Although a Solaris 10 system can have multiple BEs, until recently a Solaris 10 Branded Zone (BZ) in a Solaris 11 system did not have this ability. This limitation was addressed recently, and that enhancement is the subject of this blog entry.

    This new implementation uses two concepts. The first is the use of a ZFS clone for each BE. This makes it very easy to create a BE, or many BEs. This is a distinct advantage over the Live Upgrade feature set in Solaris 10, which had a practical limitation of two BEs on a system when using UFS. The second new concept is a very simple mechanism to indicate the BE that should be booted: a ZFS property. The new ZFS property is named com.oracle.zones.solaris10:activebe (isn't that creative?). It's important to note that the property is inherited from the original BE's file system by any BEs you create. In other words, all BEs in one zone have the same value for that property. When the (Solaris 11) global zone boots the Solaris 10 BZ, it boots the BE whose name is stored in the activebe property.

    Here is a quick summary of the actions you can use to manage these BEs:
    - To create a BE: create a ZFS clone of the zone's root dataset.
    - To activate a BE: set the ZFS property of the root dataset to indicate the BE.
    - To add a package or patch to an inactive BE: mount the inactive BE, add packages or patches to it, then unmount the inactive BE.
    - To list the available BEs: use the "zfs list" command.
    - To destroy a BE: use the "zfs destroy" command.

    Preparation

    Before you can use the new features, you will need a Solaris 10 BZ on a Solaris 11 system. You can use these three steps - on a real Solaris 11.1 server or in a VirtualBox guest running Solaris 11.1 - to create a Solaris 10 BZ. The Solaris 11.1 environment must be at SRU 6.4 or newer.

    Create a flash archive on the Solaris 10 system:

        s10# flarcreate -n s10-system /net/zones/archives/s10-system.flar

    Configure the Solaris 10 BZ on the Solaris 11 system:

        s11# zonecfg -z s10z
        Use 'create' to begin configuring a new zone.
        zonecfg:s10z> create -t SYSsolaris10
        zonecfg:s10z> set zonepath=/zones/s10z
        zonecfg:s10z> exit
        s11# zoneadm list -cv
        ID NAME    STATUS      PATH         BRAND      IP
         0 global  running     /            solaris    shared
         - s10z    configured  /zones/s10z  solaris10  excl

    Install the zone from the flash archive:

        s11# zoneadm -z s10z install -a /net/zones/archives/s10-system.flar -p

    You can find more information about the migration of Solaris 10 environments to Solaris 10 Branded Zones in the documentation. The rest of this blog entry demonstrates the commands you can use to accomplish the aforementioned actions related to BEs.

    New features in action

    Note that the demonstration of the commands occurs in the Solaris 10 BZ, as indicated by the shell prompt "s10z#". Many of these commands can be performed in the global zone instead, if you prefer. If you perform them in the global zone, you must change the ZFS file system names.

    Create

    The only complicated action is the creation of a BE. In the Solaris 10 BZ, create a new "boot environment" - a ZFS clone. You can assign any name to the final portion of the clone's name, as long as it meets the requirements for a ZFS file system name.

        s10z# zfs snapshot rpool/ROOT/zbe-0@snap
        s10z# zfs clone -o mountpoint=/ -o canmount=noauto rpool/ROOT/zbe-0@snap rpool/ROOT/newBE
        cannot mount 'rpool/ROOT/newBE' on '/': directory is not empty
        filesystem successfully created, but not mounted

    You can safely ignore that message: we already know that / is not empty! We have merely told ZFS that the default mountpoint for the clone is the root directory.

    List the available BEs and active BE

    Because each BE is represented by a clone of the rpool/ROOT dataset, listing the BEs is as simple as listing the clones.

        s10z# zfs list -r rpool/ROOT
        NAME              USED  AVAIL  REFER  MOUNTPOINT
        rpool/ROOT        3.55G 42.9G  31K    legacy
        rpool/ROOT/zbe-0  1K    42.9G  3.55G  /
        rpool/ROOT/newBE  3.55G 42.9G  3.55G  /

    The output shows that two BEs exist. Their names are "zbe-0" and "newBE". You can tell Solaris that one particular BE should be used when the zone next boots by using a ZFS property. Its name is com.oracle.zones.solaris10:activebe. The value of that property is the name of the clone that contains the BE that should be booted.

        s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
        NAME        PROPERTY                              VALUE  SOURCE
        rpool/ROOT  com.oracle.zones.solaris10:activebe   zbe-0  local

    Change the active BE

    When you want to change the BE that will be booted next time, you can just change the activebe property on the rpool/ROOT dataset.

        s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
        NAME        PROPERTY                              VALUE  SOURCE
        rpool/ROOT  com.oracle.zones.solaris10:activebe   zbe-0  local
        s10z# zfs set com.oracle.zones.solaris10:activebe=newBE rpool/ROOT
        s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
        NAME        PROPERTY                              VALUE  SOURCE
        rpool/ROOT  com.oracle.zones.solaris10:activebe   newBE  local
        s10z# shutdown -y -g0 -i6

    After the zone has rebooted:

        s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
        rpool/ROOT  com.oracle.zones.solaris10:activebe   newBE  local
        s10z# zfs mount
        rpool/ROOT/newBE    /
        rpool/export        /export
        rpool/export/home   /export/home
        rpool               /rpool

    Mount the original BE to see that it's still there.

        s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0
        s10z# ls /mnt
        Desktop  Documents  S10Flar  TT_DB  bin  boot  cdrom  dev  etc  export  export.backup.20130607T214951Z  home  kernel  lib  lost+found  mnt  net  opt  platform  proc  rpool  sbin  system  tmp  usr  var

    Patch an inactive BE

    At this point, you can modify the original BE. If you would prefer to modify the new BE instead, you can restore the original value of the activebe property and reboot, and then mount the new BE on /mnt (or another empty directory) and modify it. Let's mount the original BE so we can modify it. (The first command is only needed if you haven't already mounted that BE.)

        s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0
        s10z# patchadd -R /mnt -M /var/sadm/spool 104945-02

    Note that the typical usage will be:
    - Create a BE
    - Mount the new (inactive) BE
    - Use the package and patch tools to update the new BE
    - Unmount the new BE
    - Reboot

    Delete an inactive BE

    ZFS clones are children of their parent file systems. In order to destroy the parent, you must first "promote" the child. This reverses the parent-child relationship. (For more information on this, see the documentation.) The original rpool/ROOT file system is the parent of the clones that you create as BEs. In order to destroy an earlier BE that is the parent of other BEs, you must first promote one of the child BEs to be the ZFS parent. Only then can you destroy the original BE. Fortunately, this is easier to do than to explain:

        s10z# zfs promote rpool/ROOT/newBE
        s10z# zfs destroy rpool/ROOT/zbe-0
        s10z# zfs list -r rpool/ROOT
        NAME              USED  AVAIL  REFER  MOUNTPOINT
        rpool/ROOT        3.56G 269G   31K    legacy
        rpool/ROOT/newBE  3.56G 269G   3.55G  /

    Documentation

    This feature is so new, it is not yet described in the Solaris 11 documentation. However, MOS note 1558773.1 offers some details.

    Conclusion

    With this new feature, you can add packages and patches to boot environments of a Solaris 10 Branded Zone. This ability improves the manageability of these zones, and makes their use more practical. It also means that you can use the existing P2V tools with earlier Solaris 10 updates, and modify the environments after they become Solaris 10 Branded Zones.
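
    For readers who want the whole cycle in one place, here is a minimal sketch that strings together the exact commands shown above, run inside the Solaris 10 Branded Zone. The BE name "newBE", the snapshot name, and the patch ID are placeholders, and error checking is omitted.

        # Create a new BE as a ZFS clone of the current root dataset
        zfs snapshot rpool/ROOT/zbe-0@snap
        zfs clone -o mountpoint=/ -o canmount=noauto rpool/ROOT/zbe-0@snap rpool/ROOT/newBE
        # Mount the inactive BE, patch it, then unmount it again
        zfs mount -o mountpoint=/mnt rpool/ROOT/newBE
        patchadd -R /mnt -M /var/sadm/spool 104945-02
        zfs unmount rpool/ROOT/newBE
        # Point the zone at the new BE and reboot into it
        zfs set com.oracle.zones.solaris10:activebe=newBE rpool/ROOT
        shutdown -y -g0 -i6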

    Read the article

  • Cloud Computing = Elasticity * Availability

    - by Herve Roggero
    What is cloud computing? Is hosting the same thing as cloud computing? Are you running a cloud if you already use virtual machines? What is the difference between Infrastructure as a Service (IaaS) and a cloud provider? And the list goes on… these questions keep coming up, and all try to fundamentally explain what "cloud" means relative to other concepts. At the risk of oversimplification, answering these questions becomes simpler once you understand the two primary foundations of cloud computing: elasticity and availability.

    Elasticity

    The basic value proposition of cloud computing is to pay as you go, and to pay for what you use. This implies that an application can expand and contract on demand, across all its tiers (presentation layer, services, database, security…). It also implies that application components can grow independently from each other. So if you need more storage for your database, you should be able to grow that tier without affecting, reconfiguring or changing the other tiers. Basically, cloud applications behave like a sponge; when you add water to a sponge, it grows in size; in the application world, the more customers you add, the more it grows. Pure IaaS providers will provide certain benefits, specifically in terms of operating costs, but an IaaS provider will not help you in making your applications elastic; neither will virtual machines. The smallest elasticity unit of an IaaS provider and a virtual machine environment is a server (physical or virtual). While adding servers in a datacenter helps in achieving scale, it is hardly enough. The application has yet to use this hardware. If the process of adding computing resources is not transparent to the application, the application is not elastic. As you can see from the above description, designing for the cloud is not about more servers; it is about designing an application for elasticity regardless of the underlying server farm.

    Availability

    The fact of the matter is that making applications highly available is hard. It requires highly specialized tools and trained staff. On top of that, it's expensive. Many companies are required to run multiple data centers due to high availability requirements. In some organizations, some data centers are simply on standby, waiting to be used in case of a failover. Other organizations are able to achieve a certain level of success with active/active data centers, in which all available data centers serve incoming user requests. While achieving high availability for services is relatively simple, establishing a highly available database farm is far more complex. In fact it is so complex that many companies run yearly tests to validate failover procedures. To a certain degree, certain IaaS providers can assist with complex disaster recovery planning and with setting up data centers that can achieve successful failover. However, the burden is still on the corporation to manage and maintain such an environment, including regular hardware and software upgrades. Cloud computing, on the other hand, removes most of the disaster recovery requirements by hiding many of the underlying complexities.

    Cloud Providers

    A cloud provider is an infrastructure provider offering additional tools to achieve application elasticity and availability that are not usually available on-premise. For example, Microsoft Azure provides a simple configuration screen that makes it possible to run 1 or 100 web sites by clicking a button or two on a screen (simplifying provisioning), and soon SQL Azure will offer Data Federation to allow database sharding (which lets you scale the database tier seamlessly and automatically). Other cloud providers offer certain features that are not available on-premise as well, such as Amazon S3 (Simple Storage Service), which gives you virtually unlimited storage capabilities for simple data stores and is somewhat equivalent to the Microsoft Azure Table offering (a server-independent data storage model). Unlike IaaS providers, cloud providers give you the necessary tools to adopt elasticity as part of your application architecture. Some cloud providers offer built-in high availability that gets you out of the business of configuring clustered solutions or running multiple data centers. Some cloud providers will give you more control (which puts some of that burden back on the customers' shoulders) and others will tend to make high availability totally transparent. For example, SQL Azure provides high availability automatically, which would be very difficult (and very costly) to achieve on premise. Keep in mind that each cloud provider has its strengths and weaknesses; some are better at achieving transparent scalability and server independence than others.

    Not for Everyone

    Note, however, that it is up to you to leverage the elasticity capabilities of a cloud provider, as discussed previously; if you build a website that does not need to scale, for which elasticity is not important, then you can use a traditional host provider unless you also need high availability. Leveraging the technologies of cloud providers can be difficult and can become a journey for companies that built their solutions in a scale-up fashion. Cloud computing promises to address cost containment and scalability of applications with built-in high availability. If your application does not need to scale or you do not need high availability, then cloud computing may not be for you. In fact, you may pay a premium to run your applications with cloud providers due to the underlying technologies built specifically for scalability and availability requirements. And as such, the cloud is not for everyone.

    Consistent Customer Experience, Predictable Cost

    With all its complexities, buzz and foggy definition, cloud computing boils down to a simple objective: a consistent customer experience at a predictable cost. The objective of a cloud solution is to provide the same user experience to your last customer as to your first, while keeping your operating costs directly proportional to the number of customers you have. Making your applications elastic and highly available across all their tiers, with as much automation as possible, achieves the first objective of a consistent customer experience. And the ability to expand and contract the infrastructure footprint of your application dynamically achieves the cost containment objective.

    Herve Roggero is a SQL Azure MVP and co-author of Pro SQL Azure (APress). He is the co-founder of Blue Syntax Consulting (www.bluesyntax.net), a company focusing on cloud computing technologies and helping customers understand and adopt cloud computing. For more information contact herve at hroggero @ bluesyntax.net.

    Read the article

  • Talend vs. SSIS: A Simple Performance Comparison

    With all of the ETL tools in the marketplace, which one is best? Jeff Singleton brings us a simple performance comparison pitting SSIS against the open source powerhouse Talend.

    Read the article

  • Moving sprites on a graph in libGDX

    - by nosferat
    In my game I'd like to move sprites on a fixed path. Until this point I was trying to stick with the tools already provided by libGDX, like the Tiled map renderer classes so I'm looking for a solution nearly as convenient as that, e.g. I'd like to avoid creating the adjacency matrix by hand. Tiled has the functionality to add objects to the map but I'm not sure if I can use it for this purpose. Any idea?

    Read the article

  • Interviews: Going Beyond the Technical Quiz

    - by Tony Davis
    All developers will be familiar with the basic format of a technical interview. After a bout of CV-trawling to gauge basic experience, strengths and weaknesses, the interview turns technical. The whiteboard takes center stage and the challenge is set to design a function or query, or solve what on the face of it might seem a disarmingly simple programming puzzle. Most developers will have experienced those few panic-stricken moments, when one’s mind goes as blank as the whiteboard, before un-popping the marker pen, and hopefully one’s mental functions, to work through the problem. It is a way to probe the candidate’s knowledge of basic programming structures and techniques and to challenge their critical thinking. However, these challenges or puzzles, often devised by some of the smartest brains in the development team, have a tendency to become unnecessarily ‘tricksy’. They often seem somewhat academic in nature. While the candidate straight out of IT school might breeze through the construction of a Markov chain, a candidate with bags of practical experience but less in the way of formal training could become nonplussed. Also, a whiteboard and a marker pen make up only a very small part of the toolkit that a programmer will use in everyday work. I remember vividly my first job interview, for a position as technical editor. It went well, but after the usual CV grilling and technical questions, I was only halfway there. Later, they sat me alongside a team of editors, in front of a computer loaded with MS Word and a copy of SQL Server Query Analyzer, and my task was to edit a real chapter for a real SQL Server book that they planned to publish, including validating and testing all the code. It was a tough challenge, but I came away with a sound knowledge of the sort of work I’d do, and its context. It makes perfect sense, yet my impression is that many organizations don’t do this. Indeed, it is only relatively recently that Red Gate started to move over to this model for developer interviews. Now, instead of, or perhaps in addition to, the whiteboard challenges, the candidate can expect to sit with their prospective team, in front of Visual Studio, loaded with all the useful tools in the developer’s kit (ReSharper and so on) and asked to, for example, analyze and improve a real piece of software. The same principles should apply when interviewing for a database position. In addition to the usual questions challenging the candidate’s knowledge of such things as b-trees, object permissions, database recovery models, and so on, sit the candidate down with the other database developers or DBAs. Arm them with a copy of Management Studio, and a few other tools, then challenge them to discover the flaws in a stored procedure, and improve its performance. Or present them with a corrupt database and ask them to get the database back online, and discover the cause of the corruption.

    Read the article

  • Excel Solver vs Solver Foundation

    - by JoshReuben
    I recently read a book - the Excel Scientific and Engineering Cookbook (http://www.amazon.com/Scientific-Engineering-Cookbook-Cookbooks-OReilly/dp/0596008791/ref=sr_1_1?ie=UTF8&s=books&qid=1296593374&sr=8-1). The two main tools that this book leveraged were the Data Analysis Pack and Excel Solver. I had previously been acquainted with Microsoft Solver Foundation - a full-fledged API for solving optimization problems that goes beyond being a mere Excel plugin: it exposes a C# programmatic interface for in-process integration and a web service interface for out-of-process integration. Were they the same? Apparently not! There are two different solver frameworks for Excel: http://www.solver.com/index.html and http://www.solverfoundation.com/. I contacted both vendors to get their perspectives. Here's what the Excel Solver guys had to say: "The Solver Foundation requires you to learn and use a very specific modeling language (OML). The Excel Solver allows you to formulate your optimization problems without learning any new language, simply by entering the formulas into cells on the Excel spreadsheet, something that nearly everyone is already familiar with doing. The Excel Solver also allows you to seamlessly upgrade to products that combine Monte Carlo simulation capabilities (our Risk Solver Premium and Risk Solver Platform products), which allow you to include uncertainty in your models when appropriate. Our advanced Excel Solver products also have a number of built-in reporting tools for advanced analysis of your model and its results." And here's what the Microsoft Solver Foundation guys had to say: "With the release of Solver Foundation 3.0, Solver Foundation has the same kinds of solvers (plus a few more) as what is found in Excel Solver. I think there are two main differences: 1. Problems are described differently. In Excel Solver the goals and constraints are specified inside the spreadsheet, in formulas. In Solver Foundation they are described either in .Net code that uses the Solver Foundation Services API, or using the OML modeling language in Excel. 2. Solver Foundation's primary strength is in solving large linear, mixed integer, and constraint models. That is, models that contain arbitrary nonlinear functions (such as trig functions, IF(), powers, etc.) are handled a bit better by the Excel Solver at this point."

    Read the article

  • Google Accounts

    - by Alex
    Hi all, I need to set up a bunch of accounts (~50) with Google which will later be hooked up to Analytics and Webmaster Tools, and which I will access via their APIs. The problem is that Google will stop me from creating accounts at a certain point because it thinks I'm spamming it. I've tried bypassing the issue by proxying through Tor, but that didn't work (I suspect the Tor nodes are already abused and blacklisted). I also drove around town with my iPhone hopping towers trying to set up accounts. That didn't work either. Phone verification also doesn't work because my phone is already linked to too many accounts... So, I figure it's time to ask. How can I continue to set up Google accounts (NOT GMail...) without being considered a spammer? If there's a service from Google to whitelist my IP, even for a fee, I would be glad to sign up for it. Any suggestions will be appreciated!

    Read the article

  • VHOST not working in Apache

    - by Starx
    I got a LAMP server working in Ubuntu 11.04. The problem is that the websites have to be enabled and disabled from the terminal, and all of them have to be accessed through http://localhost, which is not very efficient. So I created a vhost, using some tutorials off the net. Here is the configuration for it:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName site.com
            ServerAlias www.site.com
            DocumentRoot /home/starx/public_html/site/public
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/starx/public_html/site/public>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/site-error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/site-access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    Still, I can't access the page at http://site.com, but if I access it using http://localhost it works. I have disabled all other sites, including the default, and have enabled just the one site, i.e. site. How do I fix this?

    Read the article

  • MySQL and Hadoop Integration - Unlocking New Insight

    - by Mat Keep
    “Big Data” offers the potential for organizations to revolutionize their operations. With the volume of business data doubling every 1.2 years, analysts and business users are discovering very real benefits when integrating and analyzing data from multiple sources, enabling deeper insight into their customers, partners, and business processes. As the world’s most popular open source database, and the most deployed database in the web and cloud, MySQL is a key component of many big data platforms, with Hadoop vendors estimating that 80% of deployments are integrated with MySQL. The new Guide to MySQL and Hadoop presents the tools enabling integration between the two data platforms, supporting the data lifecycle from acquisition and organisation to analysis and visualisation / decision. The Guide details each of these stages and the technologies supporting them:

    Acquire: Through new NoSQL APIs, MySQL is able to ingest high volume, high velocity data without sacrificing ACID guarantees, thereby ensuring data quality. Real-time analytics can also be run against newly acquired data, enabling immediate business insight before data is loaded into Hadoop. In addition, sensitive data can be pre-processed - for example, healthcare or financial services records can be anonymized - before transfer to Hadoop.

    Organize: Data is transferred from MySQL tables to Hadoop using Apache Sqoop. With the MySQL Binlog (Binary Log) API, users can also invoke real-time change data capture processes to stream updates to HDFS.

    Analyze: Multi-structured data ingested from multiple sources is consolidated and processed within the Hadoop platform.

    Decide: The results of the analysis are loaded back to MySQL via Apache Sqoop, where they inform real-time operational processes or provide source data for BI analytics tools.

    So how are companies taking advantage of this today? As an example, on-line retailers can use big data from their web properties to better understand site visitors’ activities, such as paths through the site, pages viewed, and comments posted. This knowledge can be combined with user profiles and purchasing history to gain a better understanding of customers, and the delivery of highly targeted offers. Of course, it is not just on the web that big data can make a difference. Every business activity can benefit, with other common use cases including:
    - Sentiment analysis
    - Marketing campaign analysis
    - Customer churn modeling
    - Fraud detection
    - Research and development
    - Risk modeling
    - And more.

    As the guide discusses, Big Data is promising a significant transformation of the way organizations leverage data to run their businesses. MySQL can be seamlessly integrated within a Big Data lifecycle, enabling the unification of multi-structured data into common data platforms, taking advantage of all new data sources and yielding more insight than was ever previously imaginable. Download the guide to MySQL and Hadoop integration to learn more. I'd also be interested in hearing about how you are integrating MySQL with Hadoop today, and your requirements for the future, so please use the comments on this blog to share your insights.
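
    As a concrete illustration of the "Organize" step above, here is a hedged sketch of a Sqoop import. The JDBC URL, credentials, table name, and HDFS path are placeholders rather than values from the guide.

        # Illustrative only: bulk-copy one MySQL table into HDFS with Apache Sqoop.
        # Host, database, credentials, table and HDFS path are placeholders;
        # -P prompts interactively for the database password.
        sqoop import \
          --connect jdbc:mysql://mysql-host/webstore \
          --username etl_user -P \
          --table orders \
          --target-dir /data/raw/orders \
          --num-mappers 4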

    Read the article

  • Week in Geek: New Faster IonMonkey JavaScript Engine Added to Firefox Nightly Builds

    - by Asian Angel
    Our latest edition of WIG is filled with news link coverage on topics such as the next version of Photoshop will not support Windows XP, Microsoft has found preloaded malware on PC production lines in China, Internet Explorer 8 users will lose browser support for Google Apps in November, and more.

    Read the article

  • How do I rate limit Google's crawl of my class C IP block?

    - by Zak
    I have several sites in a class C network that all get crawled by Google on a pretty regular basis. Normally this is fine. However, when Google starts crawling all the sites at the same time, the small set of servers that back this IP block can take a pretty big hit on load. With Google Webmaster Tools, you can rate limit the Googlebot on a given domain, but I haven't found a way to limit the bot across an IP network yet. Does anyone have experience with this? How did you fix it?

    Read the article

  • Where do I find what in the OPN?

    - by A&C Redaktion
    Oracle partners have access to a wide variety of tools, resources, and services that make day-to-day work easier and provide a significant competitive advantage. For our new partners, and perhaps also for some long-standing ones, here is a short guide to the most important offerings.

    Which resources can I use at which level of specialization? An English-language overview of all offerings in the areas of enablement, development, marketing, sales, and support can be found here under "OPN Benefits Table Details".

    Where can I learn about and train on specific Oracle products? The Knowledge Zones are solution-oriented websites that serve as the entry point to specialization. There you will find detailed information on developing, selling, and implementing Oracle solutions - broken down by database, middleware, applications, server and storage systems, as well as by industry. Depending on your interests and specialization, you can join particular Knowledge Zones.

    How can customers find me and my services as an Oracle partner and get in touch? That is what the Solutions Catalog is for: this platform is one of the most important tools for matching customers with the Oracle partner that is ideal for them. Every specialized partner worldwide has a search-engine-optimized profile in the Solutions Catalog, which the partner maintains and expands through the OPN. Customers filter the offerings by region and desired solution and make contact directly. Visits to the website are evaluated and can be used for individual lead generation.

    How can I use my Oracle specialization to win new customers? In the marketing area of the OPN portal you will find various options for advertising and demand generation. A few examples: The German-language marketing kits provide advertising material, templates, training material, and guidance for partner marketing. They help you run your own campaigns, e.g. mailings or telemarketing on individual topics such as, currently, Exadata, and drive demand generation. With the partner logos you can advertise on your own website that - and how closely - you work with Oracle. There are logos for every partner level as well as for every single certification in the Oracle universe. The Partner Event Publishing Service helps you present your events globally and publicly on the Oracle website. Here is how it works: simply download the Excel form, fill it in in German or English, and send it together with your logo to the Event Publishing team. Your event page will be created and will be searchable on the Oracle event portal. You receive the link for your promotion, and you have opened up a new circle of potential attendees.

    Read the article

  • Security risks posed by specifying technologies used

    - by SabreWolfy
    I am developing online tools for non-commercial use, which are hosted on dedicated hardware. I would like to include logos indicating the technologies I used (Apache or Python, for example) at the bottom of the page. What are the security risks/implications, if any, of "advertising" this information? Is it better not to reveal that the web server is Apache, and that I used Python and jQuery, for example?

    Read the article

  • Dell Studio 1737 Overheating

    - by Sean
    I am using a Dell Studio 1737 laptop. I have been running Linux for a very long time, and have run Windows recently as well. I upgraded to the 10.10 distribution and since that distro, it seems that for some reason all Linuxes want to push my laptop to extremes. I have recently upgraded to Ubuntu 12.04 since I heard that it contains kernel fixes for overheating issues. 12.04 will actually eventually cool the system, but only after the fans run to the point where it sounds like a jet aircraft taking off and the laptop makes my hands sweat. In trying to combat the heat problems I have done the following:

    I installed the proprietary driver for my ATI Mobility HD 3600. I have tried both the one in Additional Drivers and ATI's latest version. If I don't install this, my laptop will overheat and shut off in minutes. Both seem to perform similarly, but the heat problem remains.

    I have tried limiting the CPU by installing the CPUFreq indicator. This does help keep the machine from shutting off, but the heat is still uncomfortable to be around. I usually run in power saver mode or run the CPU at 1.6 GHz just to err on the side of safety.

    I ran sensors-detect and here are the results:

        sean@sean-Studio-1737:~$ sudo sensors-detect
        # sensors-detect revision 5984 (2011-07-10 21:22:53 +0200)
        # System: Dell Inc. Studio 1737 (laptop)
        # Board: Dell Inc. 0F237N
        This program will help you determine which kernel modules you need to load to use lm_sensors most effectively. It is generally safe and recommended to accept the default answers to all questions, unless you know what you're doing.
        Some south bridges, CPUs or memory controllers contain embedded sensors. Do you want to scan for them? This is totally safe. (YES/no): y
        Module cpuid loaded successfully.
        Silicon Integrated Systems SIS5595... No
        VIA VT82C686 Integrated Sensors... No
        VIA VT8231 Integrated Sensors... No
        AMD K8 thermal sensors... No
        AMD Family 10h thermal sensors... No
        AMD Family 11h thermal sensors... No
        AMD Family 12h and 14h thermal sensors... No
        AMD Family 15h thermal sensors... No
        AMD Family 15h power sensors... No
        Intel digital thermal sensor... Success! (driver `coretemp')
        Intel AMB FB-DIMM thermal sensor... No
        VIA C7 thermal sensor... No
        VIA Nano thermal sensor... No
        Some Super I/O chips contain embedded sensors. We have to write to standard I/O ports to probe them. This is usually safe. Do you want to scan for Super I/O sensors? (YES/no): y
        Probing for Super-I/O at 0x2e/0x2f
        Trying family `National Semiconductor/ITE'... No
        Trying family `SMSC'... No
        Trying family `VIA/Winbond/Nuvoton/Fintek'... No
        Trying family `ITE'... No
        Probing for Super-I/O at 0x4e/0x4f
        Trying family `National Semiconductor/ITE'... Yes
        Found `ITE IT8512E/F/G Super IO' (but not activated)
        Some hardware monitoring chips are accessible through the ISA I/O ports. We have to write to arbitrary I/O ports to probe them. This is usually safe though. Yes, you do have ISA I/O ports even if you do not have any ISA slots! Do you want to scan the ISA I/O ports? (YES/no): y
        Probing for `National Semiconductor LM78' at 0x290... No
        Probing for `National Semiconductor LM79' at 0x290... No
        Probing for `Winbond W83781D' at 0x290... No
        Probing for `Winbond W83782D' at 0x290... No
        Lastly, we can probe the I2C/SMBus adapters for connected hardware monitoring devices. This is the most risky part, and while it works reasonably well on most systems, it has been reported to cause trouble on some systems. Do you want to probe the I2C/SMBus adapters now? (YES/no): y
        Using driver `i2c-i801' for device 0000:00:1f.3: Intel ICH9
        Module i2c-i801 loaded successfully.
        Module i2c-dev loaded successfully.
        Now follows a summary of the probes I have just done. Just press ENTER to continue:
        Driver `coretemp':
          * Chip `Intel digital thermal sensor' (confidence: 9)
        To load everything that is needed, add this to /etc/modules:
        #----cut here----
        # Chip drivers
        coretemp
        #----cut here----
        If you have some drivers built into your kernel, the list above will contain too many modules. Skip the appropriate ones!
        Do you want to add these lines automatically to /etc/modules? (yes/NO) y
        Successful!
        Monitoring programs won't work until the needed modules are loaded. You may want to run 'service module-init-tools start' to load them.
        Unloading i2c-dev... OK
        Unloading i2c-i801... OK
        Unloading cpuid... OK
        sean@sean-Studio-1737:~$ sudo service module-init-tools start
        module-init-tools stop/waiting

    I also tried installing i8k, but that didn't work since it didn't seem to be able to communicate with the hardware (probably intended for a different kind of device). Also I ran acpi -V and here are the results:

        Battery 0: Full, 100%
        Battery 0: design capacity 613 mAh, last full capacity 260 mAh = 42%
        Adapter 0: on-line
        Thermal 0: ok, 49.0 degrees C
        Thermal 0: trip point 0 switches to mode critical at temperature 100.0 degrees C
        Thermal 1: ok, 48.0 degrees C
        Thermal 1: trip point 0 switches to mode critical at temperature 100.0 degrees C
        Thermal 2: ok, 51.0 degrees C
        Thermal 2: trip point 0 switches to mode critical at temperature 100.0 degrees C
        Cooling 0: LCD 0 of 15
        Cooling 1: Processor 0 of 10
        Cooling 2: Processor 0 of 10

    I have hit a wall and don't know what to do now. Any advice is appreciated.
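
    For reference, capping the CPU from the terminal with the cpufrequtils package looks roughly like the sketch below; this mirrors what the CPUFreq indicator mentioned above does. The core number and frequency value are hardware-dependent placeholders, not values confirmed for this machine.

        # Illustrative only: limit CPU speed with cpufrequtils (sudo apt-get install cpufrequtils)
        sudo cpufreq-set -c 0 -g powersave     # switch core 0 to the powersave governor
        sudo cpufreq-set -c 0 -u 1.60GHz       # or cap core 0 at 1.6 GHz
        cpufreq-info -c 0                      # check the resulting policy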

    Read the article

  • Anyone know a way to find out the number of Twitter followers for a particular account on any given date?

    - by mejpark
    Hello. I manage two corporate Twitter accounts and two personal Twitter accounts, and it would be really useful to know how many followers each account has at the end of each week. I'm using TweetStats.com to gather statistics at the moment, but the follower stats functionality isn't granular enough to determine the precise number of followers. Does anyone know of any useful and free tools that can provide the exact number of followers for a specific Twitter account on any given date? Thank you, Mike.

    Read the article

  • What is the current "standard" for setting up a development environment that supports remote collaboration as well as secure version control?

    - by Andrew
    What is the current "standard" for setting up a development environment that supports remote collaboration as well as secure version control? We are considering a virtual dedicated solution with VMs for a web layer and a data layer, and a VPN for each programmer. We're a small start-up that does both Microsoft and open-source development. Is there a set of software tools or packages that is appropriate for a small shop and yet scalable? Thanks.

    Read the article

  • Game of Phones

    - by Carlos Chang
    There's an excellent DZone article titled 2014 Guide to Mobile Development. It's loaded with excellent information, including some results from a mobile-related survey of more than 1,000 IT professionals. Without giving away too much, these highlights should convince you to read the entire article. Web and hybrid apps are gaining tons of traction, particularly in the enterprise. If you want to better understand the differences between web, native, and hybrid, this article has you covered. Enterprise developers are increasingly interested in cross-platform tools. Makes sense, right? I mean, unless you have infinite resources (e.g. Facebook) and can afford to write native apps for every platform, finding something that can meet your needs for iOS and Android makes sense. And toss in the possibility of Windows Phone… and oh, just to be current, the addition of Apple's new mobile language, Swift, alongside Objective-C… and oh boy. Why not check out cross-platform tools? BTW, don't forget testing on each platform, and maintenance, and the next versions of the app. It's not one and done. If you're successful, you're never done. Various mobile vendors are represented and many provide some great information. Oracle's own Suhas Uliyar, VP of Mobile Strategy, is represented with some great insights into the challenges of mobile back-end integration (SOA, mBaaS, etc.) and moving from "mobile first" to a "mobile plus" world. BTW, Suhas was recently named one of the Top 100 Wireless Technology Experts for 2014 by Today's Wireless World magazine. And if you're not yet convinced, DZone did a very nice job with their mobile infographic, stylized after the insanely popular series Game of Thrones. Even though no dragons were illustrated, it's worth the price of admission just for that. Check it out here.

    Read the article

  • Friday Fun: Mummy Blaster

    - by Asian Angel
    In this week’s game your mission is to destroy all the mummies waiting for you in this long forgotten temple. All that you have is a limited amount of explosives and your strategic skills to win the day! Can you get the job done?

    Read the article

  • SMART Technologies E-Business Suite Release 12 Success

    Get smart about E-Business Suite Release 12 implementation with this customer success story. Hear Mike Battistel, VP Information Systems – SMART Technologies, discuss why they selected E-Business Suite Release 12, the implementation process, what benefits they have gained and lessons learned along the way. SMART Technologies is both the industry pioneer and market-segment leader in easy-to-use interactive whiteboards and other group collaboration tools.

    Read the article
