Search Results

Search found 14000 results on 560 pages for 'include guards'.


  • 2-D Codes in Retail

    - by David Dorf
    The UPC you find on packaging is a one-dimensional barcode that's been in use, in one form or another, since the 1970s. While it's a good symbology for encoding numbers like a product identifier, it's not really big enough to hold much more. It also requires a barcode scanner (like those connected to the POS), although iPhone apps like RedLaser have proved a mobile camera can be made to work in many situations. The next generation of barcodes is two-dimensional and therefore capable of holding much more information, as well as being more conducive to cameras. The most popular format is the QR Code, widely used in Japan because almost every mobile phone there has a built-in reader. A typical use for QR Codes is to embed a URL so that a mobile phone can quickly navigate to the specified web page. QR Codes can be found on posters, billboards, catalogs, and circulars. Speaking of which, Best Buy recently put a QR Code in their circular as shown below. In fact, they even updated their iPhone application to include a QR Code reader. I was able to scan the barcode above right from the screen with my iPhone without issues, even though it's fairly small in this image. Clearly they are planning to incorporate more QR Codes in their stores and advertising. If you haven't seen QR Codes before, you're not looking hard enough. They are around and will continue to spread.
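    As an aside, generating a QR Code that embeds a URL takes only a couple of lines these days. Here's a minimal sketch in Python, assuming the third-party qrcode package (pip install qrcode[pil]); the URL is a placeholder:

        # Minimal sketch: encode a placeholder URL into a QR Code image.
        # Assumes the third-party "qrcode" package with PIL support.
        import qrcode

        img = qrcode.make("http://www.example.com/circular")
        img.save("circular-qr.png")  # scan it with any QR reader app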

    Read the article

  • Screen problems on 11.10 using VGA compatible controller 2nd Generation Core Processor Family Integrated Graphics Controller

    - by MorrisseyJ
    I am having problems with my display. The problem manifests as lots of screen artefacts, which seem to be worse in Unity than in GNOME 3, are worse after I have used suspend, and are intolerable if I set myself up on a dual monitor. Specific issues include: icons disappearing, lines occurring all over the screen, the backgrounds of certain windows turning another colour, and window borders disappearing or being filled with text from other parts of the screen. The most annoying problem is lines of text disappearing from a host of word-processing programmes (LibreOffice, gedit, Bluefish, etc.) as I type. In most circumstances the screen problem can be temporarily fixed (so that I can see the screen clearly) either by scrolling the text off the screen and then back on, or by highlighting the offending area of the desktop by clicking and dragging. Errors on parts of the screen that don't seem to redraw (window borders off the universal menu, or the screen area outside of a LibreOffice document in print layout view, for example) can't be fixed within a session. I am running 11.10, 64-bit, on my ThinkPad x121e. Display information is:

        description: VGA compatible controller
        product: 2nd Generation Core Processor Family Integrated Graphics Controller
        vendor: Intel Corporation
        physical id: 2
        bus info: pci@0000:00:02.0
        version: 09
        width: 64 bits
        clock: 33MHz
        capabilities: msi pm vga_controller bus_master cap_list rom
        configuration: driver=i915 latency=0
        resources: irq:42 memory:d0000000-d03fffff memory:c0000000-cfffffff ioport:4000(size=64)

    There appear to be a few problems with Intel graphics and Ubuntu, but I am not sure if they are all the same. If anyone knows whether this is a known bug it'd be great to hear; otherwise I'll file a report. Should anyone know of a fix I would greatly appreciate hearing about it. Let me know if you need any more information. Thanks

    Read the article

  • Mobile and Social for Retail

    - by David Dorf
    I've got two speaking gigs in the next few weeks, so I thought I'd preview both here. First I'll be at eTail West on February 24th to talk about mobile. I'll be previewing a new study of how shoppers are using mobile phones. Here's a sneak peek at one of the slides: It should be no surprise that as more consumers adopt smartphones, more are finding ways to use them to help with shopping. Sometimes that's to find a store, download a coupon, or do price comparisons. I'll also be discussing the NRF Mobile Blueprint, and will walk through an example of mobile impacting the in-store experience. Retailers need to look upon mobile as the method of bringing the digital assets of e-commerce into the aisles to enhance shopping. On March 9th I'll be at NRF Innovate co-presenting with Jon Kubo of Wet Seal on social strategies. Jon is a retail innovation rock-star and I always learn something new from every conversation with him. Below is another slide preview: I cheated a little on the top 10 most popular retailer pages by not including Victoria's Secret Pink. VS is already represented, so I didn't include them a second time. The most interesting statistic I found was that the average user spends 55 minutes on Facebook a day. Wow! I also decided to use the old "Like" and "Fan" icons just because I like them better (pun intended). Wet Seal has been collecting interesting statistics on liked products, so I hope Jon will share lots (I'm on a roll). Hope to see you at both events.

    Read the article

  • Full Portfolio of x86 Systems On Display at Oracle OpenWorld

    - by kgee
    This OpenWorld, Oracle's x86 hardware team will have two hardware demos, showcasing the new X3 systems as well as several other x86 solutions such as the ZFS Storage Appliance, Oracle Database Appliance and the carrier-grade Netra systems. These two demos are located in the South Hall, in Oracle's booth 1133 and Intel's booth 1101. The Intel booth will feature additional demos, including 3D demos of each server, a static architectural demo, the Oracle x86 Grand Prix video game and the Intel Theatre featuring several presentations by Intel's partners. Oracle's Intel Theatre schedule and topics include:

    Monday
    1. 10:30 a.m. - Engineered to Work Together: Oracle x86 Systems in the Data Center
    2. 12:30 p.m. - The Oracle NoSQL Database on the Intel Platform
    3. 1:30 p.m. - Accelerate Your Path to Cloud with Oracle VM
    4. 3:30 p.m. - Why Oracle Linux is the Best Linux for Your Intel-Based Systems
    5. 4:30 p.m. - Accelerate Your Path to Cloud with Oracle VM

    Tuesday
    1. 10:00 a.m. - "Speed of Thought" Analytics Using In-Memory Analytics
    2. 1:30 p.m. - A Storage Architecture for Big Data: "It's Not JUST Hadoop"
    3. 2:00 p.m. - Oracle Optimized Solution for Enterprise Cloud Infrastructure
    4. 2:30 p.m. - Configuring Storage to Optimize Database Performance and Efficiency
    5. 3:30 p.m. - Total Cloud Control for Oracle's x86 Systems

    Wednesday
    1. 10:00 a.m. - Big Data Analysis Using the R Programming Language
    2. 11:30 a.m. - Extreme Performance Overview: The Oracle Exadata Database Machine
    3. 1:30 p.m. - Oracle TimesTen In-Memory Database Overview

    Read the article

  • Grow Your Oracle Exadata and Manageability Business: Engage With Us to Find Out How

    - by swalker
    Don't miss out on the first EMEA Partner Community Cast! If you are a business decision maker, project leader, technical leader or business development manager you will gain incredible value from these events, and we believe that this introduction to Oracle Partner Communities will bring you a wealth of new opportunities. Join us on December 7th, 10:00 GMT (11:00 CET) for the first broadcast, covering the Exadata and Manageability solution areas. In just 30 minutes, you will find out more about Oracle's Exadata, Manageability and Oracle Enterprise Manager 12c solutions, and the value they can generate for you and your customers. See the full agenda here. Hosted by Paul Thompson, Senior Director, Alliances and Solutions Partner Programs, Oracle EMEA, and Javier Puerta, Director, Core Technology Partner Programs, Oracle EMEA, our special guests include:

    Steve McNickle, Vice President Europe, cVidya
    Dave Sanderson, Associate Partner, Technology Reply
    Patrick Rood, Lead for Indirect Manageability Business, Oracle EMEA

    Register Now. Partner Community Casts are a new series of interactive broadcasts designed to help you truly engage with Oracle on an individual level, build expertise around your specialist solution area and make valuable new contacts within Oracle and among other Oracle partners. Community Casts can be viewed live from our online platform. Audience members have the opportunity to submit questions during the show via chat or social media outlets, many of which are answered on-air. Learn more about EMEA Partner Community Casts. Register now to learn how participation in the Exadata and Manageability Partner Communities will help your business flourish!

    Read the article

  • How can I fix puppet refusing to start and asking for "master.pp"?

    - by cwd
    I'm using the very latest version of Puppet and have been following the Apress "Pro Puppet" guide step by step. I have installed Puppet:

        sudo aptitude install ruby libshadow-ruby1.8
        sudo aptitude install puppet puppetmaster facter

    I have edited /etc/puppet/puppet.conf to include the certname:

        [master]
        certname=puppet.mydomain.com

    I have edited /etc/hosts and added the following line:

        127.0.0.1 puppet.mydomain.com puppet

    I have set the hostname of the server:

        echo "puppet.mydomain.com" > /etc/hostname
        hostname -F /etc/hostname

    And then I try to run Puppet from the command line:

        puppet master --verbose --no-daemonize

    And Puppet gives me this error:

        Could not parse for environment production: Could not find file /master.pp

    I'm running all commands with sudo, and the last line of the error message always says that it can't find master.pp, with the path before it pointing to my current working directory. What am I doing wrong? I should also mention that I don't have a DNS record set up for puppet.mydomain.com - I saw some online documentation mentioning this might be a problem - however I was fairly sure that the hosts file would let me get around that.

    Read the article

  • Resolve Instructional Webcast Series

    - by Get Proactive Customer Adoption Team
    Catch the Express - Register for an Instructional Webcast. Oracle Proactive Support's 'Get Proactive' message to customers underscores the benefits they'll obtain by leveraging the Prevent, Resolve and Upgrade capabilities available across the suite of Oracle products. Our goal in Proactive Support is to show customers how to 'Get Proactive' and achieve success by leveraging the latest tools, knowledge, and best practices available to manage their applications and technology more proactively. Most importantly, we want to ensure that customers are proficient in the use of these proactive capabilities. To help you gain this proficiency, we've recently launched a series of instructional webcasts that we call the "Resolve Series." This series consists of both live and on-demand webcasts, and features some of the key proactive capabilities that customers can leverage to resolve their own problems. We launched the first phase of the series in July, focused on finding answers using the My Oracle Support portal. Among the topics covered in those sessions were best practices for searching the knowledge base, leveraging communities to find answers faster, and other proactive features of My Oracle Support. The second phase of the series is set to kick off in September. This phase will include product-specific sessions designed to provide customers who use the product with the skills and knowledge required to leverage some of the most important capabilities found under the "RESOLVE" category of our proactive portfolio on My Oracle Support. These webcasts will feature Subject Matter Experts demonstrating how to use the tools and capabilities, discussing best practices, and providing answers to any questions you might have. In addition, hands-on labs will be included in some of the sessions, allowing you to practice applying what you've just learned. Whether you are a new customer or you've worked with Oracle Support for years, you'll discover new information and techniques to help you work more efficiently and keep your systems running smoothly. Leverage this opportunity to learn best practices and get the inside track on finding answers fast by using the right tools at the right time. Make sure to take advantage of these webcasts and maximize the value you receive from your Oracle Premier Support investment. See the full schedule of events and register for sessions.

    Read the article

  • What's the relationship between meta-circular interpreters, virtual machines and increased performance?

    - by Gomi
    I've read about meta-circular interpreters on the web (including in SICP) and I've looked into the code of some implementations (such as PyPy and Narcissus). I've read quite a bit about two languages which made great use of metacircular evaluation, Lisp and Smalltalk. As far as I understood, Lisp was the first self-hosting compiler and Smalltalk had the first "true" JIT implementation. One thing I've not fully understood is how those interpreters/compilers can achieve such good performance or, in other words, why is PyPy faster than CPython? Is it because of reflection? And also, my Smalltalk research led me to believe that there's a relationship between JIT, virtual machines and reflection. Virtual machines such as the JVM and CLR allow a great deal of type introspection, and I believe they make great use of it in Just-in-Time (and AOT, I suppose?) compilation. But as far as I know, virtual machines are kind of like CPUs, in that they have a basic instruction set. Are virtual machines efficient because they include type and reference information, which would allow language-agnostic reflection? I ask this because many languages, both interpreted and compiled, are now using bytecode as a target (LLVM, Parrot, YARV, CPython), and traditional VMs like the JVM and CLR have gained incredible boosts in performance. I've been told that it's about JIT, but as far as I know JIT is nothing new, since Smalltalk and Sun's own Self were doing it before Java. I don't remember VMs performing particularly well in the past; there weren't many non-academic ones outside of the JVM and .NET, and their performance was definitely not as good as it is now (I wish I could source this claim but I speak from personal experience). Then all of a sudden, in the late 2000s, something changed and a lot of VMs started to pop up even for established languages, and with very good performance. Was something discovered about JIT implementation that allowed pretty much every modern VM to skyrocket in performance? A paper or a book, maybe?
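    For readers new to the term, here is a toy evaluator in the spirit of SICP's meta-circular evaluator, sketched in Python rather than Lisp (illustrative only - since the host language differs from the evaluated one, it is not literally meta-circular, and the expression format is invented for the sketch):

        import operator

        def evaluate(expr, env):
            if isinstance(expr, (int, float)):   # self-evaluating values
                return expr
            if isinstance(expr, str):            # variable lookup
                return env[expr]
            op, *args = expr                     # ('+', 'x', 1) style tuples
            if op == 'if':                       # special form: evaluate lazily
                test, then, alt = args
                return evaluate(then if evaluate(test, env) else alt, env)
            fn = env[op]                         # primitive application
            return fn(*[evaluate(a, env) for a in args])

        env = {'+': operator.add, '*': operator.mul, '<': operator.lt, 'x': 10}
        print(evaluate(('if', ('<', 'x', 20), ('+', 'x', 1), 0), env))  # -> 11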

    Read the article

  • The Exceptional EXCEPT clause

    - by steveh99999
    OK, I exaggerate, but it can be useful… I came across some ‘poorly-written’ stored procedures on a SQL Server recently that were using sp_xml_preparedocument. Unfortunately these procs were not properly releasing the memory allocated to XML structures - i.e. they were not subsequently calling sp_xml_removedocument… I needed a quick way of identifying how many stored procedures on the server this affected. Here's what I used:

        EXEC sp_msforeachdb 'USE ?
        SELECT DB_NAME(), OBJECT_NAME(s1.id)
        FROM syscomments s1
        WHERE [text] LIKE ''%sp_xml_preparedocument%''
        EXCEPT
        SELECT DB_NAME(), OBJECT_NAME(s2.id)
        FROM syscomments s2
        WHERE [text] LIKE ''%sp_xml_removedocument%'''

    There are three nice features of the code above:
    1. It uses sp_msforeachdb. There's a nice blog on this here.
    2. It uses the EXCEPT clause. So in the above query I get all the procedures which include the sp_xml_preparedocument string, but by using the EXCEPT clause I remove all the procedures which contain sp_xml_removedocument. Read more about EXCEPT here.
    3. It can be used to quickly identify incorrect usage of sp_xml_preparedocument. Read more about this here.

    The above query isn't perfect - I'm not properly parsing the SQL text to ignore comments, for example - but for the quick analysis I needed to perform, it was just the job…

    Read the article

  • Oracle Endeca Information Discovery 3.1 is Now Available

    - by p.anda
    Oracle Endeca Information Discovery (OEID) 3.1 is a major release that incorporates significant new self-service discovery capabilities for business users. These include agile data mashup, extended support for unstructured analytics, and an even tighter integration with Oracle BI. This release is available for download from:

    Oracle Delivery Cloud
    Oracle Technology Network

    Some of the what's-new highlights:

    Self-service data mashup... enables access to a wider variety of personal and trusted enterprise data sources. Blend multiple data sets in a single app.
    Agile discovery dashboards... allow users to easily create, configure, and securely share discovery dashboards with intelligent defaults, intuitive wizards and drag-and-drop configuration.
    Deeper unstructured analysis... enables users to enrich text using term extraction and whitelist tagging while the data is live.
    Enhanced integration with OBI... provides easier wizards for data selection and enables the OBI Server as a self-service data source.
    Enterprise-class data discovery... offers faster performance, a trusted data connection library, improved auditing and increased data connectivity for Hadoop, web content and Oracle Data Integrator.

    Find out more... visit the OEID Overview page to download the What's New and related Data Sheet PDF documents. Have questions or want to share details about Oracle Endeca Information Discovery? The MOS Communities is a great first stop, and you can stop by the MOS OEID Community.

    Read the article

  • How do I install kivy?

    - by aspasia
    I was trying to install Kivy (by following the instructions here). I downloaded and installed all the packages, and the installation process went through without giving me any errors. However, when I later entered the command below:

        sudo easy_install kivy

    it looked like it was going to work, but it ended with an error, displaying the following lines, which I don't comprehend:

        Detected compiler is unix
        /tmp/easy_install-BtOA_u/Kivy-1.8.0/kivy/graphics/texture.c:8:22: fatal error: pyconfig.h: No such file or directory
        #include "pyconfig.h"
        ^
        compilation terminated.
        error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

    I saw a similar question: Problem with kivy installation. However, it didn't work for me, though that question suggests installing libgles-mesa-dev-lts-raring, which I did as below:

        sudo apt-get install libgles-mesa-dev-lts-raring

    which then gave:

        E: Unable to locate package libgles-mesa-dev-lts-raring

    (Sorry for being so specific and perhaps obvious, but I'm in the early stages of learning my way around Linux.) That user was running Ubuntu 12.04, and most other questions related to this that I've seen came from people with a different release from mine, which has led me to believe that that is the reason why the suggestions to those didn't solve my problem. I'm using Ubuntu 13.10.

    Read the article

  • Daily Blog Archives and Duplicate Content

    - by nemmy
    A few weeks back I realised that my blog software was creating daily post archives, which basically resulted in duplicate content, especially if I only had one post a day. The situation is something like this:

        www.sitename.com/blog/archives/2013/06/01 - daily archive for 1 June 2013
        www.sitename.com/blog/archives/2013/06/my-post-name.html

    So, here we have two pages that are basically identical, except the daily archive has some meaningless title like "Daily Archive for 1 June 2013". And I have no control over which content Google decides is the primary content. It's quite possible (and likely) that the daily archive could be the "primary" content and the actual post itself the "duplicate". Once I realised it was doing this I modified the daily archive template to include:

        <meta name="robots" content="noindex">

    Here we are a few weeks later and I still see some daily archives coming up in Google search results. I realise some of those deep pages might not have been crawled yet, but I am worried that the original posts (which should be the PRIMARY content) have been marked as duplicate content by Google. Now that I've noindexed the daily archives, I might end up with no indexed content AND the original articles still flagged as duplicates. And nothing will show up in search at all. Have I screwed myself here, or is there a way out?

    Read the article

  • Is the development of CLI apps considered "backward"?

    - by user61852
    I am a fledgling DBA with a lot of experience in programming. I have developed several CLI, non-interactive apps that solve some daily repetitive tasks or eliminate human error from more complex, albeit not so daily, tasks. These tools are now part of our toolbox. I find CLI apps are great because you can include them in an automated workflow. Also, the Unix philosophy of doing a single thing but doing it well, and letting the output of one process be the input of another, is a great way of building a set of tools that would consolidate into a strategic advantage. My boss recently commented that developing CLI tools is "backward", or constitutes a "regression". I told him I disagreed, because most CLI tools that exist now are not legacy but are live projects with improved versions being released all the time. Is this kind of development considered "backwards" in the market? Does it look bad on a résumé? I also consider that all solutions, whether web or desktop, should have command-line, non-interactive options. Some people consider this a waste of programming resources. Is this a worthy goal in a software project?
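    To illustrate, the sketch below (Python, purely illustrative) shows the whole shape of such a tool - read stdin, do one small job, write stdout - so it composes in a pipeline such as: cat names.txt | ./upper.py | sort

        #!/usr/bin/env python3
        # Illustrative sketch of a pipeline-friendly CLI filter:
        # one small job, stdin in, stdout out, no interactivity.
        import sys

        for line in sys.stdin:
            sys.stdout.write(line.upper())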

    Read the article

  • What are the web development process efficiency judgment criteria?

    - by Ahmed safan
    I'm working as a web developer and I want to be able to determine whether I'm efficient. Does this include how long it takes to accomplish tasks such as:

    Server-side code for the site logic, in one language or several (PHP, ASP, ASP.NET)
    Client-side code like JavaScript with jQuery for AJAX, menus and other interactivity
    Page layout, HTML, CSS (color, fonts (but I have no artistic sense!))
    The needs of the site and how it will work (planning)

    How can I judge how long it will take to complete a website? The site has a CMS for adding and editing news, products, and articles on the experience of the company. Also, they can edit teamwork, add recreational activities and a logo gallery with compressed PSD download, and send messages to cpanel and to email. You are starting from scratch except for jQuery and PHPMailer. How can I estimate how long the job will take, and how can I calculate the required time to finish any new project? I'm sorry for the many scattered questions, but this is my first experience and I want to benefit from the great experience of those who have it.

    Read the article

  • Great Example of a Simple Cost-Benefit Analysis

    - by BuckWoody
    I saw a post the other day that you should definitely go check out. It's a cost/benefit decision, and although the author gives it a quick treatment and doesn't take all points in the decision into account, you should focus on the process he follows. It's a quick and simple example of the kind of thought process we should have as data professionals when we pick a server, a process, an application, or even platform software. The key is to include more than just the price of a piece of software or hardware. You need to think about the "other" costs in the decision, and then make the right one. Sometimes the cheapest option is the cheapest, and other times, well, it isn't. I've seen this played out not only in the decision to go with a certain selection, but in the options or editions it comes in. You have to put all of the decision points into the analysis to come up with the right answer, and you have to be able to explain your logic to your team and your company. This is the way you become a data professional, not just a DBA. You can check out the post here - it deals with Azure, but the point is the process, not Azure itself: http://blogs.msdn.com/eugeniop/archive/2010/03/19/windows-azure-guidance-a-simplistic-economic-analysis-of-a-expense-migration.aspx
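    If you want a feel for the arithmetic, here's a sketch of comparing total cost rather than sticker price (all figures below are hypothetical):

        # Hypothetical cost/benefit sketch: the "free" option can lose once
        # operations and labor are included. All numbers are made up.
        def total_cost(license_fee, monthly_ops, admin_hours_per_month,
                       hourly_rate, months):
            ops = monthly_ops * months
            labor = admin_hours_per_month * hourly_rate * months
            return license_fee + ops + labor

        cheap = total_cost(0, 400, 30, 75, 36)      # no license, labor-heavy
        paid = total_cost(9000, 250, 10, 75, 36)    # license fee, less labor
        print(cheap, paid)                          # -> 95400 45000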

    Read the article

  • Sorting a Grid of Data in ASP.NET MVC

    Last week's article, Displaying a Grid of Data in ASP.NET MVC, showed, step by step, how to display a grid of data in an ASP.NET MVC application. It started with creating a new ASP.NET MVC application in Visual Studio, then added the Northwind database to the project and showed how to use Microsoft's Linq-to-SQL tool to access data from the database. The article then looked at creating a Controller and View for displaying a list of product information (the Model). This article builds on the demo application created in Displaying a Grid of Data in ASP.NET MVC, enhancing the grid to include bi-directional sorting. If you come from an ASP.NET WebForms background, you know that the GridView control makes implementing sorting as easy as ticking a checkbox. Unfortunately, implementing sorting in ASP.NET MVC involves a bit more work than simply checking a checkbox, but the quantity of work isn't significantly greater, and with ASP.NET MVC we have more control over the grid and sorting interface's layout and markup, as well as the mechanism through which sorting is implemented. With the GridView control, sorting is handled through form postbacks, with the sorting parameters - what column to sort by and whether to sort in ascending or descending order - being submitted as hidden form fields. In this article we'll use querystring parameters to indicate the sorting parameters, which means a particular sort order can be indexed by search engines, bookmarked, emailed to a colleague, and so on - things that are not possible with the GridView's built-in sorting capabilities. Like its predecessor, this article offers step-by-step instructions and includes a complete, working demo available for download at the end of the article. Read on to learn more!
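    The idea transfers to any stack; here is a language-agnostic sketch in Python (not the article's C# code - the field and parameter names are invented) of turning querystring values into a sorted, bookmarkable view:

        # Sketch: derive a sort column and direction from querystring
        # parameters, so a given sort order lives entirely in the URL.
        from urllib.parse import parse_qs

        def apply_sort(products, querystring):
            params = parse_qs(querystring)
            sort_by = params.get("sortBy", ["ProductName"])[0]
            descending = params.get("sortDir", ["asc"])[0] == "desc"
            return sorted(products, key=lambda p: p[sort_by], reverse=descending)

        products = [{"ProductName": "Chai", "UnitPrice": 18.0},
                    {"ProductName": "Aniseed Syrup", "UnitPrice": 10.0}]
        print(apply_sort(products, "sortBy=UnitPrice&sortDir=desc"))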

    Read the article

  • Open source license with backlink requirement

    - by KajMagnus
    I'm developing a JavaScript library, and I'm thinking about releasing it under an open source license (e.g. GPL, BSD, MIT) - but one that requires that websites using the software link back to my website. Do you know of any such licenses? And how have they formulated the attribution part of the license text? Do you think this BSD-style clause would do what I want? (I suppose it doesn't :-))

    [...] 3. Each website that redistributes this work must include a visible rel=follow link to my-website.example.com, reachable via rel=follow links from each page where the software is being redistributed. (For example, you could have a link back to your homepage, and from your homepage to an About-Us section, which could link to a Credits section.)

    I realize that some companies wouldn't want to use the library because of legal issues with interpreting non-standard licenses (have a look at this answer: http://programmers.stackexchange.com/a/156859/54906). - After half a year, or perhaps some years, I'd change the license to plain GPL + MIT.

    Read the article

  • Cannot add service account to domain group during SQL cluster install

    - by Sam
    I'm installing a 2008 instance on a 2003 machine which is already running 2005. I need to set up domain groups for the security setup step (http://msdn.microsoft.com/en-us/library/ms179530.aspx):

    "On Windows Server 2003, specify domain groups for SQL Server services. All resource permissions are controlled by domain-level groups that include SQL Server service accounts as group members."

    Much more info on this here: http://support.microsoft.com/kb/910708

    I've had problems with being able to add the Windows service accounts to the groups at install time. The security admins had to make my account a domain admin - which they were hesitant to do - since the account under which SQL Server Setup is running must have permissions to add accounts to the domain groups. Is there a specific security setting which would allow my account to add accounts to a group?

    Read the article

  • Docbook: Centralized glossary, where each document includes only terms which appear in it?

    - by DanM
    Trying to figure out if this (or something similar) is possible. I'm working with a collection of technical documents, all written in DocBook. The documents each contain many acronyms, technical terms and other jargon, so we need to include a glossary with each of them. The ideal situation would be this: I have a central glossary.xml file which contains a glossentry item (or similar) for each such term; then, each of the documents uses that glossary file, but only prints out the terms which appear IN that document. So, each document has its own glossary printed at the end, but the actual glossary entries are stored centrally. Is that doable?

    Read the article

  • Which .NET REST approach/technology/tool should I use?

    - by SonOfPirate
    I am implementing a RESTful web service and several client applications that are mostly in Silverlight. I am finding a litany of options for developing both the server side and client side of the API, but am not sure which is the best approach. I'm concerned about stability, as well as choosing a platform that will still exist a few months from now. We started using the REST Starter Kit with .NET 3.5 but moved to the new WCF Web API when updating to .NET 4.0. All of their documentation indicates that WCF Web API is the replacement for the RSK. However, Web API is only in Preview 4 and does not include support for Silverlight or Windows Phone 7 clients (yet). WCF Web API looks like a wrapper on top of the WCF WebHttp Services stuff provided in the System.ServiceModel.Web library, which makes me think that maybe it would be simpler to just go with the built-in stuff, but Web API does offer some nice features. I am specifically tied up trying to determine the best course for the client side. My main requirement is that I need to support deserializing into my client-side objects quickly and easily. The Web API offers a nice client library but doesn't have a Silverlight version. I'd like to use the latest approach and the toolset that is being actively developed and supported. Is the REST Starter Kit really obsolete? Has anyone had any success implementing the WCF Web API toolkit? Is there merit to using either of these over the built-in WCF WebHttp Services features found in System.ServiceModel.Web? Is there a single solution that works for any client (web, Silverlight, etc.)? What suggestions do you have?

    Read the article

  • Can't find nfsbooted for Kerrighed PXE boot with Ubuntu Lucid Server

    - by Pengin
    I'm following installation guides for PXE booting and Kerrighed. I can't find the package nfsbooted for Ubuntu 10.04. Where did it go? Context: At work I have access to 8 mini-ITX PCs and am trying to build a cluster. My plans include trying Condor, GridGain, Hadoop, and recently Kerrighed has caught my eye. (I realise these are all for different kinds of things; I'm just evaluating.) Ideally, I'd like to have all the nodes network-boot from a single server, since that seems so much easier to manage, plus I can 'borrow' additional PCs for a while without touching their HDs. I've been getting on great with Ubuntu Lucid Server (10.04), trying to follow the only guides I can find to get PXE booting (and ultimately Kerrighed) to work. This guide is for Ubuntu 8.04 and this one is for Debian. They both refer to a package I can't seem to find, nfsbooted. Has this package been replaced? Am I doing something daft?

    Read the article

  • Nginx no longer serves uwsgi application behind HAProxy - looks for static file instead

    - by Ralph
    We implemented our web application using web2py. It consists of several modules offering a REST API at various resources (e.g. /dids, /replicas, ...). The API is used by clients implementing requests.py. My problem is that our web app works fine if it's behind HAProxy and hosted by Apache using mod_wsgi. It also works fine if the clients interact with nginx directly. It doesn't work, though, when using HAProxy in front of nginx. My guess is that HAProxy somehow modifies the request and thus nginx behaves differently, i.e. looking for a static file instead of calling the WSGI container. Unfortunately I can't figure out what exactly is going (wr)on(g). Here are the relevant config sections of these three components' config files. At least I guess they are interesting. If you miss anything, please let me know.

    1) haproxy.conf

        frontend app-lb
            bind loadbalancer:443 ssl crt /etc/grid-security/hostcertkey.pem
            default_backend nginx-servers
            mode http

        backend nginx-servers
            balance leastconn
            option forwardfor
            server nginx-01 nginx-server-int-01.domain.com:80 check

    2) nginx.conf:

        sendfile off;
        #tcp_nopush on;
        keepalive_timeout 65;
        include /etc/nginx/conf.d/*.conf;

        server {
            server_name nginx-server-int-01.domain.com;
            root /path/to/app/;
            location / {
                uwsgi_pass unix:///tmp/app.sock;
                include uwsgi_params;
                uwsgi_read_timeout 600; # Requests can run for a seriously long time
            }
        }

    3) uwsgi.ini

        [uwsgi]
        chdir = /path/to/app/
        chmod-socket = 777
        no-default-app = True
        socket = /tmp/app.sock
        manage-script-name = True
        mount = /dids=did.py
        mount = /replicas=replica.py
        callable = application

    Now when I let my clients go against nginx-server-int-01.domain.com everything is fine. In the access.log of nginx, lines like these appear:

        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/user.ogueta/cnt_mc12_8TeV.16304.stream_name_too_long.other.notype.004202218365415e990b9997ea859f20.user/dids HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 5282 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 5094 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 528 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "GET /dids/mc13_14TeV/dids/search?project=mc13_14TeV&stream_name=%2Adummy&type=dataset&datatype=NTUP_SMDYMUMU HTTP/1.1" 401 73 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "POST /replicas/list HTTP/1.1" 200 713 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"

    But when I switch the clients to go against HAProxy (loadbalancer.domain.com:443), the error.log of nginx shows lines like these:

        2014/08/23 01:26:01 [error] 1705#0: *21231 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21232 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21233 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21234 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21235 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer"
        2014/08/23 01:26:02 [error] 1705#0: *21238 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21239 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21242 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21244 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"

    As you can see, the request looks the same; only the client IP changed, from the client's host to that of loadbalancer.domain.com. But for whatever reason nginx seems to assume that it is a static file to be served, which eventually results in the file-not-found message. I searched the web for multiple hours already, but without much luck so far. Any help is very much appreciated. Cheers, Ralph

    Read the article

  • How should I deal with user agent parsing in logs?

    - by Mr. Jefferson
    My web app project includes logging functionality so we can see where visitors are coming from (referrer URL), what the popular user agents are, what pages are most popular, etc. The log is stored in SQL Server, and when I query the user agents I use a large (almost 100 lines) and growing CASE statement to separate the user agents using string matching (i.e. if the user agent contains the string "Firefox/9" then it's Firefox 9). Is there a better way to do this, so I don't have to continually add to that CASE statement to deal with new browser releases? Also, how should I deal with less common, weird/unknown user agents? I've seen the following in the logs and been unable to find good information online about what they are:

        WordPress/3.3.1; http://www.facecolony.org
        Mozilla/4.0 ( http://www.hairirons.org redips; <a href=http://hairirons.org/>chi hair iron</a>)

    I'd guess they're bots/crawlers, but the sites they point to don't appear to reference web crawlers (or even be available sometimes). I've seen other user agents I'm not familiar with, but I know they're bots because they include "bot" or "spider" or something similar in them.
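    One route worth considering in place of the hand-rolled CASE statement is a maintained parser. Here is a sketch in Python, assuming the third-party ua-parser package (pip install ua-parser) is available:

        # Sketch: parse user agents with a maintained rules database
        # instead of a growing CASE statement.
        from ua_parser import user_agent_parser

        ua = "Mozilla/5.0 (Windows NT 6.1; rv:9.0) Gecko/20100101 Firefox/9.0"
        parsed = user_agent_parser.Parse(ua)
        print(parsed["user_agent"]["family"],   # e.g. "Firefox"
              parsed["user_agent"]["major"])    # e.g. "9"

        # Unknown agents come back with family "Other"; a substring check
        # still helps flag self-identified crawlers.
        is_bot = "bot" in ua.lower() or "spider" in ua.lower()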

    Read the article

  • SharePoint 2010 PowerShell Script to Find All SPShellAdmins with Database Name

    - by Brian Jackett
    Problem

    Yesterday on Twitter my friend @cacallahan asked for some help on how she could get all SharePoint 2010 SPShellAdmin users and the associated database name. I spent a few minutes and wrote up a script that gets this information, and decided I'd post it here for others to enjoy.

    Background

    The Get-SPShellAdmin commandlet returns a listing of SPShellAdmins for the given database Id you pass in, or the farm configuration database by default. For those unfamiliar, SPShellAdmin access is necessary for non-admin users to run PowerShell commands against a SharePoint 2010 farm (content and configuration databases specifically). Click here to read an excellent guest post article my friend John Ferringer (twitter) wrote on the Hey Scripting Guy! blog regarding granting SPShellAdmin access.

    Solution

    Below is the script I wrote (formatted for space and to include comments) to provide the information needed. Click here to download the script.

        # declare a hashtable to store results
        $results = @{}

        # fetch databases (only configuration and content DBs are needed)
        $databasesToQuery = Get-SPDatabase | Where {$_.Type -eq 'Configuration Database' -or $_.Type -eq 'Content Database'}

        # for each database get spshelladmins and add db name and username to result
        $databasesToQuery | ForEach-Object {$dbName = $_.Name; Get-SPShellAdmin -database $_.id | ForEach-Object {$results.Add($dbName, $_.username)}}

        # sort results by db name and pipe to table with auto sizing of col width
        $results.GetEnumerator() | Sort-Object -Property Name | ft -AutoSize

    Conclusion

    In this post I provided a script that outputs all of the SPShellAdmin users and the associated database names in a SharePoint 2010 farm. Funnily enough, it actually took me longer to boot up my dev VM and PowerShell (~3 mins) than it did to write the first working draft of the script (~2 mins). Feel free to use this script and modify as needed; just be sure to give credit back to the original author. Let me know if you have any questions or comments. Enjoy!

    -Frog Out

    Links

    PowerShell Hashtables
    http://technet.microsoft.com/en-us/library/ee692803.aspx

    SPShellAdmin Access Explained
    http://blogs.technet.com/b/heyscriptingguy/archive/2010/07/06/hey-scripting-guy-tell-me-about-permissions-for-using-windows-powershell-2-0-cmdlets-with-sharepoint-2010.aspx

    Read the article

  • Solaris 11 Customer Maintenance Lifecycle

    - by user12244672
    Hi Folks, Welcome to my new blog, http://blogs.oracle.com/Solaris11Life, which is all about the Customer Maintenance Lifecycle for Image Packaging System (IPS) based Solaris releases, such as Solaris 11. It'll include policies, best practices, clarifications, and lots of other stuff which I hope you'll find useful as you get up to speed with Solaris 11 and IPS. Let's start with a version of my Solaris 11 Customer Maintenance Lifecycle presentation, which I gave at this year's Oracle OpenWorld and at the recent Deutsche Oracle Anwendergruppe (DOAG - German Oracle Users Group) conference in Nürnberg. Some of you may be familiar with my Patch Corner blog, http://blogs.oracle.com/patch, which fulfilled a similar purpose for System V [five] Release 4 (SVR4) based Solaris releases, such as Solaris 10 and below. Since maintaining a Solaris 11 system is quite different from maintaining a Solaris 10 system, I thought it prudent to start this second, parallel blog for Solaris 11. Actually, I have an ulterior motive for starting this separate blog. Since IPS is a single-tier packaging architecture, it doesn't have any patches, only package updates. I've therefore banned the word "patch" in Solaris 11 and introduced a swear box to which my colleagues must contribute a quarter [$0.25] every time they use the word "patch" in a public forum. From their Oracle OpenWorld presentations, John Fowler owes 50 cents, Liane Preza owes $1.25, and Bart Smaalders owes 75 cents. Since I'm stinging my colleagues in what could be a lucrative enterprise, I couldn't very well discuss IPS best practices on a blog called "Patch Corner" with a URI of http://blogs.oracle.com/patch. I simply couldn't afford all those contributions to the "patch" swear box. :) Feel free to let me know what topics you'd like covered - just post a comment in the comment box on the blog. Best Wishes, Gerry.

    Read the article
