Search Results

Search found 59133 results on 2366 pages for 'data education'.

  • nagios3 Error: Could not read object configuration data!

    - by user1493730
    I have a brand-new install of nagios3 on Ubuntu 12.04. After I log in to the web interface and click any link, I get the error:

        Error: Could not read object configuration data!
        Here are some things you should check in order to resolve this error:
        - Verify configuration options using the -v command-line option to check for errors.
        - Check the Nagios log file for messages relating to startup or status data errors.

    I ran it with the -v option and it reported no errors:

        Total Warnings: 0
        Total Errors: 0
        Things look okay - No serious problems were detected during the pre-flight check

    The Nagios log, the Apache error log, and the debug log all have nothing regarding this. Does anyone know how to turn on logging that will give me some kind of useful error? Or, if anyone knows how to fix this specific problem without additional logging, I guess that's okay too. Thanks!

    Read the article

  • Google Analytics Export API - nextPagePath data

    - by Btibert3
    I am probably missing something obvious, but I do not understand why, when I query:

        start.date = DATE_START,
        end.date = DATE_END,
        dimensions = c("ga:pagePath", "ga:previousPagePath"),
        metrics = c("ga:pageviews"),
        filters = mypageofinterest,
        table.id = "ga:mytable",
        max.results = RESULTS

    my data return as expected, with all of the previous pages including (entrance). However, when I modify the dimensions to use nextPagePath:

        start.date = DATE_START,
        end.date = DATE_END,
        dimensions = c("ga:pagePath", "ga:nextPagePath"),
        metrics = c("ga:pageviews"),
        filters = mypageofinterest,
        table.id = "ga:mytable",
        max.results = RESULTS

    only one line of data is returned; the pagePath and nextPagePath are identical. I replicated this result using the Query Explorer. What am I missing or doing wrong? I was expecting to see a large number of "next" pages, including (exit). Thanks in advance.

    Read the article

  • SQLSaturday #60 - Cleveland Rocks!

    - by Mike C
    Looking forward to seeing all the DBAs, programmers and BI folks in Cleveland at SQLSaturday #60 tomorrow! I'll be presenting on (1) Intro to Spatial Data and (2) Build Your Own Search Engine in SQL. I've reworked the Spatial Data presentation based on feedback from previous SQLSaturday events and added more sample code. I also expanded the Build Your Own Search Engine code samples to demonstrate additional FILESTREAM functionality. See you all tomorrow! A little road music, please! http://www.youtube.com/watch?v=vU0JpyH1gC...(read more)

    Read the article

  • Microsoft Access 2010: How to Add, Edit, and Delete Data in Tables

    Tables are an integral part of databases and related tasks in Access 2010 because they act as the containers that hold all the data. They may be basic in format, but their role is undeniably important. So, to get you up to speed on working with tables, let's begin adding, editing, and deleting data. These are standard tasks that you will need to perform from time to time, so it is a good idea to start learning how to execute them now. As is sometimes the case with our tutorials, we will be working with a specific sample. To learn the tasks, read over the tutorial and then apply...

    Read the article

  • Skynet Big Data Demo Using Hexbug Spider Robot, Raspberry Pi, and Java SE Embedded (Part 4)

    - by hinkmond
    Here's the first sign of life of a Hexbug Spider Robot converted to become a Skynet Big Data model T-1. Yes, this is the T-1, the precursor to the Cyberdyne Systems T-101 (and you know where that will lead...). It is demonstrating a heartbeat, driven by a simple Java SE Embedded program. See: Skynet Model T-1 Heartbeat. It's alive!!! Well, almost alive. At least there's a pulse. We'll program more of its actions next, and then finally connect it to Skynet Big Data to do more advanced stuff, like hunt for Sarah Connor. Java SE Embedded programming makes it simple to create the first model in the long line of T-XXX robots to take on the world. Raspberry Pi makes connecting it all together on one simple device easy. Next post, I'll show how the wires are connected to drive the T-1 robot. Hinkmond

    Read the article

  • Converting large files in python

    - by Cenoc
    I have a few files, each ~64GB in size, that I would like to convert to HDF5 format. I was wondering what the best approach for doing so would be? Reading line-by-line seems to take more than 4 hours, so I was thinking of using multiprocessing in sequence, but was hoping for some direction on the most efficient way to do this without resorting to Hadoop. Any help would be very much appreciated (and thank you in advance). EDIT: Right now I'm just doing a for line in fd: approach. After that, I just check to make sure I'm picking out the right sort of data, which is a very short operation; I'm not writing anywhere, and it still takes around 4 hours to complete. I can't read fixed-size blocks of data because the blocks in this weird file format are not a standard size: it switches between three different sizes, and you can only tell which by reading the first few characters of the block.
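    A minimal sketch of one chunked-read approach, under stated assumptions: a one-byte code at the start of each block selects one of the three block sizes, and h5py is available for the HDF5 output. The codes, sizes, and parse_block body are hypothetical placeholders for the real format:

        import h5py

        # Hypothetical: a one-byte code at the start of each block gives its size.
        BLOCK_SIZES = {b"A": 64, b"B": 128, b"C": 256}   # placeholder sizes
        READ_CHUNK = 64 * 1024 * 1024                    # read 64 MB at a time
        BATCH = 100_000                                  # rows buffered per HDF5 write

        def parse_block(block):
            # Placeholder parser: pull one float out of a raw block.
            return float(block[1:9].strip() or b"0")

        def convert(src, dst):
            buf, pending = [], b""
            with open(src, "rb") as fd, h5py.File(dst, "w") as h5:
                ds = h5.create_dataset("values", shape=(0,), maxshape=(None,),
                                       dtype="f8", chunks=True)

                def flush():
                    n = ds.shape[0]
                    ds.resize((n + len(buf),))
                    ds[n:] = buf
                    del buf[:]

                while True:
                    chunk = fd.read(READ_CHUNK)
                    data, pos = pending + chunk, 0
                    while pos < len(data):
                        size = BLOCK_SIZES.get(data[pos:pos + 1])
                        if size is None:
                            raise ValueError("unknown block code at offset %d" % pos)
                        if pos + size > len(data):
                            break                        # partial block: carry it over
                        buf.append(parse_block(data[pos:pos + size]))
                        pos += size
                    pending = data[pos:]
                    if len(buf) >= BATCH:
                        flush()
                    if not chunk:
                        break                # any trailing partial block is ignored
                if buf:
                    flush()

    Reading megabytes at a time and batching the HDF5 writes avoids both per-line Python overhead and per-row dataset writes; multiprocessing only pays off afterwards, and only if parsing (rather than disk I/O) turns out to be the bottleneck.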

    Read the article

  • Mass Transit Visualizations Reveal Cities’ Daily Movements [Video]

    - by Jason Fitzpatrick
    If you’re a sucker for data visualization (and we certainly are), this collection of mass transit data visualized over city maps is fascinating, and makes mass transit look like a cell culture unfolding. Check out one day in the life of the New York City mass transit system in the video above, then hit up the link below to check out other cities, including Chicago, Washington D.C., Boston, and Manchester. Mesmerizing Visualizations Show Mass-Transit Patterns of Major Cities [Wired]

    Read the article

  • High Tech Product Companies: Benchmark Your Sales & Marketing Data Management

    - by user709269
    Aberdeen’s Q4 2010 Quarterly Business Review found that 74% of the Sales and Marketing organizations in High Tech product manufacturing have strategic CRM initiatives in 2011. Aberdeen Group is conducting a survey that will help high tech product companies such as yours determine the Best-in-Class procedures for capturing, managing, and disseminating business data. If your product company is planning on implementing a CRM solution, or is simply evaluating the potential benefits, we would appreciate your feedback in this brief, 10-minute survey. You will be able to compare your experiences in leveraging customer information for sales and marketing with those of your peers, benchmark your performance, and see how you can achieve Best-in-Class results. Individual responses will be kept strictly confidential, and data will only be used in aggregate. In appreciation for sharing your time and thoughts with us, we will provide you complimentary access to the full benchmark report as soon as it is published (a $399 value). Take the survey.

    Read the article

  • Programming to ANSI standards (for engineering)

    - by Jake
    I am currently tasked with writing software to help engineers produce standard-compliant designs. If there is a bad design, the software will report an error or warning. Maybe it's just me, but anyone who has done this should be familiar with the massive ANSI standards tables like this one: http://en.wikipedia.org/wiki/Nominal_Pipe_Size Computers are, as their name suggests, computing machines, not lookup machines. I feel that feeding formulas into computers and churning out standard-compliant designs is much more efficient than doing memory-intensive data lookups that are prone to human input errors and susceptible to "data updates". I actually think that there are formulas to calculate all those numbers, but nobody so far has been able to give me that information. Has anyone been through this before? What is THE best approach to this? Thanks for sharing.
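    For what it's worth, dimension standards like the pipe-size table above are typically defined by their published values rather than by closed-form formulas, so one common approach is to encode the table once as versioned, reviewed data and validate designs against it. A minimal sketch in Python; the outside-diameter values below are real ASME B36.10 figures for a few sizes, but the tolerance check and its default are hypothetical:

        # Nominal Pipe Size -> outside diameter in inches (ASME B36.10).
        NPS_OD_IN = {
            "1/8": 0.405, "1/4": 0.540, "3/8": 0.675, "1/2": 0.840,
            "3/4": 1.050, "1": 1.315, "2": 2.375, "4": 4.500,
        }

        def check_pipe_od(nps, designed_od, tol=0.005):
            """Return a list of problems with a proposed pipe outside diameter."""
            problems = []
            standard = NPS_OD_IN.get(nps)
            if standard is None:
                problems.append("NPS %s is not a size in the encoded table" % nps)
            elif abs(designed_od - standard) > tol:
                problems.append("OD %.3f in. deviates from the standard %.3f in."
                                % (designed_od, standard))
            return problems

    Keeping the table in a single versioned data file addresses the human-input-error and "data updates" concerns even if no underlying formula exists.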

    Read the article

  • Using Microsoft's Chart Controls In An ASP.NET Application: Serializing Chart Data

    In most usage scenarios, the data displayed in a Microsoft Chart control comes from some dynamic source, such as a database query. The appearance of the chart can be modified dynamically as well; past installments in this article series showed how to programmatically customize the axes, labels, and other appearance-related settings. However, it is possible to statically define the chart's data and appearance strictly through the control's declarative markup. One of the demos examined in the Getting Started article (http://www.4guysfromrolla.com/articles/072209-1.aspx) rendered a column chart with seven columns whose labels and values were defined statically in the <asp:Series> tag's...

    Read the article

  • Recovering a deleted partition

    - by Kishore
    I had a dual-boot PC running Ubuntu 12.04 and Windows 7. About a month back, I deleted the Ubuntu partition via the disk management utility (I do not remember whether or not I formatted the partition after performing this action). I ran into some GRUB issues and used LILO to solve them, following the simple instructions described in this blog post. I now realize that there were some files in the Ubuntu installation that I need. Of course, I backed up my data, but apparently not this folder. Is there any way to get the data back? I tried following the process suggested in another post on Ask Ubuntu (suggesting the use of TestDisk), but was not able to even install TestDisk. The live USB I use is running Ubuntu 12.04 and it does not have the Synaptic package manager. Installing from the terminal does not work either; even after I run:

        sudo apt-get update
        sudo apt-get upgrade

    the command:

        sudo apt-get install testdisk

    fails to work.

    Read the article

  • The DBA Team tackles data corruption

    Paul Randal joins the team in this instalment of the DBA Team saga. In this episode, Monte Bank is trying to cover up insider trading - using data corruption to eliminate the evidence, and a patsy DBA to take the blame. It's a great story with useful advice on how to perform thorough data recovery tasks.

    Read the article

  • Finding out the shared hosting providers located in a particular data center

    - by unixman83
    I know the physical locations of the data centers that I want my website hosted in. One of these is at 350 E Cermak in Chicago, IL. My problem is that I am looking for all the providers of low-cost shared hosting in this data center. Do you have a list? And if you do have such a list, can you please tell me how you came up with it? I know many discount hosting providers are physically located in the Arizona-Utah area, but I am located near Chicago.

    Read the article

  • Data architecture for event log metrics?

    - by elliot42
    My service has a large ongoing number of user events, and we would like to do things like "count occurrences of event type T since date D." We are trying to make two basic decisions:

    1. What to store? Storing every event vs. only storing aggregates:
       - (Event-log style) log every event and count them later, vs.
       - (Time-series style) store a single aggregated "count of event E for date D" for every day.
    2. Where to store the data?
       - In a relational database (particularly MySQL)
       - In a non-relational (NoSQL) database
       - In flat log files (collected centrally over the network via syslog-ng)

    What is standard practice, and where can I read more about comparing the different types of systems? Additional details:
    - The total event stream is large, potentially hundreds of thousands of entries per day.
    - But our current need is only to count certain types of events within it.
    - We don't necessarily need real-time access to the raw data or the aggregation results.

    IMHO, "log all events to files, crawl them at a later time to filter and aggregate the stream" is a pretty standard UNIX way, but my Rails-y compatriots seem to think that nothing is real unless it's in MySQL.
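    As an illustration of the log-everything-then-aggregate option, a minimal Python sketch that rolls flat log files up into per-day counts; the line format assumed here (an ISO timestamp followed by the event type) is hypothetical, not a known schema:

        import collections
        import sys

        def daily_counts(paths, event_type):
            """Count one event type per day across flat log files.

            Assumes lines like: 2012-06-01T14:03:59 signup user=42 ...
            """
            counts = collections.Counter()
            for path in paths:
                with open(path) as fh:
                    for line in fh:
                        fields = line.split()
                        if len(fields) >= 2 and fields[1] == event_type:
                            counts[fields[0][:10]] += 1   # key on YYYY-MM-DD
            return counts

        if __name__ == "__main__":
            # usage: python daily_counts.py EVENT_TYPE logfile [logfile ...]
            for day, n in sorted(daily_counts(sys.argv[2:], sys.argv[1]).items()):
                print(day, n)

    "Count of event T since date D" then becomes a cheap sum over the stored per-day aggregates, which is the usual argument for the time-series style when the raw events are rarely re-examined.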

    Read the article

  • Data for animation

    - by saadtaame
    Say you are using C/SDL for a 2D game project. It's often the case that people use a structure to represent a frame in an animation; the struct consists of an image and how much time the frame is supposed to be visible. Is this data sufficient to represent somewhat complex animations? Is it a good idea to separate animation management code from animation data? Can somebody provide a link to animation tutorials that store animations in a file and retrieve them when needed? I read about this in a book (AI Game Programming Wisdom) but would like to see a real implementation.
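    As a sketch of the data/management split, a minimal Python version in which a frame really is just (image, duration) and the player knows nothing about where the frames came from; the image is left as an opaque handle, since the concrete type depends on your SDL bindings:

        from dataclasses import dataclass

        @dataclass
        class Frame:
            image: object       # opaque image handle (e.g. an SDL surface)
            duration: float     # seconds this frame stays visible (> 0)

        class Animation:
            """Plays a sequence of Frames; the frame data itself lives elsewhere."""

            def __init__(self, frames, loop=True):
                self.frames = list(frames)
                self.loop = loop
                self.index = 0
                self.elapsed = 0.0

            def update(self, dt):
                """Advance by dt seconds and return the frame to draw."""
                self.elapsed += dt
                while self.elapsed >= self.frames[self.index].duration:
                    self.elapsed -= self.frames[self.index].duration
                    if self.index + 1 < len(self.frames):
                        self.index += 1
                    elif self.loop:
                        self.index = 0
                    else:
                        break                 # non-looping: hold the last frame
                return self.frames[self.index]

    Because the data is nothing more than (image, duration) pairs, the frames can be loaded from a plain text or JSON manifest and handed to the player unchanged, which is exactly the separation the question asks about.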

    Read the article

  • [EF + Oracle] Inserting Data (1/2)

    - by JTorrecilla
    Prologue: Following the EF series (I, II and III), in this chapter we will see how to create a DB record with EF.

    Inserting Data: As we indicated in the second post, "One Entity matches a DB record, and one property matches a Table Column." To start, we need to create an object from one of the Entities:

        EMPLEADOS empleado = new EMPLEADOS();

    As I mentioned previously, you can also use the static factory function that VS defines for each Entity. Once we have created the object, we can access its properties and fill them in as with any ordinary class:

        empleado.NOMBRE = "Javier Torrecilla";

    After filling in the Entity's properties, we need to add the object to the appropriate ObjectSet in the ObjectContext:

        enti.EMPLEADOS.AddObject(empleado);

    or

        enti.AddToEMPLEADOS(empleado);

    Both methods do the same thing: create an insert statement. Have we finished? No. Every Entity has a property called EntityState. This property is an EntityState enum with the following values:
    - Detached: the Entity is created, but not added to the Context.
    - Unchanged: there are no pending changes in the Entity.
    - Added: the Entity is added to the ObjectSet, but not yet sent to the DB.
    - Deleted: the object is deleted from the ObjectSet, but not yet from the DB.
    - Modified: there are pending changes to confirm.

    Let's look at the values the property takes during the creation steps:
    1. While the object is being created and we are filling its properties: EntityState.Detached.
    2. After adding it to the ObjectSet: EntityState.Added. This does not yet mean the record is in the DB.
    3. Saving the data: to save the data to the DB, we call the SaveChanges method of the ObjectContext. After invoking it, the property will be EntityState.Unchanged.

    What does the SaveChanges method do? It synchronizes and sends all pending changes to the DB, adding, modifying or deleting all Entities whose EntityState property is set to Added, Deleted or Modified. After it finishes, all added or modified Entities change their state to Unchanged, and deleted Entities take the Detached state.

    Read the article
