Search Results

Search found 70827 results on 2834 pages for 'data quality services'.


  • Displaying Google Analytics data on my website

    - by anon-user0
    I sell ad space on my websites directly to advertisers. I have an ad page where I want to show Google Analytics information that updates automatically, without me having to update it manually every day or every month. Something like this: http://wstats.net/en/website/riverplate.com#stat_trafic I don't want to use embedded or iframed third-party services. I know Google has a public API, and you can connect it to the Google Chart API to show pretty graphs. There is a tutorial by Google on how to do it here: https://developers.google.com/analytics/resources/articles/gdataAnalyticsCharts A few problems: I don't know much JavaScript, and the JavaScript seems to prompt for authentication rather than logging in automatically (from my understanding of the comments in the code). Does anyone know of a ready-made script that does what I am looking for, or how I can fix this code so that it displays analytics info without prompting for authentication? Thanks.
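
    One workaround worth noting (my sketch, not part of the original question): do the Analytics call server-side on a schedule with your own credentials, cache the numbers as JSON, and have the public ad page read only the cache, so visitors never see an authentication prompt. fetch_pageviews below is a hypothetical placeholder for whatever server-side Analytics client you end up using.

        import json
        import time
        from pathlib import Path

        CACHE = Path("analytics_cache.json")
        MAX_AGE = 6 * 60 * 60  # refresh at most every six hours

        def fetch_pageviews():
            # Hypothetical placeholder: call the Analytics API here with
            # server-side credentials, so no visitor is ever asked to log in.
            return {"today": 0}

        def get_stats():
            """Return cached stats, refreshing the cache once it goes stale."""
            if CACHE.exists() and time.time() - CACHE.stat().st_mtime < MAX_AGE:
                return json.loads(CACHE.read_text())
            stats = {"pageviews": fetch_pageviews(), "fetched_at": time.time()}
            CACHE.write_text(json.dumps(stats))
            return stats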

  • Gracefully terminate a request-based service on the server

    - by Jatin
    In our web application, each HTTP request can trigger a lot of computation on the back end. The output can take anywhere from 10 seconds to an hour to produce, and while it is being computed, "Waiting..." is shown on the website for that user. But it can happen that a user abandons the request partway through. What can be done on the back end so that the computation is stopped midway to save resources? What tactics can be applied here? Ideally, rather than killing the thread directly, a graceful termination policy would work wonders.
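
    A minimal sketch of the cooperative-cancellation idea (my illustration, not from the post): the worker checks a shared flag between units of work and exits cleanly when the client has gone away, instead of being killed from outside.

        import threading
        import time

        def long_computation(cancelled: threading.Event):
            partial = []
            for step in range(10_000):
                if cancelled.is_set():
                    # Client gave up: release resources and stop cleanly.
                    return None
                partial.append(step * step)  # one small unit of real work
                time.sleep(0.001)
            return partial

        cancelled = threading.Event()
        worker = threading.Thread(target=long_computation, args=(cancelled,))
        worker.start()
        # ...when the request handler notices the connection has dropped:
        cancelled.set()
        worker.join()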

  • Persisting NLP-parsed data

    - by tjb1982
    I've recently started experimenting with NLP using Stanford's CoreNLP, and I'm wondering what some of the standard ways are to store NLP-parsed data for something like a text-mining application. One way I thought might be interesting is to store the children as an adjacency list and make good use of recursive queries (Postgres supports this, and I've found it works really well). Something like this:

        Component ( id, POS, parent_id )
        Word ( id, raw, lemma, POS, NER )
        CW_Map ( component_id, word_id, position int )

    But I assume there are probably many standard ways to do this, adopted over the years by people working in the field, depending on what kind of analysis is being done. So what are the standard persistence strategies for NLP-parsed data, and how are they used?
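
    As a rough illustration of the adjacency-list idea (my sketch, using SQLite rather than Postgres purely so it runs self-contained), a WITH RECURSIVE query walks a parse subtree from any component down to its leaves:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE component (id INTEGER PRIMARY KEY, pos TEXT, parent_id INTEGER);
            INSERT INTO component VALUES (1, 'S',  NULL),
                                         (2, 'NP', 1),
                                         (3, 'VP', 1),
                                         (4, 'NN', 2);
        """)

        # Collect the subtree rooted at component 1.
        rows = conn.execute("""
            WITH RECURSIVE subtree(id, pos, parent_id) AS (
                SELECT id, pos, parent_id FROM component WHERE id = ?
                UNION ALL
                SELECT c.id, c.pos, c.parent_id
                FROM component c JOIN subtree s ON c.parent_id = s.id
            )
            SELECT id, pos, parent_id FROM subtree
        """, (1,)).fetchall()

        for row in rows:
            print(row)  # (1, 'S', None), (2, 'NP', 1), (3, 'VP', 1), (4, 'NN', 2)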

  • Cannot create a connection to a data source in VB 2010 [closed]

    - by CLO_471
    I seem to be having some issues with Visual Basic 2010. I am trying to create a connection to a data source and it is just not working; even the old connections in my other projects are not working. In VB I try to create a connection by clicking Add New Data Source > Database > DataSet > New Connection, and when I click on New Connection the screen disappears and I am not able to select anything. Does anyone know of a glitch or something? I have checked my ODBC connections and all is good, and I have been able to play around with my Access connections (which are what I am trying to connect to) and queries, and everything seems to be working fine. I have rebooted several times, uninstalled and reinstalled VB, and have also repaired the entire application. I am not sure what else to try. Any help would be much appreciated. My computer specs: XP SP3, Core 2 Duo at 2.80 GHz, and 3 GB RAM.

  • WCF/web service architecture question

    - by M.R.
    I have a requirement to expose certain items from a CMS as a web service, and I need some suggestions. The structure of the items is as follows:

        item
        - field 1
        - field 2
        - field 3
        - field 4

    So one would think the classes for this would be:

        public class MyItem
        {
            public string ItemName { get; set; }
            public List<MyField> Fields { get; set; }
        }

        public class MyField
        {
            public string FieldName { get; set; }
            public string FieldValue { get; set; } // always a string (except - see below)
        }

    This works when the structure is always one level deep, but sometimes one of the fields actually points to ANOTHER item (MyItem), or to multiple items (List<MyItem>). So I thought I would change the structure of MyField as follows, making FieldValue an object:

        public class MyField
        {
            public string FieldName { get; set; }
            public object FieldValue { get; set; } // changed to object
        }

    Now I can put whatever I want in there. This is great in theory, but how will clients consume it? I suspect that when users add a reference to this web service, they won't know which type is being returned in that field. This seems like a not-so-good design. Is there a better approach?
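
    One conventional fix (my sketch, not from the original post) is to make each field self-describing with an explicit discriminator, so clients branch on a known tag instead of guessing at an opaque object. In WCF terms this is the same job the known-types mechanism does: the service declares every shape the field can take. Illustrated here in Python for brevity:

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class MyField:
            name: str
            kind: str  # "text", "item", or "item_list" -- the discriminator
            text: str = ""                                        # kind == "text"
            items: List["MyItem"] = field(default_factory=list)   # otherwise

        @dataclass
        class MyItem:
            item_name: str
            fields: List[MyField] = field(default_factory=list)

        inner = MyItem("child", [MyField("field 1", "text", text="hello")])
        outer = MyItem("parent", [MyField("field 2", "item_list", items=[inner])])

        for f in outer.fields:
            if f.kind == "text":
                print(f.name, f.text)
            else:
                print(f.name, [i.item_name for i in f.items])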

  • Methods to Validate User-Supplied Data

    - by clifgray
    I am working on a website where users record data from certain locations, and they input an address to tag each location with a GPS coordinate. Pretty frequently those locations end up tagged more than a mile away from the actual location, and I am trying to implement a few ways to validate the data. Right now I am thinking of:

        - adding a flag on location pages so other users can mark an "incorrect location", and I can go one by one and fix them
        - letting users with a decent amount of experience (reputation) edit a location's GPS coordinates
        - having a mod validate each location before it goes live, making sure it is a good location

    Are these reasonable? I know the first will take a lot of my time, and I would love some suggestions.
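
    A cheap automated check worth adding to that list (my sketch): when a user submits both an address and a pin, geocode the address independently and flag the submission if the two points disagree by more than a threshold. The haversine formula gives the distance; the geocoding step would be whatever geocoder you already use.

        import math

        def haversine_miles(lat1, lon1, lat2, lon2):
            """Great-circle distance between two points, in miles."""
            r = 3958.8  # mean Earth radius in miles
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp = math.radians(lat2 - lat1)
            dl = math.radians(lon2 - lon1)
            a = (math.sin(dp / 2) ** 2
                 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
            return 2 * r * math.asin(math.sqrt(a))

        def needs_review(user_lat, user_lon, geo_lat, geo_lon, threshold=1.0):
            """Flag pins placed over a mile from the geocoded address."""
            return haversine_miles(user_lat, user_lon, geo_lat, geo_lon) > threshold

        print(needs_review(40.7128, -74.0060, 40.7484, -73.9857))  # True, ~2.6 mi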

  • Grant a user permissions on a www-data-owned /var/www

    - by George Pearce
    I have a simple web server set up for some websites, with a layout something like:

        site1: /var/www/site1/public_html/
        site2: /var/www/site2/public_html/

    I have previously used the root user to manage files and then given them back to www-data when I was done (these are WordPress sites, and www-data ownership is needed for WP uploads to work). This probably isn't the best way. I'm trying to find a way to create another user (let's call it user1) that has permission to edit files in site1, but not site2, without the files ceasing to be owned by www-data. Is there any way for me to do this?
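
    For what it's worth, the usual Unix answer is a shared group: leave everything owned by www-data, give site1 a dedicated group that user1 belongs to, and grant that group write access (with the setgid bit on directories so new files inherit the group). A sketch of the one-time fixup, run as root; the group name site1edit is my invention:

        import os
        import stat
        import shutil
        from pathlib import Path

        SITE = Path("/var/www/site1")

        for p in [SITE, *SITE.rglob("*")]:
            # Owner stays www-data; the editing group is handed to user1.
            shutil.chown(p, user="www-data", group="site1edit")
            mode = p.stat().st_mode
            extra = stat.S_ISGID if p.is_dir() else 0  # dirs pass the group on
            os.chmod(p, mode | stat.S_IRGRP | stat.S_IWGRP | extra)

    site2 keeps its old group, so user1 gains no rights there.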

  • What data structure would you use for collision detection in a tilemap?

    - by Solom
    Currently I store the blocks in my map that could be colliding with the player in a HashMap (Vector2, Block), where the Vector2 holds the coordinates of the block. Whenever the player moves, I iterate over all the blocks that are within a specific range around the player and check whether a collision has happened. This was my first rough idea of how to implement collision detection. At the moment, as the player moves, I put more and more blocks into the HashMap until a specific "upper bound" is reached, then I clear it and start over. I was fully aware that this is not the brightest solution to the problem, but as I said, it was a rough first implementation (I'm still learning a lot about game design and data structures). What data structure would you use to store the blocks? I thought about a Queue or even a Stack, but I'm not sure, hence I ask.
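
    For a uniform tilemap the usual answer is no extra bookkeeping at all (a sketch of the idea below): keep tiles in a 2D array, or in a dict keyed by integer tile coordinates for sparse maps, convert the player's bounding box into a tile range, and test only those few cells. Nothing needs to be inserted or cleared as the player moves.

        TILE = 32  # tile size in pixels (assumed)

        # Sparse map: dict keyed by integer tile coordinates.
        solid = {(5, 3): "rock", (6, 3): "rock", (7, 3): "rock"}

        def colliding_tiles(px, py, pw, ph):
            """Yield solid tiles overlapping the player's AABB (in pixels)."""
            x0, y0 = int(px // TILE), int(py // TILE)
            x1, y1 = int((px + pw - 1) // TILE), int((py + ph - 1) // TILE)
            for ty in range(y0, y1 + 1):
                for tx in range(x0, x1 + 1):
                    if (tx, ty) in solid:
                        yield tx, ty

        print(list(colliding_tiles(150.0, 90.0, 28, 28)))  # [(5, 3)]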

  • SQL SERVER – What is SSAS Tabular Data model and Why to use it – Part 2

    - by Pinal Dave
    In my last article, I talked about the basics of the tabular data model and why to use it, and demonstrated the step-by-step creation of a basic tabular model project. In this part I'm going to throw some light on how to create measures and analyze them in Excel. If you look at the tabular project closely, you will notice that we have not defined any measures yet, so that is the first step. Open the solution and select the column you want to define as a measure, then click on the summation icon on the toolbar. You will see the aggregated result at the bottom of that column. You have other choices as well, like average, min, max, count and distinct count. After creating the required measures, we need to analyze our data in Excel. To do this, click on the Excel icon in the upper left corner of the toolbar; this will open your analysis in Excel. Notice the pivot table field list here: I have highlighted the measures that we created in the earlier step. Now we can use these measures in our analysis. Next, we put the required fields in their respective places as column labels, row labels, values and report filter for the analysis. The snapshot below shows region-wise sales on a yearly basis. You can also apply filters to the analysis by placing the slicer field in the report filter; in our example, we will take the English product name as a filter, as depicted in the snapshot below. Optionally, you can use the slicer to filter data more interactively. To improve our analysis further, we can insert pivot charts. That's all for this time; in my next post I'm going to show in detail how to create hierarchies, perspectives, KPIs and many more features. Author: Namita Sharma, Senior Corporate Trainer at Koenig Solutions. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: SSAS

  • How to commit a file conversion?

    - by l0b0
    Say you've committed a file of type foo in your favorite VCS:

        $ vcs add data.foo
        $ vcs commit -m "My data"

    After publishing you realize there's a better data format, bar. To convert, you can use one of these approaches:

        $ vcs mv data.foo data.bar
        $ vcs commit -m "Preparing to use format bar"
        $ foo2bar --output data.bar data.bar
        $ vcs commit -m "Actual foo to bar conversion"

    or

        $ foo2bar --output data.foo data.foo
        $ vcs commit -m "Converted data to format bar"
        $ vcs mv data.foo data.bar
        $ vcs commit -m "Renamed to fit data type"

    or

        $ foo2bar --output data.bar data.foo
        $ vcs rm data.foo
        $ vcs add data.bar
        $ vcs commit -m "Converted data to format bar"

    In the first two cases the conversion is not an atomic operation and the file extension is "lying" in the first commit. In the last case the conversion will not be detected as a move operation, so as far as I can tell it'll be difficult to trace the file's history across the commit. Although I'd instinctively prefer the last solution, I can't help thinking that tracing history should be given very high priority in version control. What is the best thing to do here?

  • Creating a new naming context in OUD

    - by Sylvain Duloutre
    A naming context (also known as a directory suffix) is a DN that identifies the top entry in a locally held directory hierarchy. A new naming context can be created using ODSM, the OUD GUI admin console, as described in http://docs.oracle.com/cd/E29407_01/admin.111200/e22648/server_config.htm#CBDGCJGF It can also be created using the dsconfig command line as described below. Creating a new naming context consists of three steps.

    First, create a Local Backend Workflow element (myNewDb in this example), responsible for the naming context base DN, e.g. o=example:

        dsconfig create-workflow-element \
          --set base-dn:o=example \
          --set enabled:true \
          --type db-local-backend \
          --element-name myNewDb \
          --hostname <your host> \
          --port <admin port> \
          --bindDN cn=Directory\ Manager \
          --bindPasswordFile ****** \
          --no-prompt

    Second, create a Workflow element (workFlowForMyNewDb in this example) associated with the Local Backend Workflow element. Workflow elements are used to route LDAP requests to the appropriate database, based on the target base DN:

        dsconfig create-workflow \
          --set base-dn:o=example \
          --set enabled:true \
          --set workflow-element:myNewDb \
          --type generic \
          --workflow-name workFlowForMyNewDb \
          --hostname <your host name> \
          --port <admin port> \
          --bindDN cn=Directory\ Manager \
          --bindPasswordFile ****** \
          --no-prompt

    Then the workflow element must be made visible outside of the directory, i.e. added to the internal "routing table". This is done by adding the workflow to the appropriate Network Group. A network group is used to classify incoming client connections and route requests to workflows:

        dsconfig set-network-group-prop \
          --group-name network-group \
          --add workflow:workFlowForMyNewDb \
          --hostname <your hostname> \
          --port <admin port> \
          --bindDN cn=Directory\ Manager \
          --bindPasswordFile ****** \
          --no-prompt

    At that stage, it is possible to import entries into the new naming context o=example.

  • Best/easiest technology for a RESTful web service [closed]

    - by user1751547
    I'm going to be creating a phone app plus a website that will need a web service behind them. Web services are completely outside my domain, so I'm not entirely sure where to start. Does anybody have suggestions for the technology stack I should use, mainly in terms of ease of use and reliability? So far what I've looked at are:

        - RoR
        - Python + Django + TastyPie
        - Python + Flask
        - Microsoft WCF 3.5
        - PHP + some framework

    I would rather not do anything with Java. I'm leaning towards the Python + Django + TastyPie route, as it seems like it would be easy to get up and going and to learn in general. My only concern is the reliability of the libraries (feature-breaking updates, abandonment, etc.). Also, I would prefer to build the website with the same framework so I wouldn't have to learn and use two different ones. Any advice would be helpful, thanks.

  • Best way to indicate more results available

    - by Alex Stangl
    We have a service that returns messages. We want to limit the number returned, either by allowing the caller to specify the max number to return or by using an internal hard limit. We have also thought it would be nice to include in the response whether more messages are available. The "best" way to go about this is not clear. Here are some ideas so far:

        1. Only set the "more messages" indicator if the user did not specify a max limit and the internal max limit was hit.
        2. Same as #1, except that the "more messages" indicator is set regardless of whether the internal hard limit or the user-specified limit is hit.
        3. Same as #1 (or #2), except that we internally read limit + 1 records but only return limit records, so we know "for sure" there is at least one additional message rather than "maybe" there are additional messages.
        4. Do away with the "more messages" flag, as it is confusing and unnecessary, and instead force the user to keep calling the API until it returns no messages.
        5. Change the "more messages" indicator to something more akin to an EOF indicator, only set when the last message is known to have been retrieved and returned.

    What do you think is the best solution? (It doesn't have to be one of the above choices.) I searched and couldn't find a similar question already asked. Hopefully this is not "too subjective".
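
    For reference, option 3, reading one record past the limit, is the pattern most paginated APIs settle on, because it turns "maybe more" into a definite answer for the price of one extra row. A minimal sketch (mine, with a toy backing store):

        def fetch_page(fetch_messages, limit):
            """Return (messages, has_more) by reading one row past the limit."""
            rows = fetch_messages(limit + 1)  # ask the store for limit + 1
            has_more = len(rows) > limit      # an extra row proves more exist
            return rows[:limit], has_more

        store = list(range(25))               # toy store with 25 messages
        fetch = lambda n: store[:n]

        page, more = fetch_page(fetch, 10)
        print(len(page), more)  # 10 True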

  • Want to Learn SQL Server 2012?

    - by andyleonard
    Or SSIS 2012? SSRS 2012? SSAS 2012? There's no substitute for getting your hands on the product, in my opinion. I can hear you thinking, "But Andy, I can't afford to purchase a copy of SQL Server 2012." Are you sure? What if I told you that you can get a full-featured version of SQL Server 2012 Enterprise Edition for $50? Well, you cannot… it's actually less than $50! SQL Server 2012 Developer Edition is available on Amazon, as of this writing, for $41.24 USD. That's about the price of eight...(read more)

  • How should modules access data outside their scope?

    - by Joe
    I run into this same problem quite often. First I create a namespace, and then I add modules to that namespace. The issue I always run into is how best to initialize the application. Naturally, each module has its own startup procedure, so should this data (in some cases not code, just a list of items to run) stay with the module? Or should there be a startup procedure in the global namespace which holds the startup data for ALL the modules? Which is the more robust way of organizing this? Should some things be centralized, or should there be strict adherence to modules encapsulating everything about themselves? Though this is a general architecture question, JavaScript-centric answers would be really appreciated!
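
    One middle ground, sketched in Python rather than JavaScript (the pattern translates directly): each module keeps its own startup routine and merely registers it with a central bootstrapper, so the global namespace owns only the "when", never the "what".

        _registry = []

        def on_startup(fn):
            """Modules decorate their own init; the bootstrapper stays generic."""
            _registry.append(fn)
            return fn

        # inside module A
        @on_startup
        def init_cache():
            print("cache warmed")

        # inside module B
        @on_startup
        def init_router():
            print("routes registered")

        def boot():
            for fn in _registry:
                fn()

        boot()  # cache warmed / routes registered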

  • Send raw data to USB parallel port after upgrading to 11.10

    - by zaphod
    I have a laser cutter connected via a generic USB-to-parallel adapter. The laser cutter speaks HPGL, as it happens, but since this is a laser cutter and not a plotter, I usually want to generate the HPGL myself, since I care about the ordering, speed, and direction of the cuts and so on. In previous versions of Ubuntu, I was able to print to the cutter by copying an HPGL file directly to the corresponding USB "lp" device. For example:

        cp foo.plt /dev/usblp1

    Well, I just upgraded to Ubuntu 11.10 oneiric, and I can't find any "lp" devices in /dev anymore. D'oh! What's the preferred way to send raw data to a parallel port in Ubuntu? I've tried System Settings > Printing > Add, hoping that I might be able to associate my device with some kind of "raw printer" driver and print to it with a command like

        lp -d LaserCutter foo.plt

    but my USB-to-parallel adapter doesn't seem to show up in the list. What I do see are my HP Color LaserJet, two USB-to-serial adapters, "Enter URI", and "Network Printer". Meanwhile, over in /dev, I do see /dev/ttyUSB0 and /dev/ttyUSB1 devices for the two USB-to-serial adapters. I don't see anything obvious corresponding to the HP printer (which was /dev/usblp0 prior to the upgrade), except for generic USB stuff. For example, sudo find /dev | grep lp produces no output. I do seem to be able to print to the HP printer just fine, though. The printer setup GUI gives it a device URI starting with "hp:", which isn't much help for the parallel adapter. The CUPS administrator's guide makes it sound like I might need to feed it a device URI of the form parallel:/dev/SOMETHING, but of course if I had a /dev/SOMETHING I'd probably just go on writing to it directly. Here's what dmesg says after I disconnect and reconnect the device from the USB port:

        [  924.722906] usb 1-1.1.4: USB disconnect, device number 7
        [  959.993002] usb 1-1.1.4: new full speed USB device number 8 using ehci_hcd

    And here's how it shows up in lsusb -v:

        Bus 001 Device 008: ID 1a86:7584 QinHeng Electronics CH340S
        Device Descriptor:
          bLength                18
          bDescriptorType         1
          bcdUSB               1.10
          bDeviceClass            0 (Defined at Interface level)
          bDeviceSubClass         0
          bDeviceProtocol         0
          bMaxPacketSize0         8
          idVendor           0x1a86 QinHeng Electronics
          idProduct          0x7584 CH340S
          bcdDevice            2.52
          iManufacturer           0
          iProduct                2 USB2.0-Print
          iSerial                 0
          bNumConfigurations      1
          Configuration Descriptor:
            bLength                 9
            bDescriptorType         2
            wTotalLength           32
            bNumInterfaces          1
            bConfigurationValue     1
            iConfiguration          0
            bmAttributes         0x80 (Bus Powered)
            MaxPower               96mA
            Interface Descriptor:
              bLength                 9
              bDescriptorType         4
              bInterfaceNumber        0
              bAlternateSetting       0
              bNumEndpoints           2
              bInterfaceClass         7 Printer
              bInterfaceSubClass      1 Printer
              bInterfaceProtocol      2 Bidirectional
              iInterface              0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x82 EP 2 IN
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0020 1x 32 bytes
                bInterval               0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x02 EP 2 OUT
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0020 1x 32 bytes
                bInterval               0
        Device Status:     0x0000
          (Bus Powered)

  • Loading Wavefront data into a VAO and rendering it

    - by Jordan LaPrise
    I have successfully loaded a triangulated Wavefront (.obj) file into six vectors. The first three contain the locations of the vertices, the UV coords, and the normals; the last three hold the indices for each of the faces. I have been looking into using VAOs and VBOs to render, and I'm not quite sure how to load and render the data. One of my biggest concerns is the fact that indexed rendering only allows you to have one array of indices, meaning I somehow have to make the first three vectors the same size. The only way I thought of doing this is to make three new vectors of equal size and load in the data for each face, but that would completely defeat the purpose of indexing. Any help would be appreciated. Thanks in advance, Jordan
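
    The standard trick (a sketch of the general algorithm, not GL-specific code) is to treat each distinct (position, uv, normal) triple as one output vertex: deduplicate the triples with a dictionary and you get a single interleaved vertex array plus a single index array, still far smaller than fully unrolled data because shared corners collapse to one entry.

        def unify_indices(corners, positions, uvs, normals):
            """corners: one (pos_idx, uv_idx, norm_idx) triple per face corner."""
            seen = {}      # triple -> unified index
            vertices = []  # interleaved [x, y, z, u, v, nx, ny, nz] per vertex
            indices = []   # single index array, e.g. for glDrawElements
            for corner in corners:
                if corner not in seen:
                    seen[corner] = len(vertices)
                    p, t, n = corner
                    vertices.append(positions[p] + uvs[t] + normals[n])
                indices.append(seen[corner])
            return vertices, indices

        # Two triangles sharing an edge: 6 corners -> 4 unique vertices.
        corners = [(0, 0, 0), (1, 1, 0), (2, 2, 0), (2, 2, 0), (1, 1, 0), (3, 3, 0)]
        pos = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
        uv = [[0, 0], [1, 0], [1, 1], [0, 1]]
        nrm = [[0, 0, 1]] * 4
        v, i = unify_indices(corners, pos, uv, nrm)
        print(len(v), i)  # 4 [0, 1, 2, 2, 1, 3]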

  • Best practices for periodically saving game state to disk

    - by Ben Morris
    I'm working on an MMO. All of the player and environment data lives on the server and is kept in memory. There's a "world" object which keeps track of all of the maps, characters, etc., and their relations to each other. To avoid data loss in case of a crash, I've been periodically serializing the world to disk. The trouble is, this object can be quite large, so when the server starts writing there's a noticeable in-game slowdown for a few seconds, which I'd like to avoid. Any pointers on how to go about this more efficiently?
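
    One common pattern (a sketch, assuming the world object can be copied quickly): take a cheap in-memory snapshot on the game thread, then push the slow serialization and disk I/O onto a background thread, so the visible pause lasts only as long as the copy.

        import copy
        import os
        import pickle
        import threading

        def save_world(world, path="world.sav"):
            # Fast part, on the game thread: snapshot the state.
            snapshot = copy.deepcopy(world)

            # Slow part, off the game thread: serialize and write.
            def write():
                tmp = path + ".tmp"
                with open(tmp, "wb") as f:
                    pickle.dump(snapshot, f)
                os.replace(tmp, path)  # atomic swap: no half-written saves

            threading.Thread(target=write, daemon=True).start()

        world = {"maps": ["forest", "cave"], "players": {"ana": {"hp": 80}}}
        save_world(world)

    If even the deepcopy pause is too long, the usual escalations are forking the process (copy-on-write makes the snapshot nearly free on Linux) or journaling only the changes since the last full save.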

  • Reuse Business Logic between Web and API

    - by fesja
    We have a website and two mobile apps that connect through an API. All of the platforms do exactly the same things. Right now the structure is the following:

        - Website: manages the models, controllers, and views for the website. It also executes all background tasks, so if a user creates a place, everything is executed in this code.
        - API: manages models and controllers and returns JSON. If a user creates a place in a mobile app, the place is created here; afterwards we add a background task to update other fields, and this background task is executed by the website.

    We are redoing everything, so it's time to improve the approach. What is the best way to reuse the business logic, so that I only need to code the insert/edit/delete of a place (and other related actions) in one place? Is a service-oriented approach a good idea? For example:

        - Service: has the models and gets, adds, updates and deletes info from the DB.
        - Website: sends the info to the service and renders HTML.
        - API: sends the info to the service and returns JSON.

    Some problems I have found: more initial work? Not sure. It could be slower; any experience? The benefits: we only have the business logic in one place, both for web and API, and it's easier to scale, since we can put each piece on a different server. Other solutions: duplicate the code and be careful not to forget anything (write tests!), or duplicate some code but run background tasks that update the related fields and do the other work (emails, indexing...). A "small" detail: we are 1.3 people on the backend, for now ;)
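
    Worth noting: "service" doesn't have to mean a separate server. A shared service layer (a plain module that both the web controllers and the API controllers import) gives you the business logic in one place without a network hop. A minimal sketch (all names mine):

        class PlaceService:
            """All business rules for places, used by web and API alike."""

            def __init__(self, db):
                self.db = db

            def create_place(self, name, lat, lon):
                if not name:
                    raise ValueError("a place needs a name")
                place_id = self.db.insert("places", {"name": name, "lat": lat, "lon": lon})
                self.enqueue_background(place_id)  # emails, indexing, ...
                return place_id

            def enqueue_background(self, place_id):
                pass  # hand off to your task queue of choice

        class FakeDB:
            def __init__(self):
                self.rows = []

            def insert(self, table, row):
                self.rows.append((table, row))
                return len(self.rows)

        service = PlaceService(FakeDB())
        print(service.create_place("Cafe", 40.4168, -3.7038))  # 1

    The website calls service.create_place(...) and renders HTML; the API calls the same method and returns JSON. Splitting the service onto its own server later is then a transport change, not a rewrite.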

  • Developing a PHP system that tracks other websites' analytics

    - by CodeCrack
    I want to develop a PHP website feature where users sign up, get a JavaScript snippet that displays an image on their site, and let me track the number of visitors, unique hits, clicks, and average visit duration on their pages. Is that something that should be done with open-source analytics software such as http://piwik.org/, or is it doable on my own? If I had to do it myself from scratch, I would use an image/pixel to track each visit, drop a cookie with the JavaScript snippet to track uniques, track clicks with an image click-and-redirect, and I'm not sure about the bounce rate. Any thoughts or opinions are welcome.
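
    The tracking-pixel half of that plan is small enough to sketch. The pattern below is in Python purely for illustration (the PHP version is a straight translation): the endpoint logs the hit, sets a visitor cookie when none is present (that's your uniques count), and returns a 1x1 transparent GIF.

        from wsgiref.simple_server import make_server
        from http.cookies import SimpleCookie
        import time
        import uuid

        PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
                 b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
                 b"\x00\x00\x02\x02D\x01\x00;")  # 1x1 transparent GIF

        def app(environ, start_response):
            cookies = SimpleCookie(environ.get("HTTP_COOKIE", ""))
            visitor = cookies["vid"].value if "vid" in cookies else str(uuid.uuid4())
            unique = "vid" not in cookies

            # In production this row goes to a database; a log line keeps it short.
            print(time.time(), visitor, "unique" if unique else "repeat",
                  environ.get("HTTP_REFERER", "-"))

            headers = [("Content-Type", "image/gif"),
                       ("Cache-Control", "no-store"),  # one hit per page view
                       ("Set-Cookie", "vid=%s; Max-Age=31536000; Path=/" % visitor)]
            start_response("200 OK", headers)
            return [PIXEL]

        if __name__ == "__main__":
            make_server("", 8000, app).serve_forever()  # <img src="http://host:8000/">

    Clicks work the same way, except the endpoint records the hit and then answers with a 302 redirect to the real destination.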
