Search Results

Search found 3403 results on 137 pages for 'monthly statements'.

Page 63/137 | < Previous Page | 59 60 61 62 63 64 65 66 67 68 69 70  | Next Page >

  • What POI (points of interest) DB can I use for a commercial app?

    - by Ellie
    Hi, I need a POI database for a startup project I am working on - there will be a free basic version and a premium paid version, in the sense that the user will pay a monthly subscription. I would like Foursquare-type check-in to places and Plancast-type functionality to search for places (one-line search). I.e. I need to: perform a search for POIs around a location; associate users to that POI, with a timestamp; allow users to add their own POIs; provide free-text search for POIs (a la Google one-line search). The Google API allows great search, but I understand there are limits on the number of requests that can be made? This would prevent scaling, and may cause the application to break when there are too many users. Also, what do Google's T&Cs say about using this in a paid-for service? OpenStreetMap, I understand, does not have these constraints, but does it also provide a good one-line-search-type API? Or how else could I solve this? Many thanks for any advice, Ellie

    Read the article

  • Changing a SUM that returns NULL to zero

    - by Lee_McIntosh
    I have a stored procedure as follows: CREATE PROC [dbo].[Incidents] (@SiteName varchar(200)) AS SELECT ( SELECT SUM(i.Logged) FROM tbl_Sites s INNER JOIN tbl_Incidents i ON s.Location = i.Location WHERE s.Sites = @SiteName AND i.[month] = DATEADD(mm, DATEDIFF(mm, 0, GetDate()) -1,0) GROUP BY s.Sites ) AS LoggedIncidents 'tbl_Sites contains a list of reported-on sites. 'tbl_Incidents contains a generated list of total incidents by site/date (monthly). 'If a site doesn't have any incidents that month it won't be listed. The problem I'm having is that a site doesn't have any incidents this month, and as such I get a NULL value returned for that site when I run this sproc, but I need a zero returned so it can be used within a chart in SSRS. I've tried using COALESCE and ISNULL to no avail: SELECT COALESCE(SUM(c.Logged,0)) SELECT SUM(ISNULL(c.Logged,0)) Is there a way to get this formatted correctly? Cheers, Lee
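
    A minimal T-SQL sketch of one possible fix, reusing the table and column names from the question: because the inner query returns no rows for a site with no incidents, the ISNULL (or COALESCE) has to wrap the scalar subquery itself rather than the SUM inside it.

      CREATE PROC [dbo].[Incidents] (@SiteName varchar(200))
      AS
      SELECT ISNULL(
          (SELECT SUM(i.Logged)
           FROM tbl_Sites s
           INNER JOIN tbl_Incidents i ON s.Location = i.Location
           WHERE s.Sites = @SiteName
             AND i.[month] = DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) - 1, 0)
           GROUP BY s.Sites), 0) AS LoggedIncidents  -- "no row" and "NULL sum" both become 0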

    Read the article

  • Distributed datastore

    - by Julien Genestoux
    We're trying to add some kind of persistence to our app. The app generates about 250 entries per second. Each of these entries belongs to one of 2M files. For each file, we want to keep the last 10 entries, so we can look them up later. The way our client application works: it gets a stream of all the data, it fetches the right file (GET), it adds the new content, and it saves the file back (PUT). We're looking for an efficient way to store this data that can scale horizontally, as the amount of data we're getting is doubling every few weeks. We initially looked at S3. It works fine, but becomes very expensive very fast ($1000 monthly just in PUT operations!). We then gave Riak a shot, but it seems we can't get more than 60 writes/sec on each node, which is very, very slow. Any other solution out there?

    Read the article

  • Outsourcing design/programming (taxes!)

    - by alexeypro
    Hello, I have a full-time job, but I also have some ideas in mind -- and I want to outsource some code development and design to Russia. I have friends there who help me do that -- they'll do the development and provide me an invoice (which is just a description of our terms, because we decided on a flat monthly fee), and I need to wire them money. So, I do not have a business entity. If I pay them, am I required to pay taxes? Or do I deduct my business expenses (say $30K/year) from my earnings (say $50K/year), and pay taxes only on what's left ($20K to be precise)? Any rules on that? Am I limited by anything here? Thanks!

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to: · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data) · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog) The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel, rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include: · Global Transaction Identifiers coupled with MySQL utilities for automatic failover / switchover and slave promotion · Crash Safe Slaves and Binlog · Optimized Row Based Replication · Replication Event Checksums · Time Delayed Replication These and many more are discussed in the “MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services” Developer Zone article. Back to the benchmark - details are as follows. Environment The test environment consisted of two Linux servers: · one running the replication master · one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration: - Hardware: Oracle Sun Fire X4170 M2 Server - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz - OS: 64-bit Oracle Enterprise Linux 6.1 - Memory: 48 GB Test Procedure Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table: CREATE TABLE `sbtest` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `k` int(10) unsigned NOT NULL DEFAULT '0', `c` char(120) NOT NULL DEFAULT '', `pad` char(60) NOT NULL DEFAULT '', PRIMARY KEY (`id`), KEY `k` (`k`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master.
Each sysbench client executed 10,000 "update key" statements: UPDATE sbtest set k=k+1 WHERE id = <random row> In total, this generated 100,000 update statements to later replicate during the test itself. Test Methodology: The number of slave workers to test with was configured using: SET GLOBAL slave_parallel_workers=<workers> Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS). Test Reset: The test-reset cycle was implemented as follows: · the slave was stopped · the slave data directory replaced with the previous backup · the slave restarted with the slave threads' replication pointer repositioned to the point before the update queries in the binlog. The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The Test-Reset cycle was repeated 3 times for 0-24 workers and the QPS metric calculated and averaged for each worker count. MySQL Configuration The relevant configuration settings used for MySQL are as follows: binlog-format=STATEMENT relay-log-info-repository=TABLE master-info-repository=TABLE As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is: 0 worker threads: - current (i.e. single threaded) sequential mode - 1 x IO thread and 1 x SQL thread - SQL thread both reads and executes the events 1 worker thread: - sequential mode - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread - coordinator reads the event and hands it to the worker who executes 2+ worker threads: - parallel execution - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads - coordinator reads events and hands them to the workers who execute them Results Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 x schemas. This result is compared to the current replication implementation which is based on a single SQL thread only (i.e. zero worker threads). Figure 1: 5x Higher Performance with Multi-Threaded Slaves The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post. Figure 2: Detailed Results As the results above show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results: · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution. · As expected, having more workers than schemas adds no visible benefit.
Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance: relay-log-info-repository=TABLE master-info-repository=TABLE For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE. Conclusion As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6, are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don't hesitate to comment on this or other replication blogs with feedback and questions. Appendix – Detailed Results

    Read the article

  • NVP request - CreateRecurringPaymentsProfile

    - by jiwanje.mp
    I'm facing a problem with the trial period and the first month's payment. My requirement is that users can sign up with a trial period which allows new users to have a 30-day free trial. This means they will not be charged the monthly price until after the first 30 days, when the regular amount will be charged. The next billing date should therefore be one month after the profile start date, but the next billing date and the profile start date show the same value when I query with GetRecurringPaymentProfile. Please help me with how I can set up the recurring billing payment for this functionality. Thanks in advance, jiwan
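
    A hedged sketch of the kind of NVP request that usually models this (field names come from PayPal's Classic NVP documentation for CreateRecurringPaymentsProfile; the amounts and date are made up, so verify everything against the current API reference): the 30 free days are expressed as an explicit trial period, and the regular monthly billing then only starts once the trial cycle has completed.

      METHOD=CreateRecurringPaymentsProfile
      PROFILESTARTDATE=2010-05-14T00:00:00Z
      TRIALBILLINGPERIOD=Day
      TRIALBILLINGFREQUENCY=30
      TRIALTOTALBILLINGCYCLES=1
      TRIALAMT=0.00
      BILLINGPERIOD=Month
      BILLINGFREQUENCY=1
      AMT=20.00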

    Read the article

  • Developing Schema Compare for Oracle (Part 1)

    - by Simon Cooper
    SQL Compare is one of Red Gate's most successful SQL Server tools; it allows developers and DBAs to compare and synchronize the contents of their databases. Although similar tools exist for Oracle, they are quite noticeably lacking in the usability and stability that SQL Compare is known for in the SQL Server world. We could see a real need for a usable schema comparison tool for Oracle, and so the Schema Compare for Oracle project was born. Over the next few weeks, as we come up to the release of v1, I'll be doing a series of posts on the development of Schema Compare for Oracle. For the first post, I thought I would start with the main pitfalls that we stumbled across when developing the product, especially coming from a SQL Server background. 1. Schemas and Databases The most obvious difference is that the concept of a 'database' is quite different between Oracle and SQL Server. On SQL Server, one server instance has multiple databases, each with separate schemas. There is typically little communication between separate databases, and most databases contain no more than about 1000-2000 objects. This means SQL Compare can register an entire database in a reasonable amount of time, and cross-database dependencies probably won't be an issue. It is a quite different scene under Oracle, however. The terms 'database' and 'instance' are used interchangeably (although technically 'database' refers to the datafiles on disk, and 'instance' the running Oracle process that reads & writes to the database), and a database is a single conceptual entity. This immediately presents problems, as it is infeasible to register an entire database as we do in SQL Compare; in my Oracle install, using the standard recommended options, there are 63975 system objects. If we tried to register all those, not only would it take hours, but the client would probably run out of memory before we finished. As a result, we had to allow people to specify which schemas they wanted to register. This decision had quite a few knock-on effects for the design, which I will cover in a future post. 2. Connecting to Oracle The next obvious difference is in actually connecting to Oracle – in SQL Server, you can specify a server and database, and off you go. On Oracle things are slightly more complicated. SIDs, Service Names, and TNS A database (the files on disk) must have a unique identifier for the databases on the system, called the SID. It also has a global database name, which consists of a name (which doesn't have to match the SID) and a domain. Alternatively, you can identify a database using a service name, which normally has a 1-to-1 relationship with instances, but may not if, for example, you are using RAC (Real Application Clusters) for redundancy and failover. You specify the computer and instance you want to connect to using TNS (Transparent Network Substrate). The user-visible parts are a config file (tnsnames.ora) on the client machine that specifies how to connect to an instance. For example, the entry for one of my test instances is: SC_11GDB1 = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = simonctest)(PORT = 1521)) ) (CONNECT_DATA = (SID = 11gR1db1) ) ) This gives the hostname, port, and SID of the instance I want to connect to, and associates it with a name (SC_11GDB1). The tnsnames syntax also allows you to specify failover, multiple descriptions and address lists, and client load balancing. You can then specify this TNS identifier as the data source in a connection string.
Although using ODP.NET (the .NET dlls provided by Oracle) was fine for internal prototype builds, once we released the EAP we discovered that this simply wasn't an acceptable solution for installs on other people's machines. Due to .NET assembly strong naming, users had to have installed on their machines the exact same version of the ODP.NET dlls as we had on our build server. We couldn't ship the ODP.NET dlls with our installer as the Oracle license agreement prohibited this, and we didn't want to force users to install another Oracle client just so they could run our program. To be able to list the TNS entries in the connection dialog, we also had to locate and parse the tnsnames.ora file, which was complicated by users with several Oracle client installs and intricate TNS entries. After much swearing at our computers, we eventually decided to use a third-party Oracle connection library from Devart that we could ship with our program; this could use whatever client version was installed, parse the TNS entries for us, and also had the nice feature of being able to connect to an Oracle server without having any client installed at all. Unfortunately, their current license agreement prevents us from shipping an Oracle SDK, but that's a bridge we'll cross when we get to it. 3. Running synchronization scripts The most important difference is that in Oracle, DDL is non-transactional; you cannot roll back DDL statements like you can on SQL Server. Although we considered various solutions to this, including using the flashback archive or recycle bin, or generating an undo script, no reliable method of completely undoing a half-executed sync script has yet been found; so in this case we simply have to trust that the DBA or developer will check and verify the script before running it. However, before we got to that stage, we had to get the scripts to run in the first place... To run a synchronization script from SQL Compare we essentially pass the script over to the SqlCommand.ExecuteNonQuery method. However, when we tried to do the same for an OracleConnection we got a very strange error – 'ORA-00911: invalid character', even when running the most basic CREATE TABLE command. After much hair-pulling and Googling, we discovered that Oracle has got some very strange behaviour with semicolons at the end of statements. To understand what's going on, we need to take a quick foray into SQL and PL/SQL. PL/SQL is not T-SQL In SQL Server, T-SQL is the language used to interface with the database. It has DDL, DML, control flow, and many other nice features (like Turing-completeness) that you can mix and match in the same script. In Oracle, DDL SQL and PL/SQL are two completely separate languages, with different syntax, different datatypes and different execution engines within the instance. Oracle SQL is much more like 'pure' ANSI SQL, with no state, no control flow, and only the basic DML commands. PL/SQL is the Turing-complete language, but can only do DML and DCL (i.e. BEGIN TRANSACTION commands). Any DDL or SQL commands that aren't recognised by the PL/SQL engine have to be passed back to the SQL engine via an EXECUTE IMMEDIATE command. In PL/SQL, a semicolon is a valid token used to delimit the end of a statement. In SQL, a semicolon is not a valid token (even though the Oracle documentation shows them at the end of the syntax diagrams).
When you execute the command CREATE TABLE table1 (COL1 NUMBER); in SQL*Plus the semicolon on the end is a command to SQL*Plus to execute the preceding statement on the server; it strips off the semicolon before passing it on. SQL Developer does a similar thing. When executing a PL/SQL block, however, the syntax is like so: BEGIN INSERT INTO table1 VALUES (1); INSERT INTO table1 VALUES (2); END; / In this case, the semicolon is accepted by the PL/SQL engine as a statement delimiter, and instead the / is the command to SQL*Plus to execute the current block. This explains the ORA-00911 error we got when trying to run the CREATE TABLE command – the server is complaining about the semicolon on the end. This also means that there is no SQL syntax to execute more than one DDL command in the same OracleCommand. Therefore, we would have to do a round-trip to the server for every command we want to execute. Obviously, this would cause lots of network traffic and be very slow on slow or congested networks. Our first attempt at a solution was to wrap every SQL statement (without semicolon) inside an EXECUTE IMMEDIATE command in a PL/SQL block and pass that to the server to execute. One downside of this solution is that we get no feedback as to how the script execution is going; we're currently evaluating better solutions to this thorny issue. Next up: Dependencies; how we solved the problem of being unable to register the entire database, and the knock-on effects to the whole product.
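
    A minimal PL/SQL sketch of the wrapping approach described above (the DDL statements are hypothetical, shown only to illustrate batching several commands into one server round-trip): each statement is sent as a string, without its trailing semicolon, to EXECUTE IMMEDIATE inside a single anonymous block.

      BEGIN
        EXECUTE IMMEDIATE 'CREATE TABLE table1 (col1 NUMBER)';
        EXECUTE IMMEDIATE 'CREATE INDEX idx_table1_col1 ON table1 (col1)';
      END;
      /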

    Read the article

  • How to split a column into two columns in pandas

    - by user1345283
    I have the following dataframe: data = read_csv('enero.csv')

                  Fecha  DirViento  MagViento
    0  2011/07/01 00:00        318        6.6
    1  2011/07/01 00:15        342        5.5
    2  2011/07/01 00:30        329        6.6
    3  2011/07/01 00:45        279        7.5
    4  2011/07/01 01:00        318        6.0
    5  2011/07/01 01:15        329        7.1
    6  2011/07/01 01:30        300        4.7
    7  2011/07/01 01:45        291        3.1

    How can I split the column Fecha into two columns, for example, to get a dataframe as follows:

            Fecha   Hora  DirViento  MagViento
    0  2011/07/01  00:00        318        6.6
    1  2011/07/01  00:15        342        5.5
    2  2011/07/01  00:30        329        6.6
    3  2011/07/01  00:45        279        7.5
    4  2011/07/01  01:00        318        6.0
    5  2011/07/01  01:15        329        7.1
    6  2011/07/01  01:30        300        4.7
    7  2011/07/01  01:45        291        3.1

    I am using pandas to read the data. I am trying to calculate daily averages from a monthly database that has data recorded every 15 minutes. To do this, I use pandas and group by the columns Fecha and Hora to get a dataframe like this:

    Fecha       Hora
    2011/07/01  00:00   -4.4
                00:15   -1.7
                00:30   -3.4
    2011/07/02  00:00   -4.5
                00:15   -4.2
                00:30   -7.6
    2011/07/03  00:00   -6.3
                00:15  -13.7
                00:30   -0.3

    With this in place, grouped.mean() gives me the following:

    Fecha       DirRes
    2011/07/01      -3
    2011/07/02      -5
    2011/07/03      -6
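
    A minimal pandas sketch of one way to do the split, assuming Fecha always holds a 'YYYY/MM/DD HH:MM' string (recent pandas syntax; the file name is taken from the question):

      import pandas as pd

      data = pd.read_csv('enero.csv')

      # Split 'Fecha' at the first space into a date part and a time part
      partes = data['Fecha'].str.split(' ', n=1, expand=True)
      data['Fecha'] = partes[0]
      data['Hora'] = partes[1]

      # Daily averages can then be taken by grouping on the date column
      diario = data.groupby('Fecha')['MagViento'].mean()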

    Read the article

  • In search of opinions on web based version control systems

    - by tom smith
    Hi. I'm researching various open source, web-based document management/version control systems. I've checked Google, questions here, etc. I'm looking for a lightweight web-based (Apache) document management/version control app that runs on top of SVN. I need the ability to: have multiple users check in/check out; have a workflow (when userA checks the file in and finishes, the app passes it to the next person, etc.); the app needs to allow me to have a structure where the files can be moved as a group; the files will be changed on a monthly basis; the app needs to have an access/permission control system - some people can see certain files, and perform certain actions on the files. I imagine that I'm going to have 40-50 people dealing with the different files. I imagine that I'm going to have 2000-3000 files that have to be massaged. I'd prefer that the app be PHP based if possible, as opposed to a straight Java app. Thanks

    Read the article

  • Error while creating a file

    - by Harikrishna
    I am storing records from a DataTable to a CSV file with the help of the following code: // we'll use these to check for rows with nulls var columns = yourTable.Columns .Cast<DataColumn>(); using (var writer = new StreamWriter(yourPath)) { for (int i = 0; i < yourTable.Rows.Count; i++) { DataRow row = yourTable.Rows[i]; // check for any null cells if (columns.Any(column => row.IsNull(column))) continue; string[] textCells = row.ItemArray .Select(cell => cell.ToString()) // may need to pick a text qualifier here .ToArray(); // check for non-null but EMPTY cells if (textCells.Any(text => string.IsNullOrEmpty(text))) continue; writer.WriteLine(string.Join(",", textCells)); } } But I get an error like: System.UnauthorizedAccessException was unhandled Message="Access to the path 'D:\\Harikrishna\\Monthly Reports' is denied." What could be the problem and the solution?
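
    The path in the exception message looks like a folder rather than a file, which is a common cause of UnauthorizedAccessException when constructing a StreamWriter. A minimal C# sketch of that hypothetical fix (the file name is made up), assuming System.IO is imported:

      string folder = @"D:\Harikrishna\Monthly Reports";
      Directory.CreateDirectory(folder);                      // no-op if the folder already exists
      string yourPath = Path.Combine(folder, "report.csv");   // point at a file, not the folder

      using (var writer = new StreamWriter(yourPath))
      {
          // ... same row-writing loop as above ...
      }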

    Read the article

  • How to find a programmer for my project?

    - by Al
    I'm building a web application to generate monthly subscription fees, but I've quickly realised I'm going to need some help with the project to finish it this century. I don't have any money upfront for a freelancer, and every website I've found takes bids for project work. The tasks that need doing are flexible too, because I can do whatever the other coder doesn't want to. I'm also happy to guide the developer and offer tips for performance/security/etc. My question is: how do I go about finding someone to work with on a profit-share basis? I'm sure there are a billion people like me with the "next killer app" but I genuinely believe in it. Can anyone offer some advice? Thanks in advance! EDIT: I guess the trick is to find someone as passionate about the subject as I am. Where would I find someone? Are there websites that broker profit-share deals on programming work?

    Read the article

  • Is there anything exciting in Perl 5.11 (to become Perl 5.12)?

    - by Ether
    Perl 5.11 is now released! Is there anything really exciting in this release, or is it mostly maintenance patches? (From what I've read so far, it appears to be a rollup of improvements we have already seen in prior releases.) See: the CHANGES file; Jesse Vincent's announcement; chromatic's blog post. 5.11 is the development release of what will become 5.12. The release process itself is changing to a monthly release model. UPDATE: Perl 5.12 is now released (April 12, 2010). See: the CHANGES file; Jesse Vincent's announcement.

    Read the article

  • Can I create products on the PayPal site?

    - by Mirage
    I offer hosting to only 2 clients for 2 different products. I don't want to build a site and then tell them to pay. Is it possible that in my PayPal account I create two products, like: Product 1 - monthly subscription, 20; Product 2 - yearly subscription, 150. That way I can just send them the link and the amount automatically gets deducted from their account. One thing more: can I send the same link to another client as well who has bought the same product? Can I group them in the PayPal system to see which one has paid and which one has not? Thanks

    Read the article

  • Modifying NSDate to represent 1 month from today

    - by bmalicoat
    I'm adding repeating events to a Cocoa app I'm working on. I have repeat-every-day and repeat-every-week working fine because I can define these mathematically (3600*24*7 = 1 week). I use the following code to modify the date: [NSDate dateWithTimeIntervalSinceNow:(3600*24*7*(weeks))] I know how many months have passed since the event was repeated, but I can't figure out how to make an NSDate object that represents 1 month/3 months/6 months/9 months into the future. Ideally I want the user to say "repeat monthly starting Oct. 14" and have it repeat on the 14th of every month.
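
    A minimal Objective-C sketch of the calendar-based approach (an assumption about the intended usage, not code from the question): since months vary in length, NSCalendar and NSDateComponents are normally used instead of a fixed number of seconds.

      NSDateComponents *offset = [[[NSDateComponents alloc] init] autorelease];
      offset.month = 1;   // or 3, 6, 9 months into the future
      NSDate *nextDate = [[NSCalendar currentCalendar] dateByAddingComponents:offset
                                                                       toDate:[NSDate date]
                                                                      options:0];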

    Read the article

  • Push or Pull to Excel for reporting data

    - by Nathan Fisher
    I am unsure which is the best way to go here. I have a third-party Excel 2003 spreadsheet that needs to be filled in on a monthly basis and emailed. Currently it is a manual process, and I am in the process of automating the generation of the spreadsheet. I have been throwing around different ideas of how to get the data into the spreadsheet. I have thought of using SSRS to create a report in a similar format and getting the user to cut and paste. Alternatively, writing a VBA add-in that retrieves the data from a web service and then adds the data to the spreadsheet. Or using the third-party spreadsheet as a template, opening it on the server via OLE DB, adding the data, and then serving it as a downloadable file. Which is better, or are there better solutions out there?

    Read the article

  • Automatically create table on MySQL server based on date?

    - by Anthony
    Is there an equivalent to cron for MySQL? I have a PHP script that queries a table based on the month and year, like: SELECT * FROM data_2010_1 What I have been doing until now is, every time the script executes it does a query for the table; if it exists, it does the work, and if it doesn't, it creates the table. I was wondering if I can just set something up on the MySQL server itself that will create the table (based on a default table) at the stroke of midnight on the first of the month. Update Based on the comments I've gotten, I'm thinking this isn't the best way to achieve my goal. So here are two more questions: If I have a table with thousands of rows added monthly, is this potentially a drag on resources? If so, what is the best way to partition this table, since the above is verboten? What are the potential problems with the home-grown method I originally thought up?
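
    As a sketch of the cron-equivalent idea: MySQL 5.1+ has an Event Scheduler. The template table, procedure and event names below are made up, so treat this as an assumption to verify rather than a drop-in answer.

      -- the scheduler must be enabled first: SET GLOBAL event_scheduler = ON;
      DELIMITER //
      CREATE PROCEDURE create_current_month_table()
      BEGIN
        SET @ddl = CONCAT('CREATE TABLE IF NOT EXISTS data_', YEAR(NOW()), '_', MONTH(NOW()),
                          ' LIKE data_template');
        PREPARE stmt FROM @ddl;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
      END //
      CREATE EVENT monthly_table_event
        ON SCHEDULE EVERY 1 MONTH STARTS '2010-02-01 00:00:00'
        DO CALL create_current_month_table() //
      DELIMITER ;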

    Read the article

  • How to explain to a client that you've gone over-budget and you'll need more money/time to deliver w

    - by General Tapioca
    My situation is that I have agreed on a per-project proposal with the client. The proposal is vague, but still names functionality in a way that can be argued over as to whether it's included or not, while leaving some room for interpretation. I originally pressed as much as I could for a per-month contract, arguing that the project is mostly non-predictable, but the client refused. Being a small company, I had to fold and sign a contract based on my group's estimates. At this point we have reached completion on about 85% of the features (we think) but we have run out of budget. We have been working for almost two years with this client on previous contracts, and we have delivered a good product that they are happy with, so we have a good standing relationship. More info: - There has been a bit of scope creep, but I don't think enough for me to hide behind that argument. - We've been delivering partial releases about monthly. - We don't have systematic user testing in place.

    Read the article

  • Best way to generate an automated report in Google Analytics for a specific collection of URLs

    - by Forrest Marvez
    We're currently using Google Analytics as a supplement to our paid tracking software, but neither of them is giving us exactly what we need. I have a list of about 60 or so URLs (out of about 1500) on the site that I wish to set up a monthly report for, which can be emailed to multiple recipients. I can't seem to figure out how to create a report showing just the hits on these 60 URLs. I can apply advanced filters on the content page, but those disappear after a while and sometimes error out when adding too many URLs. Is there a method I'm missing in Google Analytics to achieve this goal, or am I better off running an SSIS package to pull the URLs from the API and formatting a document that way?

    Read the article

  • Automatic Maintenance Jobs in every PDB? New SPM Evolve Advisor Task in Oracle 12.1.0.2

    - by Mike Dietrich
    A customer checking out our slides from the OTN Tour in August 2014 asked me a finicky question the other day: "According to the documentation the Automatic SQL Tuning Advisor maintenance task gets executed only within the CDB$ROOT, but not within each PDB - but the slides are not clear here. So what is the truth?" OK, that's a good question. In my understanding all tasks will get executed within each PDB - that's why we recommend (based on experience) breaking up the default maintenance windows when using Oracle Multitenant. Otherwise all PDBs will have the same maintenance windows, and guess what will happen when 25 PDBs start gathering object statistics at the same time ... The documentation indeed says: "Automatic SQL Tuning Advisor data is stored in the root. It might have results about SQL statements executed in a PDB that were analyzed by the advisor, but these results are not included if the PDB is unplugged. A common user whose current container is the root can run SQL Tuning Advisor manually for SQL statements from any PDB. When a statement is tuned, it is tuned in any container that runs the statement." This sounds reasonable. But when we have a look into our PDBs or into the CDB_AUTOTASK_CLIENT view, the result is different from what the doc says. In my environment I created just two fresh, empty PDBs (CON_ID 3 and 4):

    SQL> select client_name, status, con_id from cdb_autotask_client;

    CLIENT_NAME                           STATUS         CON_ID
    ------------------------------------- ---------- ----------
    auto optimizer stats collection       ENABLED             1
    sql tuning advisor                    ENABLED             1
    auto space advisor                    ENABLED             1
    auto optimizer stats collection       ENABLED             4
    sql tuning advisor                    ENABLED             4
    auto space advisor                    ENABLED             4
    auto optimizer stats collection       ENABLED             3
    sql tuning advisor                    ENABLED             3
    auto space advisor                    ENABLED             3

    9 rows selected.

    I haven't verified the reason why this is different from the docs, but it may be related to one change in Oracle Database 12.1.0.2: the new SPM Evolve Advisor Task (SYS_AUTO_SPM_EVOLVE_TASK) for automatic plan evolution for SQL Plan Management. This new task doesn't appear as a stand-alone job (client) in the maintenance window but runs as a sub-entity of the Automatic SQL Tuning Advisor task. And (I'm just guessing) this may be one of the reasons why every PDB will have to have its own Automatic SQL Tuning Advisor task. Here you'll find more information about how to enable, disable and configure the new Oracle 12.1.0.2 SPM Evolve Advisor Task: Oracle Database 12.1.0.2 SQL Tuning Guide: Managing the SPM Evolve Advisor Task. -Mike
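
    A hedged PL/SQL sketch illustrating the per-container nature of these tasks (the PDB name is hypothetical, and DBMS_AUTO_TASK_ADMIN behaviour should be verified in your release): the same call issued while connected to a particular container is expected to affect only that container's task.

      ALTER SESSION SET CONTAINER = MYPDB1;
      BEGIN
        -- switch the advisor off (or back on with ENABLE) just for this PDB
        DBMS_AUTO_TASK_ADMIN.DISABLE(
          client_name => 'sql tuning advisor',
          operation   => NULL,
          window_name => NULL);
      END;
      /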

    Read the article

  • Best way to get record counts grouped by month, adjusted for time zone, using SQL or LINQ to SQL

    - by Jeff Putz
    I'm looking for the most efficient way to pull out a series of monthly counts of records in my database, adjusting for time zone, since the times are actually stored as UTC. I would like my result set to be a series of objects that include month, year and count. I have LINQ to SQL objects that look something like this: public class MyRecord { public int ID { get; set; } public DateTime TimeStamp { get; set; } public string Data { get; set; } } I'm not opposed to using straight SQL, but LINQ to SQL would at least keep the code a lot cleaner. The time zone adjustment is available as an integer (-5, for example). Again, the result set I'm looking for is objects containing the month, year and count, all integers. Any suggestions? I can think of several ways to do it straight, but not with a time zone adjustment.
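
    A minimal LINQ to SQL sketch of one way to do the grouping, assuming the MyRecord class above and a hypothetical data context named db: the offset is applied to the timestamp before grouping, so records near month boundaries land in the right local month.

      int tzOffsetHours = -5;   // example offset from the question

      var monthlyCounts =
          from r in db.MyRecords                           // hypothetical Table<MyRecord>
          let local = r.TimeStamp.AddHours(tzOffsetHours)  // translated to DATEADD by LINQ to SQL
          group r by new { local.Year, local.Month } into g
          select new { g.Key.Year, g.Key.Month, Count = g.Count() };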

    Read the article

  • php extract data from mysql

    - by florin
    I have this MySQL table with the following rows:

    id_cont   suma_lun   month   year
    --------------------------------------------
    FL28      2133       March   2012
    FL28      2144       April   2012
    FL28      2155       May     2012
    FL28      2166       June    2012

    How can I extract suma_lun, month and year for each id_cont, so that I get an output like this:

    ID:    Month:   Monthly Sum:   Year:
    ----------------------------------------------
    FL28   March    2133           2012
           April    2144           2012
           May      2155           2012
           June     2166           2012

    This is my current code: $link = mysql_connect(DB_HOST, DB_USER, DB_PASSWORD); if(!$link) die ('Could not connect to database: '.mysql_error()); mysql_select_db(DB_DATABASE,$link); $sql="SELECT * FROM test WHERE id_cont = '$cur'"; $result=mysql_query($sql); while ($row=mysql_fetch_array($result)) { $a=$row["id_cont"]; $b=$row["suma_lun"]; $c=$row["month"]; $d=$row["year"]; } I echo the data in a table. Thanks!
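
    A minimal PHP sketch of one way to get that layout with the question's query, printing id_cont only when it changes from the previous row (variable names are assumptions):

      $prev = null;
      while ($row = mysql_fetch_array($result)) {
          $id = ($row['id_cont'] !== $prev) ? $row['id_cont'] : '';
          echo $id . "\t" . $row['month'] . "\t" . $row['suma_lun'] . "\t" . $row['year'] . "\n";
          $prev = $row['id_cont'];
      }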

    Read the article

  • BULK SMS, Long Codes (VMN MSIDN), T-mobile?

    - by John
    Does any US wireless carrier offer individuals or companies a direct connection to the SMSC? The number is 747-772-3101 (replace the 7s with 6s). This number is registered to T-Mobile, and T-Mobile has verified that it is a valid subscriber sending 160,000+ text messages monthly, and that all they have is an unlimited text messaging plan on top of the cheapest voice plan. The company behind the number verified to me that they don't use GSM modems, as they are too slow. So I know it's possible, but who would I contact? Sales, or anyone else reachable through a 1-800 number, is ignorant of these services, and developer.t-mobile is worthless and doesn't reply to emails. Any info?

    Read the article

  • Table Design in mysql

    - by RIDDHI
    Hi everyone, I need to create one table. Description: I need to create a table based on a schedule like daily, weekly and monthly, with columns like: sno, startdate, enddate, day, scheduletype. For example, take the weekly case from my point of view: for Sunday to Saturday I create IDs (1-7). So lots of possibilities are created, like (1,2), (1,3) ... (1,2,3) ... up to n. That is only the two-element possibility, but it can create up to 7 possibilities in one. So how can I store these possibilities in a MySQL database? If anyone has a question, get back to me... Thanks in advance!!!! Riddhi

    Read the article

  • filter results quarterly

    - by user339067
    I am writing a simple C# application that handles bank savings. I want to be able to show the results on either a yearly, monthly or quarterly basis - how can this be done? How can I loop through a set of results and only show every third post (if I am using quarterly), for example? In Python I can use range(1,31,3), but how is it done in C#? UPDATE 1: I want to loop 12 times (annually) and calculate the interest on each loop, but I only want to print the results every third loop (quarterly). How can I achieve this?
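
    A minimal C# sketch of the pattern described in the update (the balance and rate are made-up figures): the interest is calculated every month, but output happens only when the month index is a multiple of three. The same stepping as Python's range(1,31,3) can also be had with for (int i = 1; i <= 30; i += 3).

      decimal balance = 1000m;        // hypothetical starting balance
      decimal monthlyRate = 0.01m;    // hypothetical monthly interest rate

      for (int month = 1; month <= 12; month++)
      {
          balance += balance * monthlyRate;   // calculate interest every iteration
          if (month % 3 == 0)                 // print only on quarter boundaries
          {
              Console.WriteLine("Month {0}: {1:C}", month, balance);
          }
      }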

    Read the article

  • Need help writing a recurring task scheduler.

    - by Sisiutl
    I need to write a tool that will run a recurring task on a user-configurable schedule. I'll write it in C# on .NET 3.5 and it will run on XP, Windows 7, or Windows Server 2008. The tasks take about 20 minutes to complete. The users will probably want to set up several configurations: e.g., daily, weekly, and monthly cycles. Using Task Scheduler is not an option. The user will schedule recurrences through an interface similar to Outlook's recurring appointment dialog. Once they set up the schedule they will start it up, and it should sit in the system tray and kick off its tasks at the appointed times, then send mail to indicate it has finished. What is the best way to write this so that it doesn't eat up resources, lock up the host, or otherwise misbehave?
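
    A minimal C# sketch of one common shape for this kind of tray scheduler (the class and property names are assumptions, not a full recurrence engine): a low-frequency timer wakes up, checks which schedules are due, advances their next-run time, and hands the ~20-minute work off to the thread pool so nothing blocks.

      using System;
      using System.Collections.Generic;
      using System.Threading;

      class ScheduledJob
      {
          public DateTime NextRun { get; set; }
          public Func<DateTime, DateTime> NextOccurrence { get; set; } // e.g. d => d.AddDays(1)
          public Action Work { get; set; }
      }

      class Scheduler
      {
          private readonly List<ScheduledJob> _jobs = new List<ScheduledJob>();
          private Timer _timer;

          public void Add(ScheduledJob job) { _jobs.Add(job); }

          public void Start()
          {
              // Poll once a minute; cheap enough not to eat resources.
              _timer = new Timer(Tick, null, TimeSpan.Zero, TimeSpan.FromMinutes(1));
          }

          private void Tick(object state)
          {
              foreach (var job in _jobs)
              {
                  if (DateTime.Now >= job.NextRun)
                  {
                      job.NextRun = job.NextOccurrence(job.NextRun);
                      var due = job;                                  // avoid capturing the loop variable
                      ThreadPool.QueueUserWorkItem(_ => due.Work());  // long task runs off the timer thread
                  }
              }
          }
      }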

    Read the article
