Search Results

Search found 5233 results on 210 pages for 'records'.

  • New MoReq standard for records management under development - contribution phase commencing shortly

    - by shahid.rashid
    The DLM Forum is creating a new MoReq specification, MoReq2010, and Oracle will be contributing to this. We also highly encourage those of you in compliance, records management, and archiving (particularly those based outside the US) to participate in the development and review of the standard - the time commitment can be as little or as much as you please. The contribution phase is to commence this month with review planned in August. The official announcement from the DLM Forum and details on how to participate are located here.

    Read the article

  • Problem with squid log files

    - by Gatura
    I am using SARG to get a report on the Squid log files. I get this result:

      /usr/local/Sarg/bin/sarg -l /usr/local/squid/var/logs/access.log
      SARG: Records in file: 0, reading: 0.00%
      [the line above repeats many times]
      sort: open failed: +6.5nr: No such file or directory
      SARG: (index) Cannot open file: /Applications/Sarg/reports/index.sort
      SARG: Records in file: 0, reading: 0.00%

    What could be the problem?
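
    One thing worth checking, offered only as a guess: the argument +6.5nr is the obsolete zero-based sort key syntax, which current GNU sort no longer accepts, so sort treats it as a file name and fails with the "open failed" message above. On such a system the old and new key forms compare roughly like this (the file name is just a placeholder):

      # obsolete zero-based syntax: skip 6 fields and 5 characters, numeric, reverse
      sort +6.5nr some-report-file
      # modern one-based equivalent accepted by current GNU sort
      sort -k 7.6nr some-report-file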

    Read the article

  • SharePoint Records Center Submitted E-mail Records not picked up

    - by Kenneth Verburg
    We have set up a new SharePoint 2007 site with a Records Repository. We're using Exchange 2007 Managed Folders to route e-mails to this repository based on the 'label' attached to the e-mail, as set in the Exchange 2007 journaling options. E-mails added to a Managed Folder get sent to SharePoint and end up in the "Submitted E-mail Records" list of the Records Repository. That's according to plan, but the e-mails are not routed to the respective document library as defined by the label. Instead an error appears in the event viewer for every e-mail listed in the Submitted E-mail Records list, on every interval of the records repository schedule (set to every two minutes for testing purposes): "Value cannot be null, parameter name: g". Sending a document from the SharePoint site itself to the Records Repository via the Send To... link works fine, but e-mails get stuck in the list. We have set up Document Libraries in the Repository with and without content types (with names matching the Label and the Record Routing rule). Any ideas what could be wrong? This is in the event log; every two minutes the following error appears in the Application Log:

      Source: Office SharePoint Server
      Category: Records Center
      Type: Error
      Event ID: 4975
      User: N/A
      Computer: SPS2007
      Description: Value cannot be null. Parameter name: g
      For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Read the article

  • OARC's DNSSEC validating resolvers validate all my records but A records

    - by demize
    I have DNS set up with PowerDNS. It serves my DNS pretty well, and it AXFRs to other slaves. The slaves haven't yet updated to the most recent records, but that doesn't appear to affect the validation. Any record I can think of (AAAA, MX, TXT, even the CNAME for www) validates -- except for A records:

      dig @149.20.64.20 +dnssec www.demize95.com CNAME
      ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 5, ADDITIONAL: 7

    comes back with the ad (authenticated data) flag, while

      dig @149.20.64.20 +dnssec demize95.com A
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 5, ADDITIONAL: 7

    does not. The same happens with any other A record I have. I set up DNSSEC with pdnssec, and it does work for all the other records, but my A records are never validated. What's the problem here? Also, a side note: I have to use ISC's DLV to create the chain of trust, since my domain registrar doesn't yet support sending the DS records to the com zone.

    Read the article

  • Providing reverse records for records that map to ISP IP

    - by thejartender
    I have been instructed to use my ISP IP as a temporary fix for my name server and domain records, because my router hands out RFC 1918 addresses to the devices on my network (where I run an Ubuntu server, the router itself, and my development laptop). So I have changed:

      $TTL 3H
      @ IN SOA ns.thejarbar.org. email. (
          13112012 28800 3600 604800 38400 );
      thejarbar.org.  IN A     10.0.0.42
      @               IN NS    ns.thejarbar.org.
      yuccalaptop     IN A     10.0.0.19
      ns              IN A     10.0.0.42
      gw              IN A     10.0.0.138
      www             IN CNAME thejarbar.org.

    to a temporary version:

      $TTL 3H
      @ IN SOA ns.thejarbar.org. email. (
          13112012 28800 3600 604800 38400 );
      thejarbar.org.  IN A     88.89.190.171
      @               IN NS    ns.thejarbar.org.
      yuccalaptop     IN A     10.0.0.19
      ns              IN A     88.89.190.171
      gw              IN A     10.0.0.138
      www             IN CNAME thejarbar.org.

    I am using BIND, and named-checkzone reports no errors for this file against my zone configuration. I then run dig thejarbar.org @88.89.190.171 and get the expected authoritative reply. My issue is creating my reverse DNS zone, and I would greatly appreciate assistance and guidance. I am stuck on how to represent the reverse records correctly for the addresses that map to my ISP IP. I am trying:

      $TTL 3H
      0.0.10.in-addr.arpa. IN SOA ns.thejarbar.org. email. (
          13112012 28800 3600 604800 38400 );
      171.190.89.88.  IN PTR thejarbar.org.
      171.190.89.88.  IN NS  ns.thejarbar.org.
      19              IN PTR yuccalaptop.thejarbar.org.
      138             IN PTR gw.thejarbar.org.
      www             IN PTR www.thejarbar.org.

    but running named-checkzone on this file returns an error saying the zone has no NS records. I would greatly appreciate assistance.
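
    For comparison only, a minimal reverse zone for the 10.0.0.x range would normally carry its NS record at the zone apex (@) and key each PTR by the host's final octet; the sketch below follows that pattern but is an assumption, not a verified fix for the file above (the SOA contact name is a placeholder). The reverse zone for the public address 88.89.190.171 is a separate matter, since that in-addr.arpa space is normally delegated by the ISP rather than served from this zone.

      $TTL 3H
      @    IN SOA ns.thejarbar.org. email.thejarbar.org. (
               13112012 28800 3600 604800 38400 )
      @    IN NS  ns.thejarbar.org.
      42   IN PTR thejarbar.org.
      19   IN PTR yuccalaptop.thejarbar.org.
      138  IN PTR gw.thejarbar.org.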

    Read the article

  • Explanation of various domain name records?

    - by Kumar
    At the time of hosting, normally we just change the name servers in the domain control panel. That's fine if the mail and web servers are the same; when they're different, we need to change the DNS records. When I tried to point my blog to my domain name, I came to know about the various types of DNS records - A records, AAAA records, MX records, CNAME records, NS records, TXT records, SRV records, SOA records, etc. I searched on Google, but I would like to understand these more deeply. I found this link on the Internet - http://www.directnic.com/help/faq/?question_id=103 - and got some idea about the different DNS records, but I have some more questions. How do domain name records work? Is there any difference between the NS record and the other records in the way they work? Where should the NS record point when using an A record, CNAME record or MX record?
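
    As a rough illustration only - example.com, the host names, and the address below are placeholders, not real infrastructure - a typical zone ties these record types together like this: the NS records name the servers that answer DNS queries for the zone, while the A, CNAME and MX records point names inside the zone at hosts:

      example.com.      IN NS    ns1.example-dns.net.     ; who answers DNS for the zone
      example.com.      IN NS    ns2.example-dns.net.
      example.com.      IN A     203.0.113.10             ; web server address
      www.example.com.  IN CNAME example.com.             ; alias for the bare domain
      example.com.      IN MX 10 mail.example-host.net.   ; mail is delivered to another host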

    Read the article

  • Adding GLUE records for Google Apps MX records

    - by Saif Bechan
    Is there a way of adding glue for the Google Apps MX records? I have added them all and it works fine, but in all the DNS tools I see that no glue is sent. I know that this is not a really big problem, because the gain you get out of it is next to zero. Nevertheless I just wanted to know if it is possible and how you do it - or, if it is not possible, what the reason for that is. I have asked this question on the Google Help Forum as well, but with no responses so far, so I thought I'd give it a shot here.

    Read the article

  • Sqlite: Selecting records spread over total records

    - by Martin
    I have an SQL/SQLite question. I need to write a query that selects some values from an SQLite database table. I always want at most 20 records returned. If the total number of selected records is more than 20, I need to pick 20 records that are spread over the total result. It is also important that I always select the first and last value from the table when sorted on the date. I know how to accomplish this in code, but it would be perfect to have an SQLite query that can do the same. The query I'm using now is really simple and looks like this:

      SELECT value, date, valueid FROM tblvalue WHERE tblvalue.deleted = 0 ORDER BY DATE(date)

    Hope I explained what I need. Thanks for your help!
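
    A sketch of one way to do this entirely in SQL, assuming a recent SQLite (3.25 or later, for window functions): number the rows by date, then keep the first row, the last row, and roughly every Nth row in between so that about 20 come back.

      WITH numbered AS (
        SELECT value, date, valueid,
               ROW_NUMBER() OVER (ORDER BY DATE(date)) AS rn,
               COUNT(*)     OVER ()                    AS total
        FROM tblvalue
        WHERE deleted = 0
      )
      SELECT value, date, valueid
      FROM numbered
      WHERE rn = 1
         OR rn = total
         OR rn % (CASE WHEN total <= 20 THEN 1 ELSE (total + 19) / 20 END) = 0
      ORDER BY rn;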

    Read the article

  • [php,mysql] Insert only adds up to 1000 records and ignores all records after that

    - by user560559
    Hello, I have a large database where the client stores personal messages and fires email notifications [if allowed by the users]. Certain users have the option of sending messages to their entire network of friends. Some users have over 5000 friends in their network, so if they select the whole network they'll be sending messages to over 5000 friends, and the system will store all the messages in a table. The problem is that it does not insert more than 1000 records and ignores all inserts after the first 1000. I have increased the packet size and bulk_insert_buffer_size, but still no luck. Since the system stores some of the info in another table for reports, every insert returns its new message id; because of this I can not use the "insert into table (column1,column2) values (value1,value2), (value1,value2)... etc." form. The table engine is InnoDB, the MySQL version is 5.1.3, and it is hosted on Amazon Web Services. All I want is to fix this issue of inserting more than 1000 records at a time. As mentioned earlier, it works fine but only up to 1000 records and simply ignores all the records after that. I'm using a PHP foreach(){} to insert a message for each friend and, if an email address is available, send a notification to the user. This foreach(){} also inserts the same record in another table [with only 3 columns] for generating reports. Thank you in advance for all the help and support. WMA.

    Read the article

  • How to write a large number of nested records in JSON with Python

    - by jamesmcm
    I want to produce a JSON file containing some initial parameters and then records of data, like this:

      {
        "measurement" : 15000,
        "imi" : 0.5,
        "times" : 30,
        "recalibrate" : false,
        {
          "colorlist" : [234, 431, 134],
          "speclist" : [0.34, 0.42, 0.45, 0.34, 0.78]
        }
        {
          "colorlist" : [214, 451, 114],
          "speclist" : [0.44, 0.32, 0.45, 0.37, 0.53]
        }
        ...
      }

    How can this be achieved using the Python json module? The data records cannot be added by hand, as there are very many.
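
    A minimal sketch of one way to do this with the json module, assuming the records are collected under a key (here "records", which is not in the original layout) so the document stays valid JSON - the nesting shown above, with bare objects inside an object, is not itself valid JSON:

      import json

      # Two sample records standing in for the many generated programmatically.
      records = [
          {"colorlist": [234, 431, 134], "speclist": [0.34, 0.42, 0.45, 0.34, 0.78]},
          {"colorlist": [214, 451, 114], "speclist": [0.44, 0.32, 0.45, 0.37, 0.53]},
      ]

      data = {
          "measurement": 15000,
          "imi": 0.5,
          "times": 30,
          "recalibrate": False,
          "records": records,  # "records" is an assumed key, not from the question
      }

      with open("output.json", "w") as f:
          json.dump(data, f, indent=2)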

    Read the article

  • Oracle multiset, collection and records

    - by Atul
    Can anybody explain to me why records are required? Can't we perform the same operations in PL/SQL using loops and so on? Also, when can multiset and record queries be used - in which kinds of situations, and which one is preferred?
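
    For what it's worth, a record in PL/SQL is simply a composite variable that groups related fields into one unit; a tiny, self-contained illustration (all names made up for the example):

      DECLARE
        TYPE t_point IS RECORD (
          x NUMBER,
          y NUMBER
        );
        p t_point;
      BEGIN
        p.x := 1;
        p.y := 2;
        DBMS_OUTPUT.PUT_LINE('x=' || p.x || ', y=' || p.y);
      END;
      /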

    Read the article

  • Selecting records with specific month and year in SQL Server 2005

    - by John
    I want to list records from a particular month and year. The table name is 'Arrival' and 'date' is the field that stores the date the record was added. This is to be done from a C# application. For example, if the user selects the month 'April' and the year '2009' in the application, it should list all the records that were added in April 2009. (I only need the query; I hope I can figure out the rest.)
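
    A sketch of one common way to express this, assuming the application passes the month and year as integers (@month and @year are placeholder names); a half-open date range keeps the predicate sargable, so an index on the date column can still be used:

      -- hypothetical parameters supplied from the C# application
      DECLARE @month INT, @year INT;
      SET @month = 4;   -- April
      SET @year  = 2009;

      -- first day of the chosen month, and the first day of the next month
      DECLARE @start DATETIME, @end DATETIME;
      SET @start = DATEADD(MONTH, (@year - 1900) * 12 + (@month - 1), 0);
      SET @end   = DATEADD(MONTH, 1, @start);

      SELECT *
      FROM   Arrival
      WHERE  [date] >= @start
        AND  [date] <  @end;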

    Read the article

  • Archiving table records to another table by trigger (move daily table records to a weekly table, every...

    - by sirvan
    I have written this trigger in MySQL 5:

      create trigger changeToWeeklly after insert on tbl_daily
      for each row
      begin
        insert into tbl_weeklly
          SELECT * FROM vehicleslocation v where v.recivedate < curdate();
        delete FROM tbl_daily where recivedate < curdate();
      end;

    I want to archive records by date: move yesterday's inserted records from the daily table to the weekly table (and last week's records from the weekly table to the monthly table), deleting those records from the previous table. This trigger raises the following error when an insert into the daily table occurs: "Can't update table 'tbl_daily' in stored function/trigger because it is already used by statement which invoked this stored function/trigger." Please help me solve the problem of archiving old data across these related tables - moving yesterday's inserted records to the weekly table. If there is a reliable solution, please tell me.
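
    One commonly suggested alternative, sketched here only as an illustration (table and column names copied from the question, the schedule picked arbitrarily): MySQL does not allow a trigger to modify the table that fired it, so the archiving can instead run from the event scheduler (available in MySQL 5.1 and later), outside any trigger:

      -- requires the scheduler: SET GLOBAL event_scheduler = ON;
      -- in the mysql client, wrap this in DELIMITER // ... // so the BEGIN...END body parses
      -- assumes the rows to archive live in tbl_daily itself (the original selects from vehicleslocation)
      CREATE EVENT archive_daily
      ON SCHEDULE EVERY 1 DAY
      STARTS CURRENT_DATE + INTERVAL 1 DAY
      DO
      BEGIN
        INSERT INTO tbl_weeklly
          SELECT * FROM tbl_daily WHERE recivedate < CURDATE();
        DELETE FROM tbl_daily WHERE recivedate < CURDATE();
      END;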

    Read the article

  • Copy new records from datatable and identify changes in old records

    - by Betite
    Assume there are two tables: Remote_table and My_table. Remote_table has 6 columns, with the first four marked as the key columns:

      PROJECT  JOB_TYPE  MONTH  YEAR  HOURS  IS_DELETED
      134393   70        1      2013  30     0
      134393   70        2      2013  50     0
      134393   70        3      2013  80     0
      134393   70        10     2012  10     0
      134393   70        11     2012  0      0
      134393   70        12     2012  15     0

    My_table is a copy of Remote_table. I tried to copy only the new records from Remote_table with this query:

      SELECT * FROM [remote_DB].[LudanProjectManager].[dbo].Remote_table
      EXCEPT
      SELECT * FROM My_table

    It works OK, but I get a duplicate primary key exception when changes have been made to the HOURS column on Remote_table. Can anyone think of a way to copy only the new records from Remote_table and, if changes have been made to old records, to identify them and update My_table to match?
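
    One way to express "insert the new rows and update the changed ones" in a single statement is MERGE; this is only a sketch, assuming SQL Server 2008 or later and that (PROJECT, JOB_TYPE, MONTH, YEAR) really is the primary key, as the sample data suggests:

      MERGE My_table AS target
      USING [remote_DB].[LudanProjectManager].[dbo].Remote_table AS source
        ON  target.PROJECT  = source.PROJECT
        AND target.JOB_TYPE = source.JOB_TYPE
        AND target.MONTH    = source.MONTH
        AND target.YEAR     = source.YEAR
      WHEN MATCHED AND (target.HOURS <> source.HOURS
                        OR target.IS_DELETED <> source.IS_DELETED) THEN
        UPDATE SET target.HOURS      = source.HOURS,
                   target.IS_DELETED = source.IS_DELETED
      WHEN NOT MATCHED BY TARGET THEN
        INSERT (PROJECT, JOB_TYPE, MONTH, YEAR, HOURS, IS_DELETED)
        VALUES (source.PROJECT, source.JOB_TYPE, source.MONTH, source.YEAR,
                source.HOURS, source.IS_DELETED);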

    Read the article

  • SQL SERVER – Select and Delete Duplicate Records – SQL in Sixty Seconds #036 – Video

    - by pinaldave
    Developers often face situations where they find that a column has duplicate records and they want to delete them. A good developer will never delete any data without inspecting it first and making sure that what is being deleted is absolutely fine to delete. Before deleting duplicate data, one should select it and confirm that it really is duplicated. In this video we demonstrate two scripts: 1) select duplicate records, 2) delete duplicate records. We assume that the table has a unique incremental id, and that in the case of duplicate records we would like to keep the latest record. If there is really a business need to keep only unique records, one should consider creating a unique index on the column. A unique index will prevent users from entering duplicate data into the table in the first place, which is the best solution. However, deleting duplicate data is also a very valid request: if users realize they need to keep only unique records in the column and are willing to create a unique constraint, the very first requirement of creating that unique constraint is to delete the duplicate records. Let us see how to connect the values in Sixty Seconds. Here is the script used in the video:

      USE tempdb
      GO
      CREATE TABLE TestTable (ID INT, NameCol VARCHAR(100))
      GO
      INSERT INTO TestTable (ID, NameCol)
      SELECT 1, 'First'
      UNION ALL
      SELECT 2, 'Second'
      UNION ALL
      SELECT 3, 'Second'
      UNION ALL
      SELECT 4, 'Second'
      UNION ALL
      SELECT 5, 'Second'
      UNION ALL
      SELECT 6, 'Third'
      GO
      -- Selecting Data
      SELECT * FROM TestTable
      GO
      -- Detecting Duplicate
      SELECT NameCol, COUNT(*) TotalCount
      FROM TestTable
      GROUP BY NameCol
      HAVING COUNT(*) > 1
      ORDER BY COUNT(*) DESC
      GO
      -- Deleting Duplicate
      DELETE FROM TestTable
      WHERE ID NOT IN (
        SELECT MAX(ID) FROM TestTable GROUP BY NameCol)
      GO
      -- Selecting Data
      SELECT * FROM TestTable
      GO
      DROP TABLE TestTable
      GO

    Related Tips in SQL in Sixty Seconds: SQL SERVER – Delete Duplicate Records – Rows; SQL SERVER – Count Duplicate Records – Rows; SQL SERVER – 2005 – 2008 – Delete Duplicate Rows; Delete Duplicate Records – Rows – Readers Contribution; Unique Nonclustered Index Creation with IGNORE_DUP_KEY = ON – A Transactional Behavior. What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel

    Read the article

  • Next Phase of ECM 11g Now Available - New UCM & URM 11g, & Updated I/PM & IRM 11g

    - by michelle.huff
    We're excited to announce that the Oracle Enterprise Content Management Suite 11g is now available! Today, Oracle announced ECM Suite 11g, part of the Fusion Middleware 11gR1 Patchset 2 release, which builds upon the Imaging and Process Management (I/PM) and Information Rights Management (IRM) 11g release earlier this year. Universal Content Management (UCM) and Universal Records Management (URM) 11g are now available with many new features and enhancements. All ECM products are localized into 27 languages, use a single repository, a single installer, and centralized administration, and all run on the same Fusion Middleware tech stack. Oracle ECM Suite 11g is better integrated to fit the way you work, with extreme performance and extreme scalability.

    Universal Content Management:
    One click Web content management - brings Web content management authoring, design and presentation capabilities directly into how organizations design sites, portals, and custom Web applications. Simply take in the right amount of WCM that meets your needs - all without having to rewrite the application or port it over to a new technology stack or framework.
    Greater business user empowerment - with next generation desktop integrations and "smart productivity folders", a new Web site "design mode" for business users, and enhanced rich media support enabling users to better work with the photography, graphics, videos and podcasts created today, as well as contribute content within Flash files directly from the Web.
    Advanced manageability with extreme performance and scalability - centralized system monitoring, installation, logging, performance metrics and diagnostics, with new built-in "fast check-in" features and a redesigned component management interface - all running on Fusion Middleware infrastructure.

    Universal Records Management:
    Enhanced user experience: Oracle URM 11g makes records management easier for both business users and records administrators. Simplifications in the end user experience allow the creation of bookmarks into often-used parts of the file plan, easy copying of categories and dispositions, and integrated folder and records search. The records management dashboard provides a consolidated view into records administrator tasks and system performance.
    DoD 5015.02 v3: Oracle URM is fully certified against all parts of the US Department of Defense records management standard - baseline, classified, and Freedom of Information and Privacy Act. This enables Federal, state, and local governments and public agencies, as well as private companies, to maintain regulated compliance.
    Expanded functionality through Oracle integrations: Oracle URM 11g allows for an expanded set of functionality through integration capabilities with other Oracle products. This includes configurable records definition capabilities directly within a UCM instance. An out-of-the-box integration with Oracle BI Publisher provides easily configured and robust reporting. Additionally, 11g offers an out-of-the-box Oracle Secure Enterprise Search integration enabling real-time full-text discovery across disparate systems in an organization.

    Read the Press Release | Watch the 3 Minute ECM 11g Video | Get Up to Speed with the What's New in ECM Suite Datasheet | Learn More on OTN with new tutorials, downloads and whitepapers

    Read the article

  • Using the Take and Skip keywords to filter records in LINQ

    - by vik20000in
    In LINQ we can use the Take operator to limit the number of records we retrieve from a query. Let's say we want to retrieve only the first 3 records from a list or array; then we can use the following query:

      int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };
      var first3Numbers = numbers.Take(3);

    Take can just as easily be applied to a list of objects in the following way:

      var first3WAOrders = (
          from cust in customers
          from order in cust.Orders
          select cust
      ).Take(3);

    (Note that in the query above the nested from clause walks each customer's Orders before the first 3 records are taken.) In both examples we filter the data down to the number of records we want to fetch, but in both cases we fetch the records from the very beginning. Sometimes we need to fetch records only after skipping some of them, as in paging. For this purpose LINQ provides the Skip method, which skips the number of records passed as a parameter:

      int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };
      var allButFirst4Numbers = numbers.Skip(4);

    Skip can also be applied to a list of objects in the same way:

      var first3WAOrders = (
          from cust in customers
          from order in cust.Orders
          select cust
      ).Skip(3);

    Vikram
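
    Since the post mentions paging, here is a small sketch of the usual Skip/Take combination (the page size and page index values are made up for illustration, and the snippet assumes a using System.Linq; directive):

      int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };
      int pageSize = 3;
      int pageIndex = 1;   // zero-based: the second page

      // Skip the earlier pages, then take one page's worth of records.
      var secondPage = numbers.Skip(pageIndex * pageSize).Take(pageSize);
      // secondPage now contains { 3, 9, 8 }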

    Read the article

  • Free Webinar: Filling the Gap in SharePoint Records Management

    - by CatherineRussell
    Webinar: Filling the Gap in SharePoint Records Management. Find out how you can solve your challenges with conceptClassifier for SharePoint and leverage SharePoint 2007 and 2010 in this free one-hour webinar. This informative webinar will focus on records management in SharePoint and on how Concept Searching's award-winning conceptClassifier for SharePoint automatically generates conceptual and descriptor metadata from documents, automatically changes the Content Type, and automatically declares records. Juan J. Celaya, President and CEO of COMPU-DATA International, LLC, will share his expertise and experience, using the U.S. Army's Joint Services Records Research Center (JSRRC) as a case study, and illustrate how they solved the challenge of processing millions of records to support veterans' claims using conceptClassifier. The webinar is on June 23rd from 11:30am – 12:30pm EST; explore real-world examples of how to simplify your Records Management processes in SharePoint: http://www.clicktoattend.com/?id=149003

    Read the article

  • Finding gaps (missing records) in database records using SQL

    - by Tony_Henrich
    I have a table with a record for every consecutive hour, and each hour has some value. I want a T-SQL query to retrieve the missing records (missing hours, the gaps). So for the DDL below, I should get a record for the missing hour 04/01/2010 02:00 AM (assuming the date range runs between the first and last record). Using SQL Server 2005; a set-based query is preferred. DDL:

      CREATE TABLE [Readings](
        [StartDate] [datetime] NOT NULL,
        [SomeValue] [int] NOT NULL
      )

      INSERT INTO [Readings]([StartDate], [SomeValue])
      SELECT '20100401 00:00:00.000', 2 UNION ALL
      SELECT '20100401 01:00:00.000', 3 UNION ALL
      SELECT '20100401 03:00:00.000', 45
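
    A sketch of one set-based approach that should work on SQL Server 2005: generate every hour between the first and last reading with a recursive CTE, then keep the hours that have no matching row (MAXRECURSION is raised because the default limit of 100 levels would cap the range at about four days):

      WITH Bounds AS (
        SELECT MIN(StartDate) AS MinDate, MAX(StartDate) AS MaxDate FROM [Readings]
      ),
      AllHours AS (
        SELECT MinDate AS HourDate, MaxDate FROM Bounds
        UNION ALL
        SELECT DATEADD(HOUR, 1, HourDate), MaxDate
        FROM AllHours
        WHERE HourDate < MaxDate
      )
      SELECT a.HourDate AS MissingHour
      FROM AllHours a
      LEFT JOIN [Readings] r ON r.StartDate = a.HourDate
      WHERE r.StartDate IS NULL
      OPTION (MAXRECURSION 0);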

    Read the article

  • Read variable-length records from a buffer - weird memory issues

    - by bsg
    Hi, I'm trying to implement an I/O-intensive quicksort (C++ qsort) on a very large dataset. In the interests of speed, I'd like to read a chunk of data at a time into a buffer and then use qsort to sort it inside the buffer. (I am currently working with text files but would like to move to binary soon.) However, my data is composed of variable-length records, and qsort needs to be told the length of the record in order to sort. Is there any way to standardize this? The only thing I could think of was rather convoluted: my program currently reads from the buffer until it hits a linefeed character (10 in ASCII), transferring each character over to another array. When it finds a linefeed (the delimiter in the input file), it fills the number of spaces remaining in the buffer for that record (record size is set to 30) with null characters. This way, I should end up with a buffer full of fixed-size records to give qsort. I know there are several problems with my approach, one being that it's just clumsy, another that the record size might conceivably be larger than 30, though it is generally much less. Is there a better way of doing this? Also, my current code doesn't even work: when I debug it, it seems to be transferring characters from one buffer to the other, but when I try to print out the buffer, it contains only the first record. Here is my code:

      FILE *fp;
      unsigned char *buff;
      unsigned char *realbuff;
      FILE *inputFiles[NUM_INPUT_FILES];

      buff = (unsigned char *) malloc(2048);
      realbuff = (unsigned char *) malloc(NUM_RECORDS * RECORD_SIZE);

      fp = fopen("postings0.txt", "r");
      if(fp)
      {
          fread(buff, 1, 2048, fp);
          /*for(int i=0; i <30; i++)
                cout << buff[i] << endl;*/
          int y=0;
          int recordcounter = 0;
          //cout << buff;
          for(int i=0; i < 100; i++)
          {
              if(buff[i] != char(10))
              {
                  realbuff[y] = buff[i];
                  y++;
                  recordcounter++;
              }
              else
              {
                  if(recordcounter < RECORD_SIZE)
                      for(int j=recordcounter; j < RECORD_SIZE; j++)
                      {
                          realbuff[y] = char(0);
                          y++;
                      }
                  recordcounter = 0;
              }
          }
          cout << realbuff << endl;
          cout << buff;
      }
      else
          cout << "sorry";

    Thank you very much, bsg

    Read the article

  • Create Duplicate Records on SELECT for Calendar Date Range

    - by peterallcdn
    Hey all, I've built a pretty shnazzy calendar system, but there is one tweak that I need to make before I'm completely happy with it. My calendar has three tables:

      calevents - The calendared event.
      caldates  - The occurrences and date range of each occurrence for each event.
      calcats   - The categories that can be applied to an event.

    The short: For each calevent there can be many caldates, one for each occurrence of the calevent. So a calevent that repeats weekly and spans 3 days might have caldates like this:

      date_id  date_eid  date_start  date_end
      2        37        2010-06-21  2010-06-23
      3        37        2010-06-28  2010-06-30
      7        37        2010-07-05  2010-07-07
      9        37        2010-07-12  2010-07-14

    What I want to do, when selecting all the caldates for a specified month such as 2010-06, is to return not just the two matching records above but a record for each date in the range date_start to date_end of each caldate. So if I searched for 2010-06, I would get:

      date_id  date_eid  date_start  date_end    date_day
      2        37        2010-06-21  2010-06-23  2010-06-21
      2        37        2010-06-21  2010-06-23  2010-06-22
      2        37        2010-06-21  2010-06-23  2010-06-23
      3        37        2010-06-28  2010-06-30  2010-06-28
      3        37        2010-06-28  2010-06-30  2010-06-29
      3        37        2010-06-28  2010-06-30  2010-06-30

    The long: The reason I want to do this is that when displaying a list of events (calevents) for a specified month, an occurrence (caldates) of that event should be displayed for EACH of the days it spans. I could do this in PHP by looping through each day of the current month and displaying a copy of each caldate whose range covers that day, but doing it that way would prevent me from using record pagination if needed. For example, if for a specified month the following caldates were returned:

      date_id  date_eid  date_start  date_end
      2        37        2010-06-21  2010-06-27
      94       53        2010-06-09  2010-07-08

    record pagination would see this as only 2 records ("rows"), while looping through them with PHP would generate 29 "rows". So I figure that if I use MySQL to create each row instead of PHP, I can achieve the same thing AND still use pagination when a month has a lot of events/dates. As far as performance goes, I'm not sure which option is more efficient; both would send the same amount of info to the browser, so it's really only the work required to generate it that matters. My current query, which fetches all the occurrences for a specified month and (to make things just a little more complicated) joins them with their event and category, looks like this:

      $sql_to_execute = "
          SELECT
              date_id, date_eid, date_start, date_end,
              event_id, event_title, event_category, event_private, event_location,
              SUBSTRING_INDEX(event_detailsstripped, ' ', 40) AS event_detailsstripped,
              event_time, event_starttime, event_endtime, event_active,
              cat_colour
          FROM (
              caldates
              LEFT JOIN calevents ON caldates.date_eid = calevents.event_id
          )
          LEFT JOIN calcats ON calevents.event_category = calcats.cat_id
          WHERE date_start <= '".mysql_real_escape_string($dbi_list_end_date)."'
              AND date_end >= '".mysql_real_escape_string($dbi_list_start_date)."'
              ".$dbi_category."
          ORDER BY date_start ASC
      ";

    Any help or advice would be greatly appreciated! Thanks, Peter
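
    A sketch of one common way to get a row per day directly from MySQL, assuming a small helper table of integers (here called numbers, holding 0, 1, 2, ... up to the longest span in days - it is not part of the schema above): join each caldates row to every offset that falls inside its range.

      SELECT d.date_id,
             d.date_eid,
             d.date_start,
             d.date_end,
             DATE_ADD(d.date_start, INTERVAL num.n DAY) AS date_day
      FROM caldates AS d
      JOIN numbers AS num
        ON num.n <= DATEDIFF(d.date_end, d.date_start)
      WHERE DATE_ADD(d.date_start, INTERVAL num.n DAY)
            BETWEEN '2010-06-01' AND '2010-06-30'   -- placeholder month bounds
      ORDER BY d.date_id, date_day;

    Because each output row is a real result row, LIMIT/OFFSET pagination can be applied to it directly.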

    Read the article
