Search Results

Search found 5233 results on 210 pages for 'records'.


  • Updating several records at once using Django

    - by 47
    I want to create a list of records with checkboxes on the left side, rather like the inbox in Gmail. When a user selects some or all of these checkboxes, the selected record(s) should be updated (only one field will be updated, by the way), possibly by clicking a button. I'm stuck on how to do this, though. Ideas?
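
    A minimal sketch of one server-side approach, assuming a hypothetical Item model with a status field and checkboxes named "selected"; every name here is illustrative, not taken from the question. A single queryset update covers all checked rows:

        # Django view: update every checked record in one UPDATE statement.
        from django.shortcuts import redirect
        from myapp.models import Item  # hypothetical app and model

        def bulk_update(request):
            if request.method == "POST":
                # Each row renders <input type="checkbox" name="selected" value="{{ item.pk }}">,
                # so getlist() yields the primary keys of the checked rows.
                ids = request.POST.getlist("selected")
                # One UPDATE covers all selected records.
                Item.objects.filter(pk__in=ids).update(status="archived")
            return redirect("item-list")  # assumes a URL pattern with this name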

    Read the article

  • Rtti for Variant Records

    - by Coco
    I am trying to write a kind of object/record serializer with Delphi 2010 and wonder if there is a way to detect whether a record is a variant record, e.g. the TRect record as defined in Types.pas:

        TRect = record
          case Integer of
            0: (Left, Top, Right, Bottom: Longint);
            1: (TopLeft, BottomRight: TPoint);
        end;

    As my serializer works recursively on my data structures, it will descend into the TPoint records and generate redundant information in my serialized file. Is there a way to avoid this by getting detailed information on the record?

    Read the article

  • saving a record containing a member of type string to a file (Delphi, Windows)

    - by wonderer
    I have a record that looks similar to:

        type
          TNote = record
            Title : string;
            Note  : string;
            Index : integer;
          end;

    Simple. The reason I chose to declare the fields as string (as opposed to an array of chars) is that I have no idea how long those strings are going to be. They can be 1 char long, or 200, or 2000. Of course, when I try to save the record to a typed file (file of ...), the compiler complains that I have to give a size to the string. Is there a way to overcome this, or a way to save those records to an untyped file and still keep them searchable? Please do not just point me to possible solutions; if you know the solution, please post code. Thank you.
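
    The usual technique behind this problem is language-agnostic: write each variable-length string as a length prefix followed by its bytes, so records can be read back without a fixed size. A sketch of the idea in Python (UTF-8 encoding and the file name are assumptions for illustration):

        # Length-prefixed serialization of a (title, note, index) record.
        import struct

        def write_note(f, title, note, index):
            for s in (title, note):
                data = s.encode("utf-8")
                f.write(struct.pack("<I", len(data)))  # 4-byte length prefix
                f.write(data)                          # then the bytes themselves
            f.write(struct.pack("<i", index))

        def read_note(f):
            fields = []
            for _ in range(2):
                (n,) = struct.unpack("<I", f.read(4))
                fields.append(f.read(n).decode("utf-8"))
            (index,) = struct.unpack("<i", f.read(4))
            return fields[0], fields[1], index

        with open("notes.dat", "wb") as f:
            write_note(f, "Shopping", "A note of any length at all", 1)
        with open("notes.dat", "rb") as f:
            print(read_note(f))  # ('Shopping', 'A note of any length at all', 1)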

    Read the article

  • Delphi ADO (MDB) update records

    - by ml
    I'm trying to copy data from one master table and 2 child tables. When I select one record in the master table, I copy all the fields from that table to the other (Table1 copies the selected record from ADOQuery):

        procedure TForm1.copyButton7Click(Sender: TObject);
        begin
          SQL.Clear;
          SQL.Add('SELECT * from ADOQuery');
          SQL.Add('Where numeracao LIKE ''%' + NInterv.text); // locate the record selected in Table1
          Open;
          // initiate the copy of records
          while not tableADoquery.Eof do
          begin
            Table1.Last;
            Table1.Append; // how to append only if necessary?
            Table1.Edit;
            Table1.FieldByName('C').Value        := ADoquery.FieldByName('C').Value;
            Table1.FieldByName('client').Value   := ADoquery.FieldByName('client').Value;
            Table1.FieldByName('Cnpj_cpf').Value := ADoquery.FieldByName('Cnpj_cpf').Value;
            table1.Post;
            table2.next;
          end;
        end;

    How can I update the TableChield and TableChield1 fields at the same time, doing the same for the child tables (TableChield <= TableChield_1, TableChield1 <= TableChield_2)? Thanks.
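
    The pattern being attempted - copy the selected master row, then copy its child rows keyed by the master ID - looks the same in any data-access layer. A self-contained sketch in Python with sqlite3; the schema, table, and column names are invented for illustration, and parameterized queries replace the string concatenation used above:

        import sqlite3

        def copy_record(conn, src_id):
            cur = conn.cursor()
            # Copy the master row.
            row = cur.execute(
                "SELECT client, cnpj_cpf FROM master WHERE id = ?", (src_id,)
            ).fetchone()
            cur.execute("INSERT INTO master (client, cnpj_cpf) VALUES (?, ?)", row)
            new_id = cur.lastrowid
            # Copy every child row under the new master key.
            for table in ("child1", "child2"):
                cur.execute(
                    f"INSERT INTO {table} (master_id, payload) "
                    f"SELECT ?, payload FROM {table} WHERE master_id = ?",
                    (new_id, src_id),
                )
            conn.commit()

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE master (id INTEGER PRIMARY KEY, client TEXT, cnpj_cpf TEXT);
            CREATE TABLE child1 (master_id INTEGER, payload TEXT);
            CREATE TABLE child2 (master_id INTEGER, payload TEXT);
            INSERT INTO master (client, cnpj_cpf) VALUES ('Acme', '123');
            INSERT INTO child1 VALUES (1, 'c1-row');
            INSERT INTO child2 VALUES (1, 'c2-row');
        """)
        copy_record(conn, 1)
        print(conn.execute("SELECT * FROM child1").fetchall())
        # [(1, 'c1-row'), (2, 'c1-row')]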

    Read the article

  • Advice on "Invalid Pointer Operation" when using complex records

    - by Xaz
    Env: Delphi 2007. Justification: I tend to use complex records quite frequently, as they offer almost all of the advantages of classes but with much simpler handling. Anyhoo, one particularly complex record I have just implemented is trashing memory (later leading to an "Invalid Pointer Operation" error). This is an example of the memory-trashing code:

        sSignature := gProfiles.Profile[_stPrimary].Signature.Formatted(True);

    The second time I call it, I get "Invalid Pointer Operation". It works OK if I call it like this:

        AProfile   := gProfiles.Profile[_stPrimary];
        ASignature := AProfile.Signature;
        sSignature := ASignature.Formatted(True);

    Background code:

        gProfiles: TProfiles;

        TProfiles = record
        private
          FPrimaryProfileID: Integer;
          FCachedProfile: TProfile;
          ...
        public
          < much code removed >
          property Profile[ProfileType: TProfileType]: TProfile read GetProfile;
        end;

        function TProfiles.GetProfile(ProfileType: TProfileType): TProfile;
        begin
          case ProfileType of
            _stPrimary: Result := ProfileByID(FPrimaryProfileID);
            ...
          end;
        end;

        function TProfiles.ProfileByID(iID: Integer): TProfile;
        begin
          <snip>
          if LoadProfileOfID(iID, FCachedProfile) then
          begin
            Result := FCachedProfile;
          end
          else ...
        end;

        TProfile = record
        private
          ...
        public
          ...
          Signature: TSignature;
          ...
        end;

        TSignature = record
        private
        public
          PlainTextFormat : string;
          HTMLFormat      : string;
          // The text to insert into a message when using this profile
          function Formatted(bHTML: boolean): string;
        end;

        function TSignature.Formatted(bHTML: boolean): string;
        begin
          if bHTML then
            result := HTMLFormat
          else
            result := PlainTextFormat;
          < SNIP MUCH CODE >
        end;

    OK, so I have a record within a record within a record, which is approaching Inception-level confusion, and I'm the first to admit it is not really a good model. Clearly I am going to have to restructure it. What I would like from you gurus is a better understanding of why it is trashing the memory (something to do with the string object that is created and then freed...) so that I can avoid making these kinds of errors in future. Thanks.

    Read the article

  • C# Multi CheckboxList update inserts checked records but doesn't delete unchecked records

    - by DLL
    I have a multi checkboxlist on a formview. Both use queries in a tableadapter. I'm using VS 2012. When the user updates the form, I use the following code to update the checkbox data. If a user checks a new box, a new record is inserted correctly; however, if the user unchecks a box, the existing record is not deleted. The delete query works fine if I run it from the query builder in the table adapter; the expected line in the code is reached correctly, all values are correct, and I receive no errors. I use a similar query to delete records when the form-level data is deleted, which works fine. The very last line is the one that doesn't work.

    Query:

        DELETE FROM [SLA_Categories]
        WHERE (([SLA_ID] = @SLA_ID) AND ([Choice_ID] = @Choice_ID))

    Code:

        protected void FormView1_ItemUpdating(object sender, FormViewUpdateEventArgs e)
        {
            if (FormView1.DataKey.Value != null)
            {
                Categs = (CheckBoxList)FormView1.FindControl("CheckBoxList1");
                CurrentSLA_ID = (int)FormView1.DataKey.Value;
            }
            if (CurrentSLA_ID > 0)
            {
                foreach (ListItem li in Categs.Items)
                {
                    // See if there's a record for the current sla in this category
                    int CurrentChoice_ID = Convert.ToInt32(li.Value);
                    SLADataSetTableAdapters.SLA_CategoriesTableAdapter myAdapter;
                    myAdapter = new SLADataSetTableAdapters.SLA_CategoriesTableAdapter();
                    int myCount = (int)myAdapter.FindCategoryBySLA_IDAndChoice_ID(CurrentSLA_ID, CurrentChoice_ID);
                    // If this category is checked and there is not an existing rec, insert one
                    if (li.Selected == true && myCount < 1)
                    {
                        // Insert a rec for this sla
                        myAdapter.InsertCategory(CurrentChoice_ID, CurrentSLA_ID);
                    }
                    // If this category is unchecked and there is an existing rec, delete it
                    if (li.Selected == false && myCount > 0)
                    {
                        // Delete this rec
                        myAdapter.DeleteCategoryBySLA_IDAndChoice_ID(CurrentChoice_ID, CurrentSLA_ID);
                    }
                }
            }
        }
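
    The reconciliation logic in the handler - insert what is newly checked, delete what is newly unchecked - can also be derived up front with two set differences, which makes the intent easy to test in isolation. A small illustration in Python (the IDs are made up):

        # Compare the checked set against the stored set.
        def reconcile(checked_ids, stored_ids):
            checked, stored = set(checked_ids), set(stored_ids)
            to_insert = checked - stored   # newly checked boxes
            to_delete = stored - checked   # newly unchecked boxes
            return to_insert, to_delete

        ins, dele = reconcile(checked_ids=[1, 2, 5], stored_ids=[2, 3])
        print(ins, dele)  # {1, 5} {3}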

    Read the article

  • iPhone Core Data: removing records between two entities connected by a relationship

    - by satyam
    I'm implementing Core Data in my iPhone app. It has two entities:

        Entity1: LatestData (URL, publishedDate, heading)
        Entity2: LatestDetailedData (URL, NewsDescription, PublishedDate, Author)

    Both entities have the same URL for a given record. The entities are connected with an inverse relationship, and the relationship's delete rule is Cascade. What I want: if I remove a record in LatestData, the record with the same URL in LatestDetailedData must also be deleted. How?

    Read the article

  • How to quickly search through a very large list of strings/records in a database

    - by Giorgio
    I have the following problem: I have a database containing more than 2 million records. Each record has a string field X, and I want to display a list of records for which field X contains a certain string. Each record is about 500 bytes in size.

    To make it more concrete: in the GUI of my application I have a text field where I can enter a string. Above the text field I have a table displaying the (first N, e.g. 100) records that match the string in the text field. When I type or delete one character in the text field, the table content must be updated on the fly.

    I wonder if there is an efficient way of doing this using appropriate index structures and/or caching. As explained above, I only want to display the first N items that match the query. Therefore, for N small enough, it should not be a big issue to load the matching items from the database. Besides, caching items in main memory can make retrieval faster. I think the main problem is how to find the matching items quickly, given the pattern string. Can I rely on some DBMS facilities, or do I have to build some in-memory index myself? Any ideas?

    EDIT: I have run a first experiment. I have split the records into different text files (at most 200 records per file) and put the files in different directories (I used the content of one data field to determine the directory tree). I end up with about 50000 files in about 40000 directories. I have then run Lucene to index the files. Searching for a string with the Lucene demo program is pretty fast. Splitting and indexing took a few minutes: this is totally acceptable for me because it is a static data set that I want to query. The next step is to integrate Lucene into the main program and use the hits returned by Lucene to load the relevant records into main memory.
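
    One way to build the in-memory index the question contemplates is an n-gram (here trigram) index: map each trigram to the set of record IDs whose field contains it; for a query, intersect the sets for the query's trigrams and verify the candidates. A toy sketch under the assumption that the records fit in memory; all data and names are illustrative:

        from collections import defaultdict

        def trigrams(s):
            s = s.lower()
            return {s[i:i + 3] for i in range(len(s) - 2)}

        class TrigramIndex:
            def __init__(self, records):           # records: {id: text}
                self.records = records
                self.index = defaultdict(set)
                for rid, text in records.items():
                    for g in trigrams(text):
                        self.index[g].add(rid)

            def search(self, query, limit=100):
                grams = trigrams(query)
                if not grams:                      # queries shorter than 3 chars
                    candidates = set(self.records)
                else:
                    sets = [self.index.get(g, set()) for g in grams]
                    candidates = set.intersection(*sets)
                q = query.lower()
                # Verify candidates, since shared trigrams do not guarantee a match.
                return [r for r in sorted(candidates) if q in self.records[r].lower()][:limit]

        idx = TrigramIndex({1: "alpha record", 2: "beta record", 3: "gamma"})
        print(idx.search("record"))  # [1, 2]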

    Read the article

  • Mysql: Working With 192 Trillion Records... (Yes, 192 Trillion)

    - by Sarah
    Here's the question... Considering 192 trillion records, what should my considerations be? My main concern is speed. Here's the table...

        CREATE TABLE `ref` (
          `id` INTEGER(13) AUTO_INCREMENT DEFAULT NOT NULL,
          `rel_id` INTEGER(13) NOT NULL,
          `p1` INTEGER(13) NOT NULL,
          `p2` INTEGER(13) DEFAULT NULL,
          `p3` INTEGER(13) DEFAULT NULL,
          `s` INTEGER(13) NOT NULL,
          `p4` INTEGER(13) DEFAULT NULL,
          `p5` INTEGER(13) DEFAULT NULL,
          `p6` INTEGER(13) DEFAULT NULL,
          PRIMARY KEY (`id`),
          KEY (`s`),
          KEY (`rel_id`),
          KEY (`p3`),
          KEY (`p4`)
        );

    Here's the queries...

        SELECT id, s FROM ref WHERE rel_id="$rel_id" AND p3="$p3" AND p4="$p4"
        SELECT rel_id, p1, p2, p3, p4, p5, p6 FROM ref WHERE id="$id"
        INSERT INTO ref (rel_id, p1, p2, p3, s, p4, p5, p6)
          VALUES ("$rel_id", "$p1", "$p2", "$p3", "$s", "$p4", "$p5", "$p6")

    Here's some notes...

    - The SELECTs will be done much more frequently than the INSERTs. However, occasionally I want to add a few hundred records at a time.
    - Load-wise, there will be nothing for hours, then maybe a few thousand queries all at once.
    - I don't think I can normalize any more (I need the p values in combination).
    - The database as a whole is very relational.
    - This will be the largest table by far (the next largest is about 900k rows).

    UPDATE (08/11/2010): Interestingly, I've been given a second option... Instead of 192 trillion records I could store 2.6*10^16 (15 zeros, meaning 26 quadrillion)... But in this second option I would only need to store one bigint(18) as the index in a table. That's it - just the one column. So I would just be checking for the existence of a value, occasionally adding records, never deleting them. That makes me think there must be a better solution than MySQL for simply storing numbers... Given this second option, should I take it, or stick with the first?

    [edit] Just got news of some testing that's been done - 100 million rows with this setup returns the query in 0.0004 seconds. [/edit]
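
    On the second option - a pure "does this value exist?" membership test with occasional inserts and no deletes - one classic alternative to a relational index is a probabilistic structure such as a Bloom filter, which trades a tunable false-positive rate for far less space. A toy Python sketch; the sizes here are illustrative and nowhere near tuned for 26 quadrillion keys:

        import hashlib

        class BloomFilter:
            def __init__(self, size_bits, num_hashes):
                self.size = size_bits
                self.num_hashes = num_hashes
                self.bits = bytearray(size_bits // 8 + 1)

            def _positions(self, key):
                # Derive num_hashes bit positions from salted SHA-256 digests.
                for i in range(self.num_hashes):
                    h = hashlib.sha256(f"{i}:{key}".encode()).digest()
                    yield int.from_bytes(h[:8], "big") % self.size

            def add(self, key):
                for p in self._positions(key):
                    self.bits[p // 8] |= 1 << (p % 8)

            def __contains__(self, key):
                return all(self.bits[p // 8] & (1 << (p % 8))
                           for p in self._positions(key))

        bf = BloomFilter(size_bits=10_000_000, num_hashes=7)
        bf.add(123456789012345678)
        print(123456789012345678 in bf)  # True
        print(42 in bf)                  # False (with high probability)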

    Read the article

  • Merge Records in Session bean by using ADF Drag/Drop

    - by shantala.sankeshwar
    This article describes how to merge multiple selected records in a Session Bean using the ADF drag & drop feature. Below is a simple use case that shows exactly how this can be achieved. Here we have a table and a user input field. The table shows EMP records and the user input field accepts a Salary. When we drag & drop multiple records onto the user input field, the selected records get updated with the new Salary provided.

    Steps:

    1. Let us suppose that we have created a Java EE Web Application with Entities from the Emp table. Then create an EJB Session Bean and generate a Data Control for it. Write a simple method in the session EJB bean and expose it to the local interface:

        public void updateEmprecords(List empList, Object sal) {
          Emp emp = null;
          for (int i = 0; i < empList.size(); i++) {
            emp = em.find(Emp.class, empList.get(i));
            emp.setSal((BigDecimal) sal);
          }
          em.merge(emp);
        }

    2. Now let us create an updateEmpRecords.jspx page in the viewController project and drop the empFindAll object as an ADF Table.

    3. Define a custom SelectionListener method for the table:

        public void selectionListener(SelectionEvent selectionEvent) {
          // This method gets the Empno of the selected record and stores it in the list object
          UIXTable table = (UIXTable) selectionEvent.getComponent();
          FacesCtrlHierNodeBinding fcr =
              (FacesCtrlHierNodeBinding) table.getSelectedRowData();
          Number empNo = (Number) fcr.getAttribute("empno");
          this.getSelectedRowsList().add(empNo);
        }

    4. Set the table's selectedRowKeys to #{bindings.empFindAll.collectionModel.selectedRow}.

    5. Drop an inputText on the same jspx page that accepts the Salary. Now we would like to drag records from the above table and drop them on the inputText field. This feature can be achieved by inserting a dragSource operation inside the table and a dropTarget operation inside the inputText:

        <af:dragSource discriminant="tab"/> <!-- insert this inside the table -->
        <af:inputText label="Enter Salary" id="it13" autoSubmit="true"
                      binding="#{test.deptValue}">
          <af:dropTarget dropListener="#{test.handleTableDrop}">
            <af:dataFlavor
                flavorClass="org.apache.myfaces.trinidad.model.RowKeySet"
                discriminant="tab"/>
          </af:dropTarget>
          <af:convertNumber/>
        </af:inputText>

    6. In the above code, when the user drags & drops multiple records on the inputText, the dropListener method gets called. Go to the respective page definition file, create an updateEmprecords method action and an Execute action, and write the dropListener method code:

        public DnDAction handleTableDrop(DropEvent dropEvent) {
          // Below code gets the updateEmprecords method, passes parameters & executes the method
          DataFlavor<RowKeySet> df = DataFlavor.getDataFlavor(RowKeySet.class);
          RowKeySet droppedKeySet = dropEvent.getTransferable().getData(df);
          if (droppedKeySet != null && droppedKeySet.size() > 0) {
            DCBindingContainer bindings =
                (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
            OperationBinding updateEmp;
            updateEmp = bindings.getOperationBinding("updateEmprecords");
            updateEmp.getParamsMap().put("sal",
                this.getDeptValue().getAttributes().get("value"));
            updateEmp.getParamsMap().put("empList", this.getSelectedRowsList());
            updateEmp.execute();
            // Below code performs the Execute operation to refresh the updated records
            OperationBinding executeBinding;
            executeBinding = bindings.getOperationBinding("Execute");
            executeBinding.execute();
            AdfFacesContext.getCurrentInstance().addPartialTarget(dropEvent.getDragComponent());
            this.getSelectedRowsList().clear();
          }
          return DnDAction.NONE;
        }

    7. Run the updateEmpRecords.jspx page and enter any Salary, say '5000'. Select multiple records in the table and drop the selected records on the inputText Salary. Note that the salary value of all the selected records gets updated to 5000.

    Read the article

  • What's the progress on Haskell records?

    - by mmh
    Recently I stumbled once again on the issues with Haskell's records, in particular the uniqueness of field names (it's a pain...). I have already read A Proposal for Records in Haskell by SPJ and Greg Morrisett, but its last update was in 2003. Another paper, Lightweight Extensible Records for Haskell by SPJ and Mark Jones, is even older: it's from a Haskell workshop in 1999. Now I wonder whether the process of giving Haskell new records has made any progress. Does anybody know something about it, or can you point me to some further reading?

    Read the article

  • Ghost Records, Backups, and Database Compression…With a Pinch of Security Considerations

    - by Argenis
    Today Jeffrey Langdon (@jlangdon) posed the following questions on #SQLHelp, so I set out to answer them, saying to myself: "Hey, I haven't blogged in a while, how about I blog about this particular topic?". Thus, this post was born. (If you have never heard of Ghost Records and/or the Ghost Cleanup Task, go see this blog post by Paul Randal.)

    1) Do ghost records get copied over in a backup?

    If you guessed yes, you guessed right. The backup process in SQL Server takes all data as it is on disk - it doesn't crack the pages open to selectively pick which slots have actual data and which ones do not. The whole page is backed up, regardless of its contents. Even if ghost cleanup has run and processed the ghost records, the slots are not overwritten immediately, but remain as they are until another DML operation comes along and uses them. As a matter of fact, all of the allocated space for a database will be included in a full backup.

    So this poses a bit of a security/compliance problem for some of you DBA folk: if you want to take a full backup of a database after you've purged sensitive data, you should rebuild all of your indexes (with FILLFACTOR set to 100%). But the empty space on your data file(s) might still contain sensitive data! A SHRINKFILE might help get rid of that (not so) empty space, but that might not be the end of your troubles. You might _STILL_ have (not so) empty space in your files! One approach you can follow is to export all of the data in your database to another SQL Server instance that does NOT have Instant File Initialization enabled. This can be a tedious and time-consuming process, though, so you have to weigh your options and see what makes sense for you. Snapshot Replication is another idea that comes to mind.

    2) Does Compression get rid of ghost records (2008)?

    The answer to this is no. The Ghost Records/Ghost Cleanup Task mechanism is alive and well on compressed tables and indexes. You can prove this by running a simple script:

        CREATE DATABASE GhostRecordsTest
        GO
        USE GhostRecordsTest
        GO
        CREATE TABLE myTable (myPrimaryKey int IDENTITY(1,1) PRIMARY KEY CLUSTERED,
                              myWideColumn varchar(1000) NOT NULL DEFAULT 'Default string value')
        ALTER TABLE myTable REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)
        GO
        INSERT INTO myTable DEFAULT VALUES
        GO 10
        DELETE myTable WHERE myPrimaryKey % 2 = 0
        DBCC TRACEON(2514)
        DBCC CHECKTABLE(myTable)

    TraceFlag 2514 will make DBCC CHECKTABLE give you an extra tidbit of information in its output. For the above script: "Ghost Record count = 5".

    Until next time,

    -Argenis

    Read the article

  • What are the main differences between SRV records and TXT records?

    - by Chris Adams
    Hi there, I'm trying to consolidate the domain names for the servers I look after so that I use just one panel instead of 3 or 4, and one thing stopping me is that the provider I originally wanted to move them to only lets me set the following kinds of records: A, MX, NS, CNAME, TXT. The first four I understand, but I'm not sure about the relationship (if any) between SRV records and TXT records. Can I use TXT records in place of SRV records? They both seem to be general text records that just point at a particular server without needing to specify a particular protocol, so it doesn't sound like a totally unreasonable assumption, but I'd rather check here before I break something. If I can only set the above records, does that mean I'm essentially unable to do any SRV record redirecting? Thanks!

    Read the article

  • Adding a Column to a SQL Server Table

    - by Dinesh Asanka
    Adding a column to a table is a common task for DBAs. You can add a column to a table as a nullable column or as a column with default values. But are these two operations similar internally, and which method is optimal? Let us start with an example. I created a database and a table using the following script:

        USE master
        GO
        --Drop Database if exists
        IF EXISTS (SELECT 1 FROM SYS.databases WHERE name = 'AddColumn')
          DROP DATABASE AddColumn
        --Create the database
        CREATE DATABASE AddColumn
        GO
        USE AddColumn
        GO
        --Drop the table if exists
        IF EXISTS (SELECT 1 FROM sys.tables WHERE Name = 'ExistingTable')
          DROP TABLE ExistingTable
        GO
        --Create the table
        CREATE TABLE ExistingTable
        (ID BIGINT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
         DateTime1 DATETIME DEFAULT GETDATE(),
         DateTime2 DATETIME DEFAULT GETDATE(),
         DateTime3 DATETIME DEFAULT GETDATE(),
         DateTime4 DATETIME DEFAULT GETDATE(),
         Gendar CHAR(1) DEFAULT 'M',
         STATUS1 CHAR(1) DEFAULT 'Y')
        GO
        -- Insert 100,000 records with default values
        INSERT INTO ExistingTable DEFAULT VALUES
        GO 100000

    Before adding a Column

    Before adding a column, let us look at some of the details of the database.

        DBCC IND (AddColumn,ExistingTable,1)

    By running the above query, you will see 637 pages for the created table.

    Adding a Column

    You can add a column to the table with the following statement:

        ALTER TABLE ExistingTable Add NewColumn INT NULL

    The above will add a column with a null value for the existing records. Alternatively, you could add a column with default values:

        ALTER TABLE ExistingTable Add NewColumn INT NOT NULL DEFAULT 1

    The above statement will add a column with a value of 1 for the existing records. In the table below I measured the performance difference between the two statements:

        Parameter   Nullable Column   Default Value
        CPU         31                702
        Duration    129 ms            6653 ms
        Reads       38                116,397
        Writes      6                 1329
        Row Count   0                 100000

    If you look at the Row Count parameter, you can clearly see the difference. Though a column is added in the first case, none of the rows are affected, while in the second case all the rows are updated. That is the reason why it takes more duration and CPU to add a column with a default value. We can verify this by several methods.

    Number of Pages

    The number of data pages can be obtained by using the DBCC IND command. Though this is an undocumented DBCC command, many experts are OK with using it in production. However, since there is no official word from Microsoft, use it "at your own risk".

        DBCC IND (AddColumn,ExistingTable,1)

        Before adding the columns             637
        Adding a column with NULL             637
        Adding a column with DEFAULT value    1270

    This clearly shows that pages are physically modified. Please note, the high value indicated in the "Adding a column with DEFAULT value" row is also a result of page splits. Continues…

    Read the article

  • Synthetic database records

    - by michipili
    Assume we are getting some statistics from a customer, which we analyse before sending our comments back to the customer. Now the customer tells us that the statistics they computed between January and March are based on a wrong methodology, and sends us corrected series. We want to perform analyses with both the wrong and the correct set of data, which are huge and only differ from January to March. Therefore, we need something like synthetic database records implementing the following logic:

        synthetic[1] = wrong_data
        synthetic[2] = correct_data between January and March, wrong_data otherwise

    With this, we can easily perform our analyses on synthetic records. Should such synthetic records be implemented in the application logic or on the side of the database? What are common pitfalls of such an implementation?
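
    One application-side way to realize this, sketched in Python under the assumption that each series is a mapping from date to value (all names are hypothetical): the synthetic series lazily overlays the corrected window on top of the original data, so neither huge series is copied.

        from datetime import date

        def make_synthetic(wrong, correct, start, end):
            """Return a lookup that prefers `correct` inside [start, end]."""
            def lookup(day):
                if start <= day <= end and day in correct:
                    return correct[day]
                return wrong[day]
            return lookup

        wrong   = {date(2020, 1, 15): 10.0, date(2020, 6, 1): 7.5}
        correct = {date(2020, 1, 15): 12.5}

        synthetic = make_synthetic(wrong, correct, date(2020, 1, 1), date(2020, 3, 31))
        print(synthetic(date(2020, 1, 15)))  # 12.5 (corrected window)
        print(synthetic(date(2020, 6, 1)))   # 7.5  (falls through to the original)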

    Read the article

  • TinyDNS and proper settings for SPF records

    - by Teddy
    I've inherited a TinyDNS configuration that has the following entries for SPF:

        @domain.com:x.x.x.3:a::86400
        @domain.com:x.x.x.103:c:10:86400
        =domain.com:x.x.x.3:86400
        =mail.domain.com:x.x.x.3:86400
        =mail.domain.com:x.x.x.103:86400
        'domain.com:v=spf1 ip4\072x.x.x.3 ip4\07231.130.96.103 ptr\072mail.domain.com +mx a -all:3600
        'mail.domain.com:v=spf1 ip4\072x.x.x.3 ip4\072x.x.x.103 ptr\072mail.domain.com +mx a -all:3600
        'a.mx.domain.com:v=spf1 ip4\072x.x.x.3 ip4\072x.x.x.103 ptr\072mail.domain.com +mx a -all:3600

    This is the result from http://www.kitterman.com/spf/validate.html:

        SPF record lookup and validation for: domain.com
        SPF records are primarily published in DNS as TXT records.
        The TXT records found for your domain are:
        v=spf1 ip4:x.x.x.3 ip4:x.x.x.103 ptr:mail.domain.com +mx a -all
        SPF records should also be published in DNS as type SPF records.
        No type SPF records found.
        Checking to see if there is a valid SPF record.
        Found v=spf1 record for domain.com:
        v=spf1 ip4:x.x.x.3 ip4:x.x.x.103 ptr:mail.domain.com +mx a -all
        evaluating...
        SPF record passed validation test with pySPF (Python SPF library)!

    I've been struggling with this since yesterday and can't figure out why this validator returns "No type SPF records found". I see that in BIND we can define an SPF-type record with example.com. IN SPF "v=spf1 a -all", but in TinyDNS we only have TXT records that we set for SPF; maybe this is the problem?

    Read the article

  • SQL SERVER – How to Recover SQL Database Data Deleted by Accident

    - by Pinal Dave
    In Repair a SQL Server database using a transaction log explorer, I showed how to use ApexSQL Log, a SQL Server transaction log viewer, to recover a SQL Server database after a disaster. In this blog, I'll show you how to use another SQL Server disaster recovery tool from ApexSQL in a situation where data is accidentally deleted. You can download ApexSQL Recover here, install it, and play along.

    With a good SQL Server disaster recovery strategy, data recovery is not a problem. You have a reliable full database backup with valid data, a full database backup and subsequent differential database backups, or a full database backup and a chain of transaction log backups. But not all situations are ideal. Here we'll address some sub-optimal scenarios where you can still successfully recover data.

    If you have only a full database backup

    This is the least optimal SQL Server disaster recovery strategy, as it doesn't ensure minimal data loss. For example, data was deleted on Wednesday. Your last full database backup was created on Sunday, three days before the records were deleted. By using the full database backup created on Sunday, you will be able to recover the SQL database records that existed in the table on Sunday. If any records were inserted into the table on Monday or Tuesday, they will be lost forever. The same goes for records modified in this period. This method will not bring back modified records, only the old records that existed on Sunday. If you restore this full database backup, all your changes (intentional and accidental) will be lost and the database will be reverted to the state it had on Sunday. What you have to do is compare the records that were in the table on Sunday to the records on Wednesday, create a synchronization script, and execute it against the Wednesday database.

    If you have a full database backup followed by differential database backups

    Let's say the situation is the same as in the example above, only you create a differential database backup every night. Use the full database backup created on Sunday and the last differential database backup (created on Tuesday). In this scenario, you will lose only the data inserted and updated after the differential backup created on Tuesday.

    If you have a full database backup and a chain of transaction log backups

    This is the SQL Server disaster recovery strategy that provides minimal data loss. With a full chain of transaction logs, you can recover the SQL database to an exact point in time. To provide optimal results, you have to know exactly when the records were deleted, because restoring to a later point will not bring back the records. This method requires restoring the full database backup first. If you have any differential database backup created after the last full database backup, restore the most recent one. Then restore transaction log backups, one by one, in the order they were created, starting with the first created after the restored differential database backup. Now the table will be in the state it was in before the records were deleted. You have to identify the deleted records, script them, and run the script against the original database. Although this method is reliable, it is time-consuming and requires a lot of space on disk.

    How to easily recover deleted records?

    The following solution enables you to recover SQL database records even if you have no full or differential database backups and no transaction log backups.

    To understand how ApexSQL Recover works, I'll explain what happens when table data is deleted. Table data is stored in data pages. When you delete table records, they are not immediately deleted from the data pages, but marked to be overwritten by new records. Such records are no longer shown as existing, but ApexSQL Recover can read them and create an undo script for them. How long will deleted records stay in the MDF file? It depends on many factors; as time passes, it becomes more likely that the records have been overwritten. The more transactions occur after the deletion, the more chances the records will be overwritten and permanently lost. Therefore, it's recommended to create a copy of the database MDF and LDF files immediately (if you cannot take your database offline until the issue is solved) and run ApexSQL Recover on them. Note that a full database backup will not help here, as the records marked for overwriting are not included in the backup.

    First, I'll delete some records from the Person.EmailAddress table in the AdventureWorks database. I can delete these records in SQL Server Management Studio, or execute a script such as:

        DELETE FROM Person.EmailAddress
        WHERE BusinessEntityID BETWEEN 70 AND 80

    Then I'll start ApexSQL Recover and select From DELETE operation in the Recovery tab.

    In the Select the database to recover step, first select the SQL Server instance. If it's not shown in the drop-down list, click the Server icon to the right of the Server drop-down list and browse for the SQL Server instance, or type the instance name manually. Specify the authentication type and select the database in the Database drop-down list.

    In the next step, you're prompted to add additional data sources. As this can be a tricky step, especially for new users, ApexSQL Recover offers help via the Help me decide option.

    The Help me decide option guides you through a series of questions about the database transaction log and advises what files to add. If you know that you have no transaction log backups or detached transaction logs, or the online transaction log file has been truncated after the data was deleted, select No additional transaction logs are available. If you know that you have transaction log backups that contain the delete transactions you want to recover, click Add transaction logs. The online transaction log is listed and selected automatically. Click Add to add transaction log backups. It would be best if you have a full transaction log chain, as explained above.

    The next step for this option is to specify the time range. Selecting a small time range around the time of deletion will create the recovery script just for the accidentally deleted records. A wide time range might script the records deleted on purpose, and you don't want that. If needed, you can check the generated script and manually remove such records.

    After that, for all data source options, the next step is to select the tables. Be careful here: if you deleted some data from other tables on purpose and don't want to recover it, don't select all tables, as ApexSQL Recover will create the INSERT script for them too.

    The next step offers two options: to create a recovery script that will insert the deleted records back into the Person.EmailAddress table, or to create a new database, create the Person.EmailAddress table in it, and insert the deleted records. I'll select the first one.

    The recovery process is completed and 11 records are found and scripted, as expected.

    To see the script, click View script. ApexSQL Recover has its own script editor, where you can review, modify, and execute the recovery script. The INSERT INTO statements look like:

        INSERT INTO Person.EmailAddress(
          BusinessEntityID, EmailAddressID, EmailAddress, rowguid, ModifiedDate)
        VALUES(
          70, 70,
          N'[email protected]' COLLATE SQL_Latin1_General_CP1_CI_AS,
          'd62c5b4e-c91f-403f-b630-7b7e0fda70ce',
          '20030109 00:00:00.000');

    To execute the script, click Execute in the menu. If you want to check whether the records are really back, execute:

        SELECT * FROM Person.EmailAddress
        WHERE BusinessEntityID BETWEEN 70 AND 80

    As shown, ApexSQL Recover recovers SQL database data after accidental deletes, even without a database backup that contains the deleted data and relevant transaction log backups. ApexSQL Recover reads the deleted data from the database data file, so this method can be used even for databases in the Simple recovery model. Besides recovering SQL database records from a DELETE statement, ApexSQL Recover can help when records are lost due to a DROP TABLE or TRUNCATE statement, as well as repair a corrupted MDF file that cannot be attached to a SQL Server instance. You can find more information about how to recover SQL database lost data and repair a SQL Server database at the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Should Cloudflare's MX record point to CNAME or A records?

    - by user7787
    The official Cloudflare support page says: https://support.cloudflare.com/hc/en-us/articles/200168876-My-email-or-mail-stopped-working-What-should-I-do- But traditionally an MX record should not point to a CNAME: http://www.exchangepedia.com/blog/2006/12/should-mx-record-point-to-cname-records.html However, Cloudflare has a service called "CNAME Flattening"; is that a reason it would be acceptable to point MX records at a CNAME? So should I point my Cloudflare MX record at a CNAME?

    Read the article

  • What is the difference between a constructor and a procedure in Delphi records?

    - by HMcG
    Is there a difference in behavior between a constructor call and a procedure call in Delphi records? I have a D2010 code sample I want to convert to D2009 (which I am using). The sample uses a parameterless constructor, which is not permitted in Delphi 2009. If I substitute a simple parameterless procedure call, is there any functional difference for records? I.e.

        TVector = record
        private
          FImpl: IVector;
        public
          constructor Create; // not allowed in D2009
        end;

    becomes

        TVector = record
        private
          FImpl: IVector;
        public
          procedure Create; // so change to procedure
        end;

    As far as I can see this should work, but I may be missing something.

    Read the article

  • Grouping records from while loop | PHP

    - by Wayne
    I'm trying to group records by their priority levels, e.g.

        --- Priority: High ---
        Records...
        --- Priority: Medium ---
        Records...
        --- Priority: Low ---
        Records...

    Something like that; how do I do that in PHP? The while loop fetches records ordered by the priority column, which holds an int value (high = 3, medium = 2, low = 1), e.g. WHERE priority = '1'. The label "Priority: [priority level]" has to be printed above each group of records according to their level.
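
    This is the classic control-break pattern: fetch the rows ordered by priority and print a header whenever the priority value changes. The question asks for PHP, but the pattern carries over directly; here is the idea sketched in Python with made-up row data:

        rows = [  # pretend these came back from "... ORDER BY priority DESC"
            {"priority": 3, "title": "Server down"},
            {"priority": 3, "title": "Data loss"},
            {"priority": 2, "title": "Slow page"},
            {"priority": 1, "title": "Typo on homepage"},
        ]
        labels = {3: "High", 2: "Medium", 1: "Low"}

        current = None
        for row in rows:
            if row["priority"] != current:          # group boundary reached
                current = row["priority"]
                print(f"--- Priority: {labels[current]} ---")
            print(row["title"])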

    Read the article
