Search Results

Search found 49554 results on 1983 pages for 'database users'.

Page 111/1983 | < Previous Page | 107 108 109 110 111 112 113 114 115 116 117 118  | Next Page >

  • What Pattern will solve this - fetching dependent record from database

    - by tunmise fasipe
    I have these classes:

        class Match {
            int MatchID;
            int TeamID;   // used to reference Team
            // ... other fields
        }

    Note: a Match actually has 2 teams, which means 2 TeamIDs.

        class Team {
            int TeamID;
            string TeamName;
        }

    In my view I need to display a List<Match> showing the TeamName, so I added another field:

        class Match {
            int MatchID;
            int TeamID;   // used to reference Team
            // ... other fields
            string TeamName;
        }

    I can now do:

        Match m = getMatch(id);
        m.TeamName = getTeamName(m.TeamId); // get name from database

    But for a List<Match>, getTeamName(TeamId) will go to the database to fetch the TeamName for each TeamID. For a page of 10 matches, that could be (10 x 2 teams) = 20 trips to the database. To avoid this, I had the idea of loading everything once, storing it in memory, and only looking up the TeamName in memory. That made me rethink: what if there are 5,000 or more records? What pattern is used to solve this, and how?
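
    One pattern that addresses this is batch (eager) loading combined with an in-memory lookup: fetch all the TeamNames needed for the current page in a single query, then assign them from a dictionary. A minimal sketch, assuming ADO.NET against SQL Server and a hypothetical Teams table; connection handling is simplified:

        // Sketch: one round trip for the whole page instead of one per TeamID.
        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Linq;

        static class TeamNameLoader
        {
            public static Dictionary<int, string> Load(SqlConnection conn, IEnumerable<int> teamIds)
            {
                var ids = teamIds.Distinct().ToList();
                var names = new Dictionary<int, string>();
                if (ids.Count == 0) return names;

                // Build "@p0, @p1, ..." so every id is passed as a parameter.
                string placeholders = string.Join(", ", ids.Select((id, i) => "@p" + i));
                using (var cmd = new SqlCommand(
                    "SELECT TeamID, TeamName FROM Teams WHERE TeamID IN (" + placeholders + ")", conn))
                {
                    for (int i = 0; i < ids.Count; i++)
                        cmd.Parameters.AddWithValue("@p" + i, ids[i]);
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            names[reader.GetInt32(0)] = reader.GetString(1);
                }
                return names;
            }
        }

        // Usage for a page of matches (include both team ids per match in practice):
        //   var lookup = TeamNameLoader.Load(conn, matches.Select(m => m.TeamID));
        //   foreach (var m in matches) m.TeamName = lookup[m.TeamID];

    For 5,000+ rows the same idea still holds: load only the teams referenced by the page being rendered, or cache the (small) Team table once, rather than loading the entire data set.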

    Read the article

  • Retrieving database column using JSON [migrated]

    - by arokia
    I have a database consisting of 4 columns (id, symbol, name, contractnumber). All 4 columns with their data are being displayed on the user interface using JSON. There is a function which is responsible for adding a new column to the database, e.g. (countrycode). The column is added successfully to the database BUT I am not able to show the newly added column in the user interface. Below is my code that displays the columns. Can you help me?

    table.php

        $(document).ready(function () {
            // prepare the data
            var theme = getDemoTheme();
            var source = {
                datatype: "json",
                datafields: [
                    { name: 'id' },
                    { name: 'symbol' },
                    { name: 'name' },
                    { name: 'contractnumber' }
                ],
                url: 'data.php',
                filter: function() {
                    // update the grid and send a request to the server.
                    $("#jqxgrid").jqxGrid('updatebounddata', 'filter');
                },
                cache: false
            };
            var dataAdapter = new $.jqx.dataAdapter(source);
            // initialize jqxGrid
            $("#jqxgrid").jqxGrid({
                source: dataAdapter,
                width: 670,
                theme: theme,
                showfilterrow: true,
                filterable: true,
                columns: [
                    { text: 'id', datafield: 'id', width: 200 },
                    { text: 'symbol', datafield: 'symbol', width: 200 },
                    { text: 'name', datafield: 'name', width: 100 },
                    { text: 'contractnumber', filtertype: 'list', datafield: 'contractnumber' }
                ]
            });
        });

    data.php

        <?php
        # Include the db.php file
        include('db.php');
        $query = "SELECT * FROM pricelist";
        $result = mysql_query($query) or die("SQL Error 1: " . mysql_error());
        $pricelist = array();
        // get data and store in a json array
        while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
            $pricelist[] = array(
                'id' => $row['id'],
                'symbol' => $row['symbol'],
                'name' => $row['name'],
                'contractnumber' => $row['contractnumber']
            );
        }
        echo json_encode($pricelist);
        ?>

    Read the article

  • What is a Relational Database Management System (RDBMS)?

    A Relational Database Management System (RDBMS) can also be called a traditional database that uses Structured Query Language (SQL) to provide access to stored data while ensuring the integrity of the data. The data is stored in a collection of tables defined by the relationships between data items. In addition, data is permitted to be joined in new relationships. Traditional databases primarily process data through transactions, called transaction processing. Transaction processing is the methodology of grouping related business operations based on predefined business events. An example of this can be seen when a person attempts to purchase an item from an online e-tailer. The business must execute specific operations for the related business event. In this case, a business must store the following information: Customer Info, Order Info, Order Item Info, Customer Payment Data, Payment Results, and Current Order Status.

    Example: Pseudo SQL operations needed for processing an online e-tailer sale:

        Insert Customer into Customers
        Insert New Order into Orders
        Insert Each New Order Item into OrderItems
        Insert Customer Payment Info into PaymentInfo
        Insert Payment Processing Result into PaymentDetails
        Update Customer for Current Order Status

    Common Relational Database Management Systems:

    - Microsoft SQL Server
    - Microsoft Access
    - Oracle
    - MySQL
    - DB2

    It is important to note that no current RDBMS has fully implemented all of the relational principles.

    Common RDBMS traits:

    - Volatile data
    - Supports transaction processing
    - Optimized for updates and simple queries
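
    Since all of the operations above belong to a single business event, a transactional system executes them as one atomic unit: either every insert and update commits, or none of them do. A minimal ADO.NET sketch of the idea (table and column names here are illustrative, not from the article):

        // Sketch: the whole sale is committed or rolled back as one transaction.
        using System.Data.SqlClient;

        static class SaleRecorder
        {
            public static void RecordSale(string connectionString, int customerId, decimal total)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    using (SqlTransaction tx = conn.BeginTransaction())
                    {
                        try
                        {
                            Exec(conn, tx, "INSERT INTO Orders (CustomerID, Total) VALUES (@c, @v)", customerId, total);
                            Exec(conn, tx, "INSERT INTO PaymentInfo (CustomerID, Amount) VALUES (@c, @v)", customerId, total);
                            // ... order items, payment result, order status update ...
                            tx.Commit();   // the business event is recorded as a whole
                        }
                        catch
                        {
                            tx.Rollback(); // no partial order is left behind
                            throw;
                        }
                    }
                }
            }

            private static void Exec(SqlConnection conn, SqlTransaction tx, string sql, int customerId, decimal value)
            {
                using (var cmd = new SqlCommand(sql, conn, tx))
                {
                    cmd.Parameters.AddWithValue("@c", customerId);
                    cmd.Parameters.AddWithValue("@v", value);
                    cmd.ExecuteNonQuery();
                }
            }
        }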

    Read the article

  • Project Showcase: SaaS Web Apps Hits a Home Run with New SCMS Database

    - by Webgui
    We love seeing projects from start to finish, and we’re happy to share the latest example with you.

    Who: SaaS Web Apps – they use Software as a Service to create web applications that look and feel like desktop applications.

    What: SaaS Web Apps needed to build a Sports Contract Management System (SCMS) for one of its customers, Premier Stinson Sports.

    Why: The SCMS database is used for collecting, analyzing and recording college coach and athletic directors’ employment and contract data.

    The Challenge: Premier Stinson Sports works with a number of partners, each with its own needs and unique requirements. For example, USA Today uses the system to provide cutting edge news analysis while The National Sports Law Institute of Marquette University Law School uses it for the latest sports contract data and student analysis. In addition, the system needed to be secure due to the sensitivity of the data; it was essential that the user security and permissions be easily configurable. As always, performance was a key factor, especially with the intense reporting and analytical capabilities for this project. Because of this, most of the processing had to be done on a dedicated server, but the project called for the richness and responsiveness of a desktop application.

    The Solution: To execute the project, SaaS Web Apps used ASP.NET-based Visual WebGui from Gizmox, combined with SQL Server 2008 and SQL Reporting Services. This combination resulted in a quick deployment for SaaS Web Apps’ customers.

    The Result: The completed project gave each partner the scalability and availability of a web application with the performance and security of a desktop application. As an example, USA Today pulls data from this database to give readers the latest sports stats – Salary analysis of 2010 Football Bowl Subdivision Coaches. And here’s a screenshot of the database itself. Great work, SaaS Web Apps!

    Read the article

  • EmblaCom Oy Maximizes Database Availability and Reduces Costs with MySQL Cluster

    - by Bertrand Matthelié
    Headquartered in Finland, EmblaCom Oy provides turnkey and cloud-hosted voice solutions to mobile operators around the globe. Since launching the original mobile private branch exchange (PBX) in 1998, the company has focused on helping its partners provide efficient voice communications to their key business customers. The company’s voice solutions are used by millions of subscribers, worldwide. EmblaCom Oy needed to replace several database engines with a standardized, scalable, development-friendly database solution to maximize availability and cut costs. The company chose MySQL Cluster Carrier Grade Edition, which has maximized accessibility to EmblaCom’s services for its clients and their hundreds of thousands of subscribers. The initiative has also reduced, by half, the cost of the database solution installation for customers, as well as lowered maintenance and customer service costs. Read the entire case study here.

    Read the article

  • How to export SQL Server data from corrupted database (with disk write error)

    - by damitamit
    IT realised there was a disk write error on our production SQL Server 2005, which was causing the backups to fail. By the time they had realised this, the nightly backup was old, so we were not able to just restore the backup on another server. The database is still running and being used constantly. However DBCC CHECKDB fails. The SQL Server backup task also fails, Copy Database fails, and the Export Data Wizard fails. However it seems all the data can be read from the tables (i.e. using bcp etc). Another observation I have made is that the transaction log is nearly double the size of the database. (Does that mean all the changes aren't being written to the MDF?) What would be the best plan of attack to get the database to a state where backups are working and the data is safe?

    - Take the database offline and use the MDF/LDF to somehow create the database on another SQL Server?
    - Export the data from the database using bcp. Create the database (use the Generate Scripts function on the corrupt db to create the schema on the new db) on another SQL Server and use bcp again to import the data.
    - Some other option that is the right course of action in this situation?

    The IT manager says the data is safe because, if the server fails, the data can be restored from the mdf/ldf. I'm not sure, so I insisted that we start exporting the data each night as a failsafe (using bcp for example). IT are also having issues on the hardware side of things, as supposedly the disk error is on a virtualized disk and can't be rebuilt like a normal RAID array (or something like that). Please excuse my use of incorrect terminology and incorrect assumptions about how SQL Server operates. I'm the application developer and have been called in to help (as it seems IT know less about SQL Server than I do). Many thanks, Amit
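
    For the nightly failsafe export, one alternative to scripting bcp is a small .NET job that streams each table into a healthy database with SqlBulkCopy. A rough sketch, assuming the schema has already been created on the target server (e.g. via Generate Scripts) and that the source tables are still readable:

        // Sketch: copy one table's rows from the damaged server to a healthy one.
        using System.Data.SqlClient;

        static class TableCopier
        {
            public static void CopyTable(string sourceConnStr, string targetConnStr, string tableName)
            {
                using (var source = new SqlConnection(sourceConnStr))
                using (var target = new SqlConnection(targetConnStr))
                {
                    source.Open();
                    target.Open();
                    using (var cmd = new SqlCommand("SELECT * FROM " + tableName, source))
                    using (var reader = cmd.ExecuteReader())
                    using (var bulk = new SqlBulkCopy(target))
                    {
                        bulk.DestinationTableName = tableName;
                        bulk.BulkCopyTimeout = 0;     // no timeout; large tables may take a while
                        bulk.WriteToServer(reader);   // streams rows without buffering the whole table
                    }
                }
            }
        }

    Looping this over the table list (with bcp kept as a second, independent path) gives a nightly copy that does not depend on the broken backup chain.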

    Read the article

  • Database cluster... without Master/Slave?

    - by RadiantHex
    Hi there! I'm wondering if it is possible to have a set of SQL database servers to which data is written and have them replicate, avoiding conflicting information. I imagine that a Master/Slave structure would be mandatory, but I would like to know if a system where servers have no hierarchy could support replication. Currently I'm using MySQL, but I would be happy to move to another database if needed. Any ideas? :)

    Read the article

  • Oracle TimesTen In-Memory Database Performance on SPARC T4-2

    - by Brian
    The Oracle TimesTen In-Memory Database is optimized to run on Oracle's SPARC T4 processor platforms running Oracle Solaris 11, providing unsurpassed scalability, performance, upgradability, protection of investment and return on investment. The following results demonstrate the value of combining Oracle TimesTen In-Memory Database with SPARC T4 servers and Oracle Solaris 11.

    On a Mobile Call Processing test, the 2-socket SPARC T4-2 server outperforms:
    - Oracle's SPARC Enterprise M4000 server (4 x 2.66 GHz SPARC64 VII+) by 34%.
    - Oracle's SPARC T3-4 (4 x 1.65 GHz SPARC T3) by 2.7x, or 5.4x per processor.

    Utilizing the TimesTen Performance Throughput Benchmark (TPTBM), the SPARC T4-2 server protects investments with:
    - 2.1x the overall performance of a 4-socket SPARC Enterprise M4000 server in read-only mode and 1.5x the performance in update-only testing. This is 4.2x more performance per processor than the SPARC64 VII+ 2.66 GHz based system.
    - 10x more performance per processor than the SPARC T2+ 1.4 GHz server.
    - 1.6x better performance per processor than the SPARC T3 1.65 GHz based server.

    In replication testing, the two-socket SPARC T4-2 server is over 3x faster than the performance of a four-socket SPARC Enterprise T5440 server in both an asynchronous replication environment and the highly available 2-Safe replication. This testing emphasizes parallel replication between systems.

    Performance Landscape

    Mobile Call Processing Test Performance
        System       Processor                Sockets/Cores/Threads   Tps
        SPARC T4-2   SPARC T4, 2.85 GHz       2 / 16 / 128            218,400
        M4000        SPARC64 VII+, 2.66 GHz   4 / 16 / 32             162,900
        SPARC T3-4   SPARC T3, 1.65 GHz       4 / 64 / 512            80,400

    TimesTen Performance Throughput Benchmark (TPTBM) Read-Only
        System       Processor                Sockets/Cores/Threads   Tps
        SPARC T3-4   SPARC T3, 1.65 GHz       4 / 64 / 512            7.9M
        SPARC T4-2   SPARC T4, 2.85 GHz       2 / 16 / 128            6.5M
        M4000        SPARC64 VII+, 2.66 GHz   4 / 16 / 32             3.1M
        T5440        SPARC T2+, 1.4 GHz       4 / 32 / 256            3.1M

    TimesTen Performance Throughput Benchmark (TPTBM) Update-Only
        System       Processor                Sockets/Cores/Threads   Tps
        SPARC T4-2   SPARC T4, 2.85 GHz       2 / 16 / 128            547,800
        M4000        SPARC64 VII+, 2.66 GHz   4 / 16 / 32             363,800
        SPARC T3-4   SPARC T3, 1.65 GHz       4 / 64 / 512            240,500

    TimesTen Replication Tests
        System        Processor                Sockets/Cores/Threads   Asynchronous   2-Safe
        SPARC T4-2    SPARC T4, 2.85 GHz       2 / 16 / 128            38,024         13,701
        SPARC T5440   SPARC T2+, 1.4 GHz       4 / 32 / 256            11,621         4,615

    Configuration Summary

    Hardware Configurations:

    SPARC T4-2 server
        2 x SPARC T4 processors, 2.85 GHz
        256 GB memory
        1 x 8 Gbs FC Qlogic HBA
        1 x 6 Gbs SAS HBA
        4 x 300 GB internal disks
        Sun Storage F5100 Flash Array (40 x 24 GB flash modules)
        1 x Sun Fire X4275 server configured as COMSTAR head

    SPARC T3-4 server
        4 x SPARC T3 processors, 1.6 GHz
        512 GB memory
        1 x 8 Gbs FC Qlogic HBA
        8 x 146 GB internal disks
        1 x Sun Fire X4275 server configured as COMSTAR head

    SPARC Enterprise M4000 server
        4 x SPARC64 VII+ processors, 2.66 GHz
        128 GB memory
        1 x 8 Gbs FC Qlogic HBA
        1 x 6 Gbs SAS HBA
        2 x 146 GB internal disks
        Sun Storage F5100 Flash Array (40 x 24 GB flash modules)
        1 x Sun Fire X4275 server configured as COMSTAR head

    Software Configuration:
        Oracle Solaris 11 11/11
        Oracle TimesTen 11.2.2.4

    Benchmark Descriptions

    TimesTen Performance Throughput Benchmark (TPTBM) is shipped with TimesTen and measures the total throughput of the system. The workload can test read-only, update-only, delete and insert operations as required.

    Mobile Call Processing is a customer-based workload for processing calls made by mobile phone subscribers. The workload has a mixture of read-only, update, and insert-only transactions. The peak throughput performance is measured from multiple concurrent processes executing the transactions until a peak performance is reached via saturation of the available resources.

    Parallel Replication tests use both asynchronous and 2-Safe replication methods. For asynchronous replication, transactions are processed in batches to maximize the throughput capabilities of the replication server and network. In 2-Safe replication, also known as no data-loss or high availability, transactions are replicated between servers immediately, emphasizing low latency. For both environments, performance is measured in the number of parallel replication servers and the maximum transactions-per-second for all concurrent processes.

    See Also
    - SPARC T4-2 Server (oracle.com, OTN)
    - Oracle TimesTen In-Memory Database (oracle.com, OTN)
    - Oracle Solaris (oracle.com, OTN)
    - Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN)

    Disclosure Statement
    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 1 October 2012.

    Read the article

  • How do I rename the Tfs_xxxConfiguration database for TFS 2010?

    - by Jaxidian
    When we installed TFS 2010, we made some poor decisions in naming our databases. I have figured out how to rename everything except for the Tfs_xxxConfiguration database. How can I get this renamed? I can only find a read-only display of the connection string to it, and I cannot find where to alter that connection string either directly or indirectly. I'd really prefer not to have to rebuild it from scratch... again... Thanks!

    Read the article

  • how many tables can an MS SQL database hold?

    - by Peter Turner
    I've run into this cryptic statement for SQL Server: "Files Per Database: 32,767". What does that mean exactly? Is there a maximum number of tables for a given version of SQL Server? We try to support SQL Server post 2005, 32-bit and 64-bit. So if anyone has a handy dandy table they use to figure out how many tables they can have per DB for Microsoft SQL Servers, I'd heartily appreciate seeing it.
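
    For what it's worth, the 32,767 figure is the documented limit on the number of files per database, not tables; tables only count against the much larger total objects-per-database limit. A quick way to see how many user tables a database currently holds (SQL Server 2005 and later; the connection string is a placeholder):

        // Sketch: count the user tables in the current database.
        using System.Data.SqlClient;

        static class TableCounter
        {
            public static int CountUserTables(string connectionString)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    // sys.tables lists user tables and exists in SQL Server 2005+.
                    using (var cmd = new SqlCommand("SELECT COUNT(*) FROM sys.tables", conn))
                        return (int)cmd.ExecuteScalar();
                }
            }
        }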

    Read the article

  • How can I synchronize one set of data with another?

    - by RenderIn
    I have an old database and a new database. The old records were converted to the new database recently. All our old applications continue to point to the old database, but the new applications point to the new database. Currently the old database is the only one being updated, so throughout the day the new database becomes out of sync. It is acceptable for the new database to be out of sync for a day, so until all our applications are pointed to the new database I just need to write a nightly cron job that will bring it up to date. I do not want to purge the new database and run the complete conversion script each night, as that would reduce uptime and would create a mess in our auditing of that table. I'm thinking about selecting all the data from the old database, converting it to the new database structure in memory, and then checking for the existence of each record before inserting it in the new database. After that's done, I'd select everything from the new database and check if it exists in the old one, and if not delete it. Is this the simplest way to do this?
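
    A nightly job along these lines boils down to two passes: insert anything from the old database that is missing in the new one, then delete anything in the new one that no longer exists in the old one. A rough C# sketch, assuming both databases are reachable through ADO.NET, that rows share an Id key, and that ConvertRow stands in for the old-to-new structure mapping (all table, column and helper names are hypothetical):

        using System.Collections.Generic;
        using System.Data.SqlClient;

        static class NightlySync
        {
            public static void Run(string oldConnStr, string newConnStr)
            {
                var newIds = new HashSet<int>();
                var oldIds = new HashSet<int>();

                using (var oldConn = new SqlConnection(oldConnStr))
                using (var newConn = new SqlConnection(newConnStr))
                {
                    oldConn.Open();
                    newConn.Open();

                    // 1. Collect the keys already present in the new database.
                    using (var cmd = new SqlCommand("SELECT Id FROM NewTable", newConn))
                    using (var r = cmd.ExecuteReader())
                        while (r.Read()) newIds.Add(r.GetInt32(0));

                    // 2. Walk the old database; insert rows the new database is missing.
                    using (var cmd = new SqlCommand("SELECT Id, LegacyColumn FROM OldTable", oldConn))
                    using (var r = cmd.ExecuteReader())
                    {
                        while (r.Read())
                        {
                            int id = r.GetInt32(0);
                            oldIds.Add(id);
                            if (newIds.Contains(id)) continue;   // already converted
                            using (var ins = new SqlCommand(
                                "INSERT INTO NewTable (Id, NewColumn) VALUES (@id, @val)", newConn))
                            {
                                ins.Parameters.AddWithValue("@id", id);
                                ins.Parameters.AddWithValue("@val", ConvertRow(r.GetString(1)));
                                ins.ExecuteNonQuery();
                            }
                        }
                    }

                    // 3. Remove rows that were deleted from the old database.
                    foreach (int id in newIds)
                    {
                        if (oldIds.Contains(id)) continue;
                        using (var del = new SqlCommand("DELETE FROM NewTable WHERE Id = @id", newConn))
                        {
                            del.Parameters.AddWithValue("@id", id);
                            del.ExecuteNonQuery();
                        }
                    }
                }
            }

            // Placeholder for the old-to-new structure conversion described above.
            private static string ConvertRow(string legacyValue)
            {
                return legacyValue;
            }
        }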

    Read the article

  • Auth-Type := Reject in RADIUS users file matches inner tunnel request but sends Access-Accept

    - by mgorven
    I have WPA2 802.1X EAP authentication set up using FreeRADIUS 2.1.8 on Ubuntu 10.04.4 talking to OpenLDAP, and can successfully authenticate using PEAP/MSCHAPv2, TTLS/MSCHAPv2 and TTLS/PAP (both via the AP and using eapol_test). I am now trying to restrict access to specific SSIDs based on the LDAP groups which the user belongs to. I have configured group membership checking in /etc/freeradius/modules/ldap like so:

        groupname_attribute = cn
        groupmembership_filter = "(|(&(objectClass=posixGroup)(memberUid=%{User-Name}))(&(objectClass=posixGroup)(uniquemember=%{User-Name})))"

    and I have configured extraction of the SSID from Called-Station-Id into Called-Station-SSID based on the Mac Auth wiki page. In /etc/freeradius/eap.conf I have enabled copying attributes from the outer tunnel into the inner tunnel, and usage of the inner tunnel response in the outer tunnel (for both PEAP and TTLS). I had the same behaviour before changing these options, however.

        copy_request_to_tunnel = yes
        use_tunneled_reply = yes

    I'm running eapol_test like this to test the setup:

        eapol_test -c peap-mschapv2.conf -a 172.16.0.16 -s testing123 -N 30:s:01-23-45-67-89-01:Example-EAP

    with the following peap-mschapv2.conf file:

        network={
            ssid="Example-EAP"
            key_mgmt=WPA-EAP
            eap=PEAP
            identity="mgorven"
            anonymous_identity="anonymous"
            password="foobar"
            phase2="autheap=MSCHAPV2"
        }

    With the following in /etc/freeradius/users:

        DEFAULT Ldap-Group == "employees"

    and running freeradius -Xx, I can see that the LDAP group retrieval works, and that the SSID is extracted.

        Debug: [ldap] performing search in dc=example,dc=com, with filter (&(cn=employees)(|(&(objectClass=posixGroup)(memberUid=mgorven))(&(objectClass=posixGroup)(uniquemember=mgorven))))
        Debug: rlm_ldap::ldap_groupcmp: User found in group employees
        ...
        Info: expand: %{7} -> Example-EAP

    Next I try to only allow access to users in the employees group (regardless of SSID), so I put the following in /etc/freeradius/users:

        DEFAULT Ldap-Group == "employees"
        DEFAULT Auth-Type := Reject

    But this immediately rejects the Access-Request in the outer tunnel because the anonymous user is not in the employees group. So I modify it to only match inner tunnel requests like so:

        DEFAULT Ldap-Group == "employees"
        DEFAULT FreeRADIUS-Proxied-To == "127.0.0.1" Auth-Type := Reject, Reply-Message = "User does not belong to any groups which may access this SSID."

    Now users which are in the employees group are authenticated, but so are users which are not in the employees group. I see the reject entry being matched, and the Reply-Message is set, but the client receives an Access-Accept.

        Debug: rlm_ldap::ldap_groupcmp: Group employees not found or user is not a member.
        Info: [files] users: Matched entry DEFAULT at line 209
        Info: ++[files] returns ok
        ...
        Auth: Login OK: [mgorven] (from client test port 0 cli 02-00-00-00-00-01 via TLS tunnel)
        Info: WARNING: Empty section. Using default return values.
        ...
        Info: [peap] Got tunneled reply code 2
            Auth-Type := Reject
            Reply-Message = "User does not belong to any groups which may access this SSID."
        ...
        Info: [peap] Got tunneled reply RADIUS code 2
            Auth-Type := Reject
            Reply-Message = "User does not belong to any groups which may access this SSID."
        ...
        Info: [peap] Tunneled authentication was successful.
        Info: [peap] SUCCESS
        Info: [peap] Saving tunneled attributes for later
        ...
        Sending Access-Accept of id 11 to 172.16.2.44 port 60746
            Reply-Message = "User does not belong to any groups which may access this SSID."
            User-Name = "mgorven"

    and eapol_test reports:

        RADIUS message: code=2 (Access-Accept) identifier=11 length=233
            Attribute 18 (Reply-Message) length=64
                Value: 'User does not belong to any groups which may access this SSID.'
            Attribute 1 (User-Name) length=9
                Value: 'mgorven'
        ...
        SUCCESS

    Why isn't the request being rejected, and is this the right way to implement this?

    Read the article

  • Lockdown users on Windows Server 2012

    - by el.severo
    I set up Active Directory on a server machine with Windows Server 2012 and I'd like to create some users with limitations like Windows SteadyState does in Windows XP (locally). I have already seen the Windows SteadyState Handbook (with Windows Server 2008), but I'd like to know if anyone has tried this before. The limitations are the following:

    1. Prevent locked or roaming user profiles that cannot be found on the computer from logging on
    2. Do not cache copies of locked or roaming user profiles for users who have previously logged on to this computer
    3. Do not allow Windows to compute and store passwords using LAN Manager Hash values
    4. Do not store usernames or passwords used to log on to the Windows Live ID or the domain
    5. Prevent users from creating folders and files on drive C:\
    6. Lock profile to prevent the user from making permanent changes
    7. Remove the Control Panel, Printer and Network Settings from the Classic Start menu
    8. Remove the Favorites icon
    9. Remove the My Network Places icon
    10. Remove the Frequently Used Program list
    11. Remove the Shared documents folder from My Computer
    12. Remove the Control Panel icon
    13. Remove the Set Program Access and Defaults icon
    14. Remove the Network Connection (Connect To) icon
    15. Remove the Printers and Faxes icon
    16. Remove the Run icon
    17. Prevent access to Windows Explorer features: Folder Options, Customize Toolbar, and the Notification Area
    18. Prevent access to the taskbar
    19. Prevent access to the command prompt
    20. Prevent access to the registry editor
    21. Prevent access to the Task Manager
    22. Prevent access to Microsoft Management Console utilities
    23. Prevent users from adding or removing printers
    24. Prevent users from locking the computer
    25. Prevent password changes (also requires the Control Panel icon to be removed)
    26. Disable System Tools and other management programs
    27. Prevent users from saving files to the desktop
    28. Hide A drive
    29. Hide B drive
    30. Hide C drive
    31. Prevent changes to Internet Explorer registry settings
    32. Empty the Temporary Internet Files folder when Internet Explorer is closed
    33. Remove Internet Options
    34. Remove General tab in Internet Options
    35. Remove Security tab in Internet Options
    36. Remove Privacy tab in Internet Options
    37. Remove Content tab in Internet Options
    38. Remove Connections tab in Internet Options
    39. Remove Programs tab in Internet Options
    40. Remove Advanced tab in Internet Options
    41. Set a home page (Internet Explorer)
    42. Restrict the possibility to change the desktop image
    43. Restrict the possibility to change the wallpaper
    44. Restrict USB flash drives

    Any suggestions for this?

    UPDATE: As @Dan suggested, I'd like to specify that this would be applied in an educational scenario where students log in from a computer and I want to add some restrictions to them.

    Read the article

  • Sqlite3 : "Database is locked" error

    - by Miraaj
    Hi all, In my Cocoa application I am maintaining a SQLite db within the resources folder and trying to do some select and delete operations in it, but after some time it starts giving me a 'Database is locked' error. The methods which I am using for the select and delete operations are as follows:

        // method to retrieve data
        if (sqlite3_open([databasePath UTF8String], &database) != SQLITE_OK) {
            sqlite3_close(database);
            NSAssert(0, @"Failed to open database");
        }
        NSLog(@"mailBodyFor:%d andFlag:%d andFlag:%@", UId, x, Ffolder);
        NSMutableArray *recordsToReturn = [[NSMutableArray alloc] initWithCapacity:2];
        NSString *tempMsg;
        const char *sqlStatementNew;
        NSLog(@"before switch");
        switch (x) {
            case 9:
                // tempMsg = [NSString stringWithFormat:@"SELECT * FROM users_messages"];
                tempMsg = [NSString stringWithFormat:@"SELECT message,AttachFileOriName as oriFileName,AttachmentFileName as fileName FROM users_messages WHERE id = (select message_id from users_messages_status where id= '%d')", UId];
                NSLog(@"mail body query - %@", tempMsg);
                break;
            default:
                break;
        }
        sqlStatementNew = [tempMsg cStringUsingEncoding:NSUTF8StringEncoding];
        sqlite3_stmt *compiledStatementNew;
        NSLog(@"before if statement");
        if (sqlite3_prepare_v2(database, sqlStatementNew, -1, &compiledStatementNew, NULL) == SQLITE_OK) {
            NSLog(@"the sql is finalized");
            while (sqlite3_step(compiledStatementNew) == SQLITE_ROW) {
                NSMutableDictionary *recordDict = [[NSMutableDictionary alloc] initWithCapacity:3];
                NSString *message;
                if ((char *)sqlite3_column_text(compiledStatementNew, 0)) {
                    message = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatementNew, 0)];
                } else {
                    message = @"";
                }
                NSLog(@"message - %@", message);
                NSString *oriFileName;
                if ((char *)sqlite3_column_text(compiledStatementNew, 1)) {
                    oriFileName = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatementNew, 1)];
                } else {
                    oriFileName = @"";
                }
                NSLog(@"oriFileName - %@", oriFileName);
                NSString *fileName;
                if ((char *)sqlite3_column_text(compiledStatementNew, 2)) {
                    fileName = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatementNew, 2)];
                } else {
                    fileName = @"";
                }
                NSLog(@"fileName - %@", fileName);
                [recordDict setObject:message forKey:@"message"];
                [recordDict setObject:oriFileName forKey:@"oriFileName"];
                [recordDict setObject:fileName forKey:@"fileName"];
                [recordsToReturn addObject:recordDict];
                [recordDict release];
            }
            sqlite3_finalize(compiledStatementNew);
            sqlite3_close(database);
            NSLog(@"user messages return -%@", recordsToReturn);
            return recordsToReturn;
        } else {
            NSLog(@"Error while creating retrieving mailBodyFor in messaging '%s'", sqlite3_errmsg(database));
            sqlite3_close(database);
        }

        // method to delete data
        if (sqlite3_open([databasePath UTF8String], &database) != SQLITE_OK) {
            sqlite3_close(database);
            NSAssert(0, @"Failed to open database");
        }
        NSString *deleteQuery = [[NSString alloc] initWithFormat:@"delete from users_messages_status where id IN(%@)", ids];
        NSLog(@"users_messages_status msg deleteQuery - %@", deleteQuery);
        sqlite3_stmt *deleteStmnt;
        const char *sql = [deleteQuery cStringUsingEncoding:NSUTF8StringEncoding];
        if (sqlite3_prepare_v2(database, sql, -1, &deleteStmnt, NULL) != SQLITE_OK) {
            NSLog(@"Error while creating delete statement. '%s'", sqlite3_errmsg(database));
        } else {
            NSLog(@"successful deletion from users_messages");
        }
        if (SQLITE_DONE != sqlite3_step(deleteStmnt)) {
            NSLog(@"Error while deleting. '%s'", sqlite3_errmsg(database));
        }
        sqlite3_close(database);

    Things go wrong in this sequence:

    1. Data is retrieved.
    2. The 'Database is locked' error arises on performing the delete operation.
    3. When I retry the 1st step, it now gives the same error.

    Can anyone suggest: am I doing anything wrong or missing some check? Is there any way to unlock the database when it gives the locked error? Thanks, Miraaj

    Read the article

  • Rails : soap4r - Error while running wsdl2ruby.rb

    - by Mathieu
    When I execute

        Mathieu$ /Users/Mathieu/.gem/ruby/1.8/bin/wsdl2ruby.rb path --wsdl https://www.arello.com/webservice/verify.cfc?wsdl --type client --force

    I get

        at depth 0 - 20: unable to get local issuer certificate
        F, [2010-05-06T10:41:11.040288 #35933] FATAL -- app: Detected an exception. Stopping ... SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (OpenSSL::SSL::SSLError)
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient/session.rb:247:in `connect'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient/session.rb:247:in `ssl_connect'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient/session.rb:639:in `connect'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient/timeout.rb:128:in `timeout'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient/session.rb:631:in `connect'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient/session.rb:522:in `query'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient/session.rb:147:in `query'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient.rb:953:in `do_get_block'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient.rb:765:in `do_request'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient.rb:848:in `protect_keep_alive_disconnected'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient.rb:764:in `do_request'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient.rb:833:in `follow_redirect'
        /Users/Mathieu/.gem/ruby/1.8/gems/httpclient-2.1.5.2/lib/httpclient.rb:519:in `get_content'
        /Users/Mathieu/.gem/ruby/1.8/gems/soap4r-1.5.8/lib/wsdl/xmlSchema/importer.rb:73:in `fetch'
        /Users/Mathieu/.gem/ruby/1.8/gems/soap4r-1.5.8/lib/wsdl/xmlSchema/importer.rb:36:in `import'
        /Users/Mathieu/.gem/ruby/1.8/gems/soap4r-1.5.8/lib/wsdl/importer.rb:18:in `import'
        /Users/Mathieu/.gem/ruby/1.8/gems/soap4r-1.5.8/lib/wsdl/soap/wsdl2ruby.rb:206:in `import'
        /Users/Mathieu/.gem/ruby/1.8/gems/soap4r-1.5.8/lib/wsdl/soap/wsdl2ruby.rb:36:in `run'
        /Users/Mathieu/.gem/ruby/1.8/gems/soap4r-1.5.8/bin/wsdl2ruby.rb:46:in `run'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/logger.rb:659:in `start'
        /Users/Mathieu/.gem/ruby/1.8/gems/soap4r-1.5.8/bin/wsdl2ruby.rb:137
        /Users/Mathieu/.gem/ruby/1.8/bin/wsdl2ruby.rb:19:in `load'
        /Users/Mathieu/.gem/ruby/1.8/bin/wsdl2ruby.rb:19
        I, [2010-05-06T10:41:11.040855 #35933] INFO -- app: End of app. (status: -1)

    Read the article

  • How do I get my ubuntu server to listen for database connections?

    - by Bob Flemming
    I am having problems connecting to my database outside of phpMyAdmin. I'm pretty sure this is because my server isn't listening on port 3306. When I type

        sudo netstat -ntlp

    on my OTHER working server I can see the following line:

        tcp        0      0 0.0.0.0:3306    0.0.0.0:*    LISTEN    20445/mysqld

    However, this line does not appear on the server I am having difficulty with. How do I make my server listen for MySQL connections? Here is my my.cnf file:

        #
        # The MySQL database server configuration file.
        #
        # You can copy this to one of:
        # - "/etc/mysql/my.cnf" to set global options,
        # - "~/.my.cnf" to set user-specific options.
        #
        # One can use all long options that the program supports.
        # Run program with --help to get a list of available options and with
        # --print-defaults to see which it would actually understand and use.
        #
        # For explanations see
        # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

        # This will be passed to all mysql clients
        # It has been reported that passwords should be enclosed with ticks/quotes
        # escpecially if they contain "#" chars...
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port        = 3306
        socket      = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram
        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket      = /var/run/mysqld/mysqld.sock
        nice        = 0

        [mysqld]
        #
        # * Basic Settings
        #
        user        = mysql
        pid-file    = /var/run/mysqld/mysqld.pid
        socket      = /var/run/mysqld/mysqld.sock
        port        = 3306
        basedir     = /usr
        datadir     = /var/lib/mysql
        tmpdir      = /tmp
        lc-messages-dir = /usr/share/mysql
        #skip-networking=off
        #skip_networking=off
        #skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #bind-address   = 0.0.0.0
        #
        # * Fine Tuning
        #
        key_buffer          = 64M
        max_allowed_packet  = 64M
        thread_stack        = 650K
        thread_cache_size   = 32
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover      = BACKUP
        #max_connections    = 100
        #table_cache        = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit   = 2M
        query_cache_size    = 32M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        # As of 5.1 you can enable the log at runtime!
        #general_log_file   = /var/log/mysql/mysql.log
        #general_log        = 1
        #
        # Error logging goes to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf.
        #
        # Here you can see queries with especially long duration
        #log_slow_queries   = /var/log/mysql/mysql-slow.log
        #long_query_time    = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id          = 1
        #log_bin            = /var/log/mysql/mysql-bin.log
        expire_logs_days    = 10
        max_binlog_size     = 100M
        #binlog_do_db       = include_database_name
        #binlog_ignore_db   = include_database_name
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet  = 32M

        [mysql]
        #no-auto-rehash     # faster start of mysql but no tab completition

        [isamchk]
        key_buffer          = 32M

        #
        # * IMPORTANT: Additional settings that can override those from this file!
        #   The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    Read the article

  • What is the relation between database books by Ullman et al.?

    - by macias
    A First Course in Database Systems by Jeffrey D. Ullman, Jennifer Widom (Amazon links) Database System Implementation by Hector Garcia-Molina, Jeffrey D. Ullman, Jennifer D. Widom Database Systems: The Complete Book by Hector Garcia-Molina, Jeffrey D. Ullman, Jennifer Widom As far as I know the second one is the second "part" of the first one. But what about the third one -- is it just first+second published in one volume? I would like to buy them, but I don't want to get redundant reading. Thank you in advance for clarification.

    Read the article

  • user creating/saving

    - by Xaver
    I want to write 2 programs: 1) the first program saves all local users to a file; 2) the second loads the file, finds the users that do not exist on the local machine, and creates them. To find all users created on the local machine I use the following code:

        foreach (ManagementObject user in userSearcher.Get())
        {
            if ((bool)user["LocalAccount"])
            {
                string UserName = (string)user["FullName"];
            }
        }

    How can I save the settings of a user by name and then create that user?
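
    One way to round-trip this, as a sketch: write the names collected from WMI to a plain text file, then on the target machine create any name that is missing using System.DirectoryServices and the WinNT provider. The file format, the default password, and the helper names below are assumptions for illustration (the WMI query above only yields the name, not the password or other settings):

        using System;
        using System.DirectoryServices;   // WinNT ADSI provider for local accounts
        using System.IO;
        using System.Linq;

        static class LocalUserSync
        {
            // Program 1: save one user name per line.
            public static void SaveUsers(string path, string[] userNames)
            {
                File.WriteAllLines(path, userNames);
            }

            // Program 2: load the file and create every user that does not exist locally.
            public static void RestoreUsers(string path, string defaultPassword)
            {
                var machine = new DirectoryEntry("WinNT://" + Environment.MachineName + ",computer");
                foreach (string name in File.ReadAllLines(path).Where(n => n.Trim().Length > 0))
                {
                    bool exists = machine.Children.Cast<DirectoryEntry>()
                        .Any(e => e.SchemaClassName == "User" &&
                                  string.Equals(e.Name, name, StringComparison.OrdinalIgnoreCase));
                    if (exists) continue;

                    DirectoryEntry user = machine.Children.Add(name, "user");
                    user.Invoke("SetPassword", new object[] { defaultPassword });
                    user.CommitChanges();
                }
            }
        }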

    Read the article

  • Membership Provider users in different tables

    - by Mike
    I have an existing database with users and administrators in different tables. I am rewriting an existing website in ASP.NET and need to decide: should I merge the two tables into one users table and have one provider, OR leave the tables separated and have two different providers? Administrators need the ability to create, edit and delete users. I am thinking that the membership/profile provider way of editing users, i.e.

        System.Web.Profile.ProfileBase pro = System.Web.Profile.ProfileBase.Create("User1");
        pro.Initialize("User1", true);
        txtEmail.Text = pro["SecondaryEmail"].ToString();

    is the best way to edit users, because the provider handles it. But you cannot use this if you have two separate providers, can you (because they are both looking at different tables)? Or should I make a whole lot of methods to edit the users for the administrators?

    UPDATE: Making a custom membership provider look at both tables is fine, but then what about the profile provider? The profile provider's GetPropertyValues and SetPropertyValues would be operating on the same set of properties for users and admins. Mike

    Read the article

< Previous Page | 107 108 109 110 111 112 113 114 115 116 117 118  | Next Page >