Search Results

Search found 3710 results on 149 pages for 'databases'.

Page 57/149 | < Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >

  • What strategy do you use to sync your code when working from home

    - by Ben Daniel
    At my work I currently have my development environment inside a Virtual Machine. When I need to do work from home I copy my VM and any databases I need onto a laptop-drive-sized external USB drive. After about 10 minutes of copying I put the drive in my pocket and head home, copy the VM and databases back onto my personal computer, and I'm ready to work. I follow the same steps to take the work back with me. So if I count the total amount of time I spend waiting around for files to finish copying in order to take work home and bring it back again, it comes to around 40 minutes! I do have a VPN connection to my work from home (provided the internet is up at both sites) and a decent internet speed (8mbits down/?up), but I find remote desktopping into my work machine laggy enough that I want to work on my VM directly. So, looking at what other options I have or how I could improve my existing approach, I'm interested in what strategy you use or recommend for working at home while keeping your code/environment in sync. EDIT: I'd prefer an option where I don't have to commit my changes into version control before I leave work - as I like to make meaningful, descriptive comments in my commits, committing would take longer than just copying my VM onto a portable drive! lol Also I'd prefer a solution where my dev environment stays in sync too. Having said that, I'm still very interested in your own solutions even if they don't solve my problem exactly as I'd like. :)

    Read the article

  • Variant datatype library for C

    - by Joey Adams
    Is there a decent open-source C library for storing and manipulating dynamically-typed variables (a.k.a. variants)? I'm primarily interested in atomic values (int8, int16, int32, uint, strings, blobs, etc.), while JSON-style arrays and objects as well as custom objects would also be nice. A major case where such a library would be useful is in working with SQL databases. The most obvious feature of such a library would be a single type for all supported values, e.g.:

        struct Variant {
            enum Type type;
            union {
                int8_t  int8_;
                int16_t int16_;
                // ...
            };
        };

    Other features might include converting Variant objects to/from C structures (using a binding table), converting values to/from strings, and integration with an existing database library such as SQLite. Note: I do not believe this question is a duplicate of http://stackoverflow.com/questions/649649/any-library-for-generic-datatypes-in-c , which refers to "queues, trees, maps, lists". What I'm talking about focuses more on making working with SQL databases roughly as smooth as working with them in interpreted languages.

    Read the article

  • open source business intelligence solutions

    - by opensas
    Which open source business intelligence solution would you recommend? All I need is to build some cubes and let the end user play with dimensions, filter data, sort, etc., and once that's done, be able to export it to Excel... I'd like the solution to be as simple and easy on resources as possible, and I'd also like it to be as much open source as possible; I've heard that many of the available solutions have restrictions in their community versions. I'd like to hear your advice and the pros/cons of each alternative, to help me choose the right tool, and it would help if you could point me to a basic demo and tutorial to get started. Thanks a lot. PS: I'm using SQL Server databases; they aren't huge (in general less than a million records) and I don't necessarily have to work on "live" data. PS: some useful links:

    http://en.wikipedia.org/wiki/Business_intelligence_tools#Open_source_free_products
    http://www.manageability.org/blog/stuff/open-source-java-business-intelligence
    http://www.jaspersoft.com/jasperanalysis
    http://community.pentaho.com/projects/bi_platform/
    http://community.pentaho.com/faq/platform_licensing.php
    http://www.eclipse.org/birt/phoenix/
    http://www.spagoworld.org/xwiki/bin/view/SpagoWorld/
    http://docs.google.com/viewer?a=v&q=cache:vhsqMQXwCUkJ:www.ow2.org/xwiki/bin/download/Activities/EuropeLocalChapterWebinars/ELCWebinarOSBI.pdf+open+source+business+intelligence&hl=en&pid=bl&srcid=ADGEESgpJJ2MqaKprJQOF2jX2UXCZQjg_asv8d7EVYtq0Vma-e-tR1tFxS-I0SOW0IhJC5acYc94rkDOrgP1WckCp_vk4qhKqR9y2Klp_u9cL8hlXoKoUpMkpAd5wabu61A4W0y15E5P&sig=AHIEtbRJ5FAI-3YK-qtayPjKkF_CwOgZag

    Read the article

  • JPA - Performance when using multiple entity managers

    - by Nguyen Tuan Linh
    My situation is: the code is not mine, and I have two kinds of database: one is Dad, one is Son. In Dad, I have a table that stores JNDI names. I look up Dad using JNDI, create an entity manager, and retrieve this table. From these retrieved JNDI names, I create multiple entity managers over multiple Son databases. The problem is: Son has thousands of entities. It takes each Son database around 10 minutes to load all entities. If there are 4 Son databases, it will be 40 minutes. My question: is there any way to load all entities and use them for all entity managers? Please look at the code below. For each Son JNDI:

        Map<String, String> puSonProperties = new HashMap<String, String>();
        puSonProperties.put("javax.persistence.jtaDataSource", sonJndi);
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("PUSon", puSonProperties);

    PUSon - all of them use the same persistence unit.

        log.info("Verify entity manager for son: {0} - {1}", sonCode,
            emSon.find(Son_configuration.class, 0) != null ? "ok" : "failed!");

    This is the actual code where the loading of all entities begins. It takes 10 minutes.

    Read the article

  • Is it possible to update an old database from a dbml file? (C#, .NET 4, LINQ, SQL Server)

    - by Emil
    Hi all, I recently began a new job on a very interesting project (C#, .NET 4, LINQ, VS 2010 and SQL Server). And immediately I got a very exciting challenge: I must implement either a new tool or logic that runs when the program starts, or whatever, but what must happen is the following: customers have the previous application and database (full of their specific data). Now a new version is ready and the customer gets the update. In the meantime we made some modifications to the DB (a new table, columns, maybe an old column deleted, or whatever). I'm pretty new to LINQ and also to SQL databases, and my first idea is: I check the application/database versions and implement all the changes step by step, comparing all tables, columns, keys, constraints, etc. (all the new information I have in my dbml, and the old I ask from the existing DB). And I'd do this each time the version changes. But somehow I feel this is NOT a smart solution, so I'm looking for a general solution to this problem. Is there a way to update the customer's DB from the dbml file? Creating a new one is not a problem (CreateDatabase with DataContext), but are there any update/alter database methods? I guess I'm not the only one who has searched for such a solution (I found nothing on the internet - or I searched with the wrong keywords). How did you solve this problem? I'm also looking for an external tool, but first for a solution with C#, LINQ or something similar. For any idea, thank you in advance! Best regards, Emil
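    For illustration, here is a minimal sketch of the version-check-and-migrate idea described above, assuming a hypothetical one-row SchemaVersion table and hand-written migration scripts (LINQ to SQL offers CreateDatabase but no built-in alter or schema-diff, so the DDL steps are plain SQL run through DataContext.ExecuteCommand):

        using System;
        using System.Collections.Generic;
        using System.Data.Linq;
        using System.Linq;

        public static class SchemaUpgrader
        {
            // Hypothetical migration scripts, keyed by the schema version they upgrade FROM.
            private static readonly SortedDictionary<int, string> Migrations =
                new SortedDictionary<int, string>
            {
                { 1, "ALTER TABLE Customer ADD MiddleName NVARCHAR(50) NULL" },
                { 2, "CREATE TABLE AuditLog (Id INT IDENTITY PRIMARY KEY, Message NVARCHAR(MAX))" },
            };

            public static void UpgradeTo(DataContext db, int targetVersion)
            {
                // Assumes a one-row SchemaVersion table that the installer created with version 1.
                int current = db.ExecuteQuery<int>("SELECT Version FROM SchemaVersion").Single();

                foreach (var step in Migrations.Where(m => m.Key >= current && m.Key < targetVersion))
                {
                    db.ExecuteCommand(step.Value);                                        // apply one DDL step
                    db.ExecuteCommand("UPDATE SchemaVersion SET Version = {0}", step.Key + 1);
                }
            }
        }

    Dedicated schema-comparison tools can generate such change scripts automatically; the table names and scripts above are purely illustrative.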

    Read the article

  • New to web development - backend questions

    - by James
    I'm new to web development, although I'm confident in the roadmap for the front-end. I need direction on two things: basic architecture and back-end technologies. For architecture, what do I need to get started? From what I know, it's: get a domain name registered (godaddy?), find a web host, ??? anything else? or start developing the site? I don't think it's that easy; there must be something I'm missing, right? For the back-end technologies, I have application development experience with Java and Python, but how likely is it to find a back-end hosting site that supports these languages rather than PHP? Is PHP a better choice? If I stick with what I know for the back-end, am I sabotaging myself later on? If I need help, how is the market for a Python/Java developer vs. a PHP developer? What do I need to know about databases? I have some basic SQL experience. Do hosting sites have limitations on the type of databases or the bandwidth that I need to worry about? I'm working through some of the common sites: StackOverflow, Sitepoint forums, Google, etc. Are there other resources I should use?

    Read the article

  • While trying to set up Django on Windows: AttributeError: 'Settings' object has no attribute 'DATABASES'

    - by user326370
    I'm following these instructions in order to set up Django on Windows. I have installed Python 2.6, PostgreSQL 8.4, Psycopg 2.0.14 for Python 2.6 and the latest version of Django from SVN. I'm now following these instructions to run a test project (copied from the page linked to above):

        C:\Documents and Settings\John>cd C:\
        C:\>mkdir django
        C:\>cd django
        C:\django>django-admin.py startproject testproject
        C:\django>cd testproject
        C:\django\testproject>python manage.py runserver

    When I run the last line, this is the output:

        Validating models...
        Unhandled exception in thread started by <function inner_run at 0x01ECB930>
        Traceback (most recent call last):
          File "J:\Python26\lib\site-packages\django\core\management\commands\runserver.py", line 48, in inner_run
            self.validate(display_num_errors=True)
          File "J:\Python26\lib\site-packages\django\core\management\base.py", line 249, in validate
            num_errors = get_validation_errors(s, app)
          File "J:\Python26\lib\site-packages\django\core\management\validation.py", line 22, in get_validation_errors
            from django.db import models, connection
          File "J:\Python26\lib\site-packages\django\db\__init__.py", line 14, in <module>
            if not settings.DATABASES:
          File "J:\Python26\lib\site-packages\django\utils\functional.py", line 273, in __getattr__
            return getattr(self._wrapped, name)
        AttributeError: 'Settings' object has no attribute 'DATABASES'

    Did I forget to do something with the database? Any help will be appreciated. Thank you!

    Read the article

  • Are parameters really enough to prevent SQL injections?

    - by Rune Grimstad
    I've been preaching both to my colleagues and here on SO about the goodness of using parameters in SQL queries, especially in .NET applications. I've even gone so far as to promise that they give immunity against SQL injection attacks. But I'm starting to wonder if this really is true. Are there any known SQL injection attacks that will be successful against a parameterized query? Can you, for example, send a string that causes a buffer overflow on the server? There are of course other considerations to make to ensure that a web application is safe (like sanitizing user input and all that stuff), but right now I am thinking of SQL injections. I'm especially interested in attacks against MS SQL Server 2005 and 2008 since they are my primary databases, but all databases are interesting. Edit: To clarify what I mean by parameters and parameterized queries: by using parameters I mean using "variables" instead of building the SQL query in a string. So instead of doing this:

        SELECT * FROM Table WHERE Name = 'a name'

    we do this:

        SELECT * FROM Table WHERE Name = @Name

    and then set the value of the @Name parameter on the query / command object.
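    For reference, a minimal ADO.NET sketch of the parameterized form described above (the connection string and table are illustrative, and the query mirrors the example in the question):

        using System;
        using System.Data.SqlClient;

        static void PrintMatches(string connectionString, string name)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("SELECT * FROM [Table] WHERE Name = @Name", connection))
            {
                // The value travels as typed parameter data, separate from the SQL text,
                // so it cannot terminate the string literal and smuggle in extra syntax.
                command.Parameters.AddWithValue("@Name", name);

                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine(reader[0]);
                    }
                }
            }
        }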

    Read the article

  • Test-First development tool for SQL Server 2005?

    - by Jeff Jones
    For several years I have been using a testing tool called qmTest that allows me to do test-driven database development for some Firebird databases. I write a test for a new feature (table, trigger, stored procedure, etc.) until it fails, then modify the database until the test passes. If necessary, I do more work on the test until it fails again, then modify the database until the test passes. Once the test for the feature is complete and passes 100% of the time, I save it in a suite of other tests for the database. Before moving on to another test or a deployment, I run all the tests as a suite to make sure nothing is broken. Tests can have dependencies on other tests, and the results are recorded and displayed in a browser. Nothing new here, I am sure. Our shop is aiming toward standardizing on MS SQL Server and I want to use the same procedure for developing our databases. Does anyone know of tools that allow or encourage this kind of development? I believe Team System does, but we do not own that at this point, and probably will not for some time. I am not opposed to scripting, but would welcome a more graphical environment. Any suggestions?
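    For illustration, a minimal sketch of how one such database test could look with NUnit and ADO.NET (the connection string, procedure name and expected column are hypothetical); the same red/green loop applies whether the assertion targets a table, trigger or stored procedure:

        using System.Data.SqlClient;
        using NUnit.Framework;

        [TestFixture]
        public class CustomerProcedureTests
        {
            // Hypothetical test database; point this at a disposable development instance.
            private const string ConnectionString =
                "Server=localhost;Database=TestDb;Integrated Security=true";

            [Test]
            public void GetActiveCustomers_ReturnsOnlyActiveRows()
            {
                using (var connection = new SqlConnection(ConnectionString))
                using (var command = new SqlCommand("EXEC dbo.GetActiveCustomers", connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Red until the procedure filters correctly, green afterwards.
                            Assert.IsTrue((bool)reader["IsActive"]);
                        }
                    }
                }
            }
        }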

    Read the article

  • Is it possible to unstick a remote IIS ASP server after an exception hangs the session?

    - by user89691
    I have been coding an app in classic ASP that accesses 2 Access databases. A page I was working on threw an exception, which is normal during development and causes no lasting problems. This time, however, after the exception any attempt to open either of the databases would freeze the session with an infinite script timeout. If I delete the session cookie I am able to access ASP pages again, until I try to open the database again. The database that was open when the exception was thrown is left open: there is an LDB lock file and I can't rename or delete either the LDB or MDB file, though I can download the MDB file with FTP. The 2nd Access database is not open, but any attempt to read it also hangs the session. Accessing HTML pages is fine. The site is hosted with Hostway and they are not interested ("Coding problem = Your problem"), even though it leaves my site dead in the water, I suspect until the next reboot, whenever that might be. Here is the dump from the relevant ASP page that threw the exception:

        Active Server Pages error 'ASP 0115'
        Unexpected error
        /translatestats.asp
        A trappable error (C0000005) occurred in an external object. The script cannot continue running.

        Active Server Pages error 'ASP 0240'
        Script Engine Exception
        /translatestats.asp
        A ScriptEngine threw exception 'C0000005' in 'IActiveScript::Close()' from 'CActiveScriptEngine::FinalRelease()'.

    Is there any way I can unstick the site / force-close the database remotely?

    Read the article

  • MongoDB - proper use of collections?

    - by zmg
    In Mongo, my understanding is that you can have databases and collections. I'm working on a social-type app that will have blogs and comments (among other things) and had previously been using MySQL with pretty heavy partitioning in an attempt to limit possible concurrency issues. With MySQL I've stuffed all my user data into a _user database with several tables to further partition the data (blogs, pages, etc.). My immediate reaction with Mongo would be to create a 'users' database with one collection per user. In this way, user 'zach's blog entries would go into the 'zach' collection, with associated comments and such becoming sub-objects in the same collection. Basically like dynamically creating one table per user in MySQL, but apparently without the complexity and limitations that might impose. Of course, since I haven't really used Mongo before, I'm having trouble gauging the (ahem..) quality of this idea and the potential problems it might cause down the road. I'd like user data to be treated a lot like a user's directory in a *nix environment, where user-created and (mostly) non-shared data gets put into one place (currently with MySQL that would be the appname_users database as mentioned above). Most of the user data will be specific to the user's page(s). Some of the user data which is queried across all site users (searchable user profiles) is currently kept in a separate database/table, and I expect things like this could be put into an appname_system database and be broken up into collections and/or application-specific databases (appname_profiles). Anyway, since the available documentation on this is currently a little thin and my experience is extremely limited, I thought I might find a little guidance from someone with a better working understanding of the system. On the plus side, I'd really already been attempting to treat MySQL as a schema-less document store, and doing this with Mongo seems much more intuitive/sane/rational, so I'm really looking forward to getting started. Thanks, Zach

    Read the article

  • Is there an XML XQuery interface to existing XML files?

    - by xsaero00
    My company is in the education industry and we use XML to store course content. We also store some course-related information (mostly meta-info) in a relational database. Right now we are in the process of switching from our proprietary XML Schema to DocBook 5. Along with the switch we want to move course-related information from the database to XML files. The reason for this is to have all course data in one place and to put it under Subversion. However, we would like to keep the flexibility of the relational database and be able to easily extract specific information about a course from an XML document. XQuery seems to be up to the task, so I was researching databases that support it, but so far I could not find what I needed. What I basically want is to have my XML files in a certain directory structure, and then on top of this I would like to have a system that indexes my files and lets me select anything out of any file using XQuery. This way I can have "my cake and eat it too": I will have an XQuery interface and still keep my files in plain text and versioned. Is there anything out there at least remotely resembling what I want? If you think that what I am asking for is nonsense, please make an alternative suggestion. On a related note: what XML databases (preferably native and open source) do you have experience with, and what would you recommend?

    Read the article

  • MS SQL : Can you help me with this query?

    - by rlb.usa
    I want to run a diagnostic report on our MS SQL 2008 database server. I am looping through all of the databases, and then for each database, I want to look at each table. But, when I go to look at each table (with tbl_cursor), it always picks up the tables in the database 'master'. I think it's because of my tbl_cursor selection:

        SELECT table_name FROM information_schema.tables WHERE table_type = 'base table'

    How do I fix this? Here's the entire code:

        SET NOCOUNT ON
        DECLARE @table_count INT
        DECLARE @db_cursor VARCHAR(100)
        DECLARE database_cursor CURSOR FOR
            SELECT name FROM sys.databases where name<>N'master'
        OPEN database_cursor
        FETCH NEXT FROM database_cursor INTO @db_cursor
        WHILE @@Fetch_status = 0
        BEGIN
            PRINT @db_cursor
            SET @table_count = 0

            DECLARE @table_cursor VARCHAR(100)
            DECLARE tbl_cursor CURSOR FOR
                SELECT table_name FROM information_schema.tables WHERE table_type = 'base table'
            OPEN tbl_cursor
            FETCH NEXT FROM tbl_cursor INTO @table_cursor
            WHILE @@Fetch_status = 0
            BEGIN
                DECLARE @table_cmd NVARCHAR(255)
                SET @table_cmd = N'IF NOT EXISTS( SELECT TOP(1) * FROM ' + @table_cursor +
                    ') PRINT N'' Table ''''' + @table_cursor + ''''' is empty'' '
                --PRINT @table_cmd  --debug
                EXEC sp_executesql @table_cmd
                SET @table_count = @table_count + 1
                FETCH NEXT FROM tbl_cursor INTO @table_cursor
            END
            CLOSE tbl_cursor
            DEALLOCATE tbl_cursor

            PRINT @db_cursor + N' Total Tables : ' + CAST( @table_count as varchar(2) )
            PRINT N''  -- print another blank line
            SET @table_count = 0
            FETCH NEXT FROM database_cursor INTO @db_cursor
        END
        CLOSE database_cursor
        DEALLOCATE database_cursor
        SET NOCOUNT OFF

    Read the article

  • PHP Database connection practice

    - by Phill Pafford
    I have a script that connects to multiple databases (Oracle, MySQL and MSSQL). Each database connection might not be used each time the script runs, but all could be used in a single script execution. My question is: is it better to connect to all the databases once at the beginning of the script even though all the connections might not be used? Or is it better to connect to them as needed - the only catch being that I would need to have the connection call in a loop (so the database connection would be made anew X times in the loop).

    Example Code #1:

        // Connections at the beginning of the script
        $dbh_oracle = connect2db();
        $dbh_mysql  = connect2db();
        $dbh_mssql  = connect2db();

        for ($i=1; $i<=5; $i++) {
            // NOTE: might not use all the connections
            $rs = queryDb($query, $dbh_*); // $dbh can be any of the 3 connections
        }

    Example Code #2:

        // Connections in the loop
        for ($i=1; $i<=5; $i++) {
            // NOTE: Would use all the connections but connecting multiple times
            $dbh_oracle = connect2db();
            $dbh_mysql  = connect2db();
            $dbh_mssql  = connect2db();

            $rs_oracle = queryDb($query, $dbh_oracle);
            $rs_mysql  = queryDb($query, $dbh_mysql);
            $rs_mssql  = queryDb($query, $dbh_mssql);
        }

    Now, I know you could use a persistent connection, but would that be one connection kept open for each database in the loop as well? Like mysql_pconnect(), mssql_pconnect() and the adodb persistent connection method for Oracle. I know that persistent connections can also be resource hogs, and I'm looking for the best performance/practice.
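    A third option worth noting is connecting lazily outside the loop, so each database is opened at most once and only if the loop actually needs it. The question is PHP, but as a language-neutral illustration here is a minimal C# sketch of that connect-on-first-use pattern (the class and connection string are illustrative):

        using System;
        using System.Data.SqlClient;

        class LazyConnections : IDisposable
        {
            // Opened only when first accessed, and at most once per script run.
            private readonly Lazy<SqlConnection> _mssql = new Lazy<SqlConnection>(() =>
            {
                var conn = new SqlConnection("Server=.;Database=Reports;Integrated Security=true");
                conn.Open();
                return conn;
            });

            public SqlConnection Mssql
            {
                get { return _mssql.Value; }
            }

            public void Dispose()
            {
                if (_mssql.IsValueCreated)
                {
                    _mssql.Value.Dispose();
                }
            }
        }

    The same wrapper idea extends to the Oracle and MySQL handles; in PHP it reduces to checking whether the handle is already set before calling connect2db().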

    Read the article

  • SQL Server database change workflow best practices

    - by kubi
    The Background: My group has 4 SQL Server databases: Production, UAT, Test, and Dev. I work in the Dev environment. When the time comes to promote the objects I've been working on (tables, views, functions, stored procs) I make a request of my manager, who promotes to Test. After testing, she submits a request to an Admin who promotes to UAT. After successful user testing, the same Admin promotes to Production. The Problem: The entire process is awkward for a few reasons. Each person must manually track their changes. If I update, add, or remove any objects I need to track them so that my promotion request contains everything I've done. In theory, if I miss something, testing or UAT should catch it, but this isn't certain and it's a waste of the tester's time anyway. Lots of changes I make are iterative and done in a GUI, which means there's no record of what changes I made, only the end result (at least as far as I know). We're in the fairly early stages of building out a data mart, so the majority of the changes made, at least count-wise, are minor things: changing the data type for a column, altering the names of tables as we crystallize what they'll be used for, tweaking functions and stored procs, etc. The Question: People have been doing this kind of work for decades, so I imagine there has got to be a much better way to manage the process. What I would love is if I could run a diff between two databases to see how the structures differ, use that diff to generate a change script, and use that change script as my promotion request. Is this possible? If not, are there any other ways to organize this process? For the record, we're a 100% Microsoft shop, just now updating everything to SQL Server 2008, so any tools available in that package would be fair game.

    Read the article

  • Why isn't the Cache invalidated after table update using the SqlCacheDependency?

    - by Jason
    I have been trying to get SqlCacheDependency working. I think I have everything set up correctly, but when I update the table, the item in the Cache isn't invalidated. Can you look at my code and see if I am missing anything? I enabled the Service Broker for the Sandbox database. I have placed the following code in the Global.asax file. I also restart IIS to make sure it is called.

        void Application_Start(object sender, EventArgs e)
        {
            SqlDependency.Start(ConfigurationManager.ConnectionStrings["SandboxConnectionString"].ConnectionString);
        }

    I have placed this entry in the web.config file:

        <system.web>
          <caching>
            <sqlCacheDependency enabled="true" pollTime="10000">
              <databases>
                <add name="Sandbox" connectionStringName="SandboxConnectionString"/>
              </databases>
            </sqlCacheDependency>
          </caching>
        </system.web>

    I call this code to put the item into the cache:

        protected void CacheDataSetButton_Click(object sender, EventArgs e)
        {
            using (SqlConnection sqlConnection = new SqlConnection(ConfigurationManager.ConnectionStrings["SandboxConnectionString"].ConnectionString))
            {
                using (SqlCommand sqlCommand = new SqlCommand("SELECT PetID, Name, Breed, Age, Sex, Fixed, Microchipped FROM dbo.Pets", sqlConnection))
                {
                    using (SqlDataAdapter sqlDataAdapter = new SqlDataAdapter(sqlCommand))
                    {
                        DataSet petsDataSet = new DataSet();
                        sqlDataAdapter.Fill(petsDataSet, "Pets");

                        SqlCacheDependency petsSqlCacheDependency = new SqlCacheDependency(sqlCommand);

                        Cache.Insert("Pets", petsDataSet, petsSqlCacheDependency,
                            DateTime.Now.AddSeconds(10), Cache.NoSlidingExpiration);
                    }
                }
            }
        }

    Then I bind the GridView with this code:

        protected void BindGridViewButton_Click(object sender, EventArgs e)
        {
            if (Cache["Pets"] != null)
            {
                GridView1.DataSource = Cache["Pets"] as DataSet;
                GridView1.DataBind();
            }
        }

    Between attempts to DataBind the GridView, I change the table's values expecting it to invalidate the Cache["Pets"] item, but it seems to stay in the Cache indefinitely.
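    As a debugging aid, a small sketch (a hypothetical helper reusing the same cache key and dependency as above) that registers a removal callback, so a trace log shows whether the entry is ever removed and for what reason - DependencyChanged would mean the SQL notification fired, Expired that only the 10-second absolute expiration did:

        using System;
        using System.Diagnostics;
        using System.Web.Caching;

        public static class PetsCacheDiagnostics
        {
            public static void Insert(Cache cache, object petsDataSet, SqlCacheDependency dependency)
            {
                cache.Insert("Pets", petsDataSet, dependency,
                    DateTime.Now.AddSeconds(10), Cache.NoSlidingExpiration,
                    CacheItemPriority.Default, OnRemoved);
            }

            private static void OnRemoved(string key, object value, CacheItemRemovedReason reason)
            {
                // Logged whenever ASP.NET evicts the item, with the eviction reason.
                Trace.WriteLine("Cache item '" + key + "' removed: " + reason);
            }
        }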

    Read the article

  • Auto increment with a Unit Of Work

    - by Derick
    Context: I'm building a persistence layer to abstract different types of databases that I'll be needing. On the relational side I have MySQL, Oracle and PostgreSQL. Let's take the following simplified MySQL tables:

        CREATE TABLE Contact (
            ID   varchar(15),
            NAME varchar(30)
        );

        CREATE TABLE Address (
            ID         varchar(15),
            CONTACT_ID varchar(15),
            NAME       varchar(50)
        );

    I use code to generate system-specific alphanumeric unique IDs, fitting 15 chars in this case. Thus, if I insert a Contact record with its Addresses, I have my generated Contact.ID and Address.CONTACT_IDs before committing. I've created a Unit of Work (amongst others) as per Martin Fowler's patterns to add transaction support. I'm using a key-based Identity Map in the UoW to track the changed records in memory. It works like a charm for the scenario above; all pretty standard stuff so far. The question scenario comes in when I have a database that is not under my control and the ID fields are auto-increment (or, in Oracle, sequences). In this case I do not have the DB-generated Contact.ID beforehand, so when I create my Address I do not have a value for Address.CONTACT_ID. The transaction has not been started on the DB session since everything is kept in the Identity Map in memory. Question: What is a good approach to address this (avoiding unnecessary DB round trips)? Some ideas: Retrieve the last ID. I can make a call to the database to retrieve the last ID, like:

        SELECT Auto_increment FROM information_schema.tables WHERE table_name='Contact';

    But this is MySQL-specific, and probably something similar can be done for the other databases. If I do this, I would need to do the first insert, get the ID, and then update the children (Address.CONTACT_IDs) - all in the current transaction context.
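    For illustration, a minimal sketch of that insert-then-backfill idea under the MySQL assumption, using LAST_INSERT_ID() (which is scoped to the current connection, so it avoids the race involved in reading information_schema); the table and column names follow the auto-increment variant of the schema above, and the connection is assumed to be open already:

        using System;
        using MySql.Data.MySqlClient;

        static void InsertContactWithAddress(MySqlConnection conn, string contactName, string addressName)
        {
            using (var tx = conn.BeginTransaction())
            {
                long contactId;
                using (var insertContact = new MySqlCommand(
                    "INSERT INTO Contact (NAME) VALUES (@name)", conn, tx))
                {
                    insertContact.Parameters.AddWithValue("@name", contactName);
                    insertContact.ExecuteNonQuery();
                }

                using (var getId = new MySqlCommand("SELECT LAST_INSERT_ID()", conn, tx))
                {
                    // The generated key is now known, so child rows can be backfilled
                    // before anything is committed.
                    contactId = Convert.ToInt64(getId.ExecuteScalar());
                }

                using (var insertAddress = new MySqlCommand(
                    "INSERT INTO Address (CONTACT_ID, NAME) VALUES (@contactId, @name)", conn, tx))
                {
                    insertAddress.Parameters.AddWithValue("@contactId", contactId);
                    insertAddress.Parameters.AddWithValue("@name", addressName);
                    insertAddress.ExecuteNonQuery();
                }

                tx.Commit();
            }
        }

    The Unit of Work can run the same sequence when it flushes: insert parents first, read back the generated keys, patch them into the pending child records, then insert the children and commit.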

    Read the article

  • Database structure for ecommerce site

    - by imanc
    Hey Guys, I have been tasked with designing an ecommerce solution. The aspect that is causing me the most problems is the database. Currently the site consists of 10+ country-based shops, each with their own database (all residing on the same MySQL instance). For the new site I'd rather all these shop databases be merged into one database, so that all tables (products, orders, customers etc.) have a shop_id field. From a programming perspective this seems to make the most sense, as we won't have to manage data across multiple databases. Currently the entire site generates about 120k orders a year, but it is experiencing fairly heavy growth and we need to design a solution that will scale. In 5 years there may be more than a million orders per year and a database that contains 5 years' order history (archiving may be a solution here). The question is - do we use a single database, or do we keep the database-per-shop structure? I am currently trying to find supporting evidence for either avenue. The company I am designing the solution for prefers the per-shop database structure because they believe it will allow the sites to scale. But my argument is that the shop's database probably won't get so busy over the next few years that it exceeds the capacity of a MySQL database and a "no expenses spared" hardware set-up. I am wondering if anyone has any advice either way? Does anyone have experience with websites / ecommerce sites that have tables containing millions of records? I know there is probably not a clear answer here, but at what stage do we have too many records or too-large table files to have a fast-loading site? Also, if anyone has any advice on sources of information - books, websites, etc. where I can do further research, it would be highly appreciated! Cheers, imanc

    Read the article

  • Performance of Multiple Joins

    - by geeko
    Greetings Overflowers, I need to query against objects with many/complex spatial conditions. In relational databases that translates to many joins (possibly 10+). I'm new to this business and am wondering whether to go with MS SQL Server 2008 R2 or Oracle 11g, or with document-based solutions such as RavenDB, or simply with some spatial database (GIS)... Any thoughts? Regards UPDATE: Thank you all for your answers. Would anybody opt for document/spatial databases? My database would consist of tens of millions to a few billion records. Mostly read-only. Almost no updates, except to correct mistakes in the input. Overnight inserts, and not that frequent. The join tables are predicted beforehand, but the number of self-joins (tables joining themselves multiple times) is not. Small pages of results from such queries are going to be viewed on a highly interactive website, so response time is critical. Any predictions on how this can perform on MS SQL Server 2008 R2 or Oracle 11g? I'm also concerned about boosting performance by adding more servers; which one scales better? How about PostgreSQL?

    Read the article

  • Fastest way to become a MySQL expert?

    - by Kerry
    I have been using MySQL for years, mainly on smaller projects until the last year or so. I'm not sure if it's the nature of the language or my lack of real tutorials that gives me the feeling of being unsure whether what I'm writing is the proper way in terms of optimization and scaling. While self-taught in PHP, I'm very sure of myself and the code I write, and can easily compare it to others' and so on. With MySQL, I'm not sure whether (and in what cases) an INNER JOIN or LEFT JOIN should be used, nor am I aware of the large amount of functionality that it has. While I've written code for databases that handled tens of millions of records, I don't know if it's optimal. I often find that a small tweak will make a query take less than 1/10 of the original time... but how do I know that my current query isn't also slow? I would like to become completely confident in this field, in the ability to optimize databases and make them scale. Use is not a problem - I use it on a daily basis in a number of different ways. So, the question is, what's the path? Reading a book? Website/tutorials? Recommendations?

    Read the article

  • MySQL 5.5.8 Gets Periodic Lag

    - by CYREX
    I am using MySQL 5.5.8 on an Ubuntu system, and every X amount of time it hits a huge lag that lasts a couple of seconds. Then everything goes back to normal until the next lag. The time period varies, but it looks like it happens periodically. I am using InnoDB. It is like hiccups in MySQL. What could be creating this sort of periodic problem? I do not have any cron jobs or processes running whenever the period X happens. The period X could be anywhere between 30 minutes and 2 hours. So, for example, it could happen every 30 minutes for the next 12 hours, or it could happen every 2 hours for the next 8 hours.

        key_buffer_size = 256M
        max_allowed_packet = 1M
        table_cache = 1024
        table_open_cache = 1024
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 4M
        myisam_sort_buffer_size = 32M
        thread_cache_size = 128
        query_cache_size = 128M
        log-slow-queries = slow.log
        long_query_time = 5
        log-queries-not-using-indexes
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 4
        max_connections = 512
        #innodb_data_file_path = ibdata1:10M:autoextend
        #innodb_log_group_home_dir = /usr/local/mysql/data
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        innodb_buffer_pool_size = 1G
        #innodb_additional_mem_pool_size = 20M
        # Set .._log_file_size to 25 % of buffer pool size
        #innodb_log_file_size = 64M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 0
        #innodb_lock_wait_timeout = 50

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [myisamchk]
        key_buffer_size = 64M
        sort_buffer_size = 64M
        read_buffer = 2M
        write_buffer = 2M

    There are about 200+ tables divided across 3 databases. The most heavily written one is in InnoDB; the other ones are read more than written. Several of the tables in the InnoDB database have more than 2 million records. The other databases top out at about 400 thousand records and do not change very often. The PC is a Core 2 Duo 8400 with 4GB RAM, 32-bit Ubuntu.

    Read the article

  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application, running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GByte RAM. The database is multi-tenant; each customer has his own file, around 5 to 10 GByte each. I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight. The database is read-only. C# is my favourite language and I want to use ASP.NET MVC. I thought about the following options: use two big SQL servers running SQL Server 2012 to serve the 32 servers with data, with the 32 servers running IIS and providing REST services; denormalize the database and use Redis on each webserver, with BookSleeve as the Redis client; use a combination of SQL Server and Redis; use SQL Server 2012 together with Hadoop; or use Hadoop without SQL Server. What is the best way, for a read-only database, to get the best performance without losing maintainability? Does Map-Reduce make sense at all in such a scenario? The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and the interfaces are old-fashioned and not nice to use from a website, especially when using Ajax. The app uses a relational data model with many tables, but it is possible to write one accelerator table that all queries can be performed against, with all other information from the other tables reachable by a simple key lookup.

    Read the article

  • Set service dependencies after install

    - by Dennis
    I have an application that runs as a Windows service. It stores various settings in a database that are looked up when the service starts. I built the service to support various types of databases (SQL Server, Oracle, MySQL, etc.). Often, end users choose to configure the software to use SQL Server (they can simply modify a config file with the connection string and restart the service). The problem is that when their machine boots up, SQL Server is often started after my service, so my service errors out on startup because it can't connect to the database. I know that I can specify dependencies for my service to help guide the Windows service manager to start the appropriate services before mine. However, I don't know what services to depend upon at install time (when my service is registered), since the user can change databases later on. So my question is: is there a way for the user to manually indicate the service dependencies based on the database that they are using? If not, what is the proper design approach that I should be taking? I've thought about trying to do something like waiting 30 seconds after my service starts up before connecting to the database, but this seems really flaky for various reasons. I've also considered trying to "lazily" connect to the database; the problem is that I need a connection immediately upon startup, since the database contains various pieces of vital information that my service needs when it first starts. Any ideas?
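    One sketch of an alternative to a fixed sleep, assuming the configured database is SQL Server and that the config file also records the corresponding Windows service name: wait for that service to reach Running before the first connection attempt. (The dependency itself can also be adjusted after install with `sc config <MyService> depend= MSSQLSERVER`, though that requires knowing the database service name at configuration time.)

        using System;
        using System.ServiceProcess;

        static class DatabaseWait
        {
            // serviceName is read from the same config file as the connection string,
            // e.g. "MSSQLSERVER" for a default SQL Server instance.
            public static void WaitForDatabaseService(string serviceName, TimeSpan timeout)
            {
                using (var controller = new ServiceController(serviceName))
                {
                    if (controller.Status != ServiceControllerStatus.Running)
                    {
                        // Blocks until the service is running or throws a timeout exception.
                        controller.WaitForStatus(ServiceControllerStatus.Running, timeout);
                    }
                }
            }
        }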

    Read the article

  • WiX major upgrade! Need different behaviors for different components...

    - by Joshua
    Okay! I have finally more closely identified the problem I'm having. In my installer, I was attempting to get a settings file to REMAIN INTACT on a major upgrade. I finally got this to work with the suggestion to set:

        <InstallExecuteSequence>
          <RemoveExistingProducts After="InstallFinalize" />
        </InstallExecuteSequence>

    This is successful in forcing this component to leave the original, not replacing it if it exists:

        <Component Id="Settings" Guid="A3513208-4F12-4496-B609-197812B4A953" NeverOverwrite="yes">
          <File Id="settingsXml" KeyPath="yes" ShortName="SETTINGS.XML" Name="Settings.xml" DiskId="1"
                Source="\\fileserver\Release\Pathways\Dependencies\Settings\settings.xml" Vital="yes" />
        </Component>

    HOWEVER! This is a problem! I have another component listed here:

        <Component Id="Database" Guid="1D8756EF-FD6C-49BC-8400-299492E8C65D" KeyPath="yes">
          <File Id="pathwaysMdf" Name="Pathways.mdf" DiskId="1"
                Source="\\fileserver\Shared\Databases\Pathways\SystemDBs\Pathways.mdf" Vital="yes"/>
          <File Id="pathwaysLdf" Name="Pathways_log.ldf" DiskId="1"
                Source="\\fileserver\Shared\Databases\Pathways\SystemDBs\Pathways.ldf" Vital="yes"/>
        </Component>

    And this component MUST BE REPLACED on a major upgrade. So far I can only accomplish this by setting:

        <RemoveExistingProducts After="InstallInitialize" />

    THIS BREAKS THE FIRST FUNCTIONALITY I NEED WITH THE SETTINGS FILE. HOW CAN I DO BOTH?!

    Read the article

  • Convert charset in mysql query

    - by Yousf
    Hi, I have a question about converting charsets from inside a MySQL query. I have 2 databases: one for the website (Joomla), the other for the forum (IPB). I am querying from inside Joomla, which by default does "SET NAMES UTF8". I want to query a table inside the forum database, a table called "ibf_topics". This table has latin1 encoding. I do the following to select anything from the non-UTF8 table:

        // convert connection to handle latin1.
        $query = "SET NAMES latin1";
        $db->setQuery($query);
        $db->query();

        $query = "select id, title from other_database.ibf_topics";
        $db->setQuery($query);
        $db->query();
        // read result into an array.

        // return connection to handle UTF8.
        $query = "SET NAMES UTF8";
        $db->setQuery($query);
        $db->query();

    After that, when I want to use the selected title, I use the following:

        echo iconv("CP1256", "UTF-8", $topic['title']);

    The question is: is there any way to avoid all this hassle? For now, I can't change the forum database to UTF8 and I can't change the Joomla database to latin1 :S

    Read the article

< Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >