Search Results

Search found 17194 results on 688 pages for 'document databases'.


  • New to web development - backend questions

    - by James
    I'm new to web development, although I'm confident about the roadmap for the front end. I need direction on two things: basic architecture and back-end technologies.

    For architecture, what do I need to get started? From what I know, it's:

      - Get a domain name registered (GoDaddy?)
      - Find a web host
      - ??? anything else? or start developing the site?

    I don't think it's that easy; there must be something I'm missing, right?

    For the back-end technologies, I have application development experience with Java and Python, but how likely is it to find a host that supports these languages rather than PHP? Is PHP a better choice? If I stick with what I know for the back end, am I sabotaging myself later on? If I need help, how is the market for a Python/Java developer vs. a PHP developer?

    What do I need to know about databases? I have some basic SQL experience. Do hosting sites have limitations on the type of databases or the bandwidth that I need to worry about?

    I'm working through some of the common sites: Stack Overflow, SitePoint forums, Google, etc. Are there other resources I should use?


  • While trying to set up Django on Windows: AttributeError: 'Settings' object has no attribute 'DATABASES'

    - by user326370
    I'm following these instructions in order to set up Django on Windows. I have installed Python 2.6, PostgreSQL 8.4, Psycopg 2.0.14 for Python 2.6, and the latest version of Django from SVN. I'm now following these instructions to run a test project (copied from the page linked to above):

      C:\Documents and Settings\John>cd C:\
      C:\>mkdir django
      C:\>cd django
      C:\django>django-admin.py startproject testproject
      C:\django>cd testproject
      C:\django\testproject>python manage.py runserver

    When I run the last line, this is the output:

      Validating models...
      Unhandled exception in thread started by <function inner_run at 0x01ECB930>
      Traceback (most recent call last):
        File "J:\Python26\lib\site-packages\django\core\management\commands\runserver.py", line 48, in inner_run
          self.validate(display_num_errors=True)
        File "J:\Python26\lib\site-packages\django\core\management\base.py", line 249, in validate
          num_errors = get_validation_errors(s, app)
        File "J:\Python26\lib\site-packages\django\core\management\validation.py", line 22, in get_validation_errors
          from django.db import models, connection
        File "J:\Python26\lib\site-packages\django\db\__init__.py", line 14, in <module>
          if not settings.DATABASES:
        File "J:\Python26\lib\site-packages\django\utils\functional.py", line 273, in __getattr__
          return getattr(self._wrapped, name)
      AttributeError: 'Settings' object has no attribute 'DATABASES'

    Did I forget to do something with the database? Any help will be appreciated. Thank you!
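
    The traceback points at the settings module: Django builds from trunk in that era expect a DATABASES dictionary in settings.py, while older instructions still describe the singular DATABASE_ENGINE-style settings. A minimal sketch of the newer format (name and credentials here are placeholders):

      # settings.py -- the DATABASES dict that django.db looks for;
      # NAME/USER/PASSWORD are placeholders for your own values.
      DATABASES = {
          'default': {
              'ENGINE': 'django.db.backends.postgresql_psycopg2',
              'NAME': 'testproject',
              'USER': 'postgres',
              'PASSWORD': 'secret',
              'HOST': 'localhost',
              'PORT': '5432',
          }
      }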


  • Are parameters really enough to prevent SQL injections?

    - by Rune Grimstad
    I've been preaching both to my colleagues and here on SO about the goodness of using parameters in SQL queries, especially in .NET applications. I've even gone so far as to promise that they give immunity against SQL injection attacks. But I'm starting to wonder if this really is true. Are there any known SQL injection attacks that will be successful against a parameterized query? Can you, for example, send a string that causes a buffer overflow on the server?

    There are of course other considerations to make to ensure that a web application is safe (like sanitizing user input and all that stuff), but right now I am thinking of SQL injections. I'm especially interested in attacks against SQL Server 2005 and 2008, since they are my primary databases, but all databases are interesting.

    Edit: To clarify what I mean by parameters and parameterized queries: by using parameters I mean using "variables" instead of building the SQL query in a string. So instead of doing this:

      SELECT * FROM Table WHERE Name = 'a name'

    We do this:

      SELECT * FROM Table WHERE Name = @Name

    and then set the value of the @Name parameter on the query / command object.
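
    For reference, a minimal sketch of the same mechanism in Python's DB-API (the analogue of setting @Name on an ADO.NET command; the table and driver are throwaway):

      import sqlite3  # any DB-API driver exposes the same placeholder mechanism

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE t (name TEXT)")
      user_input = "x'; DROP TABLE t; --"
      # The value travels separately from the SQL text, so it is never
      # parsed as SQL -- it is stored as the literal string.
      conn.execute("INSERT INTO t (name) VALUES (?)", (user_input,))
      rows = conn.execute("SELECT name FROM t WHERE name = ?",
                          (user_input,)).fetchall()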


  • Test-First development tool for SQL Server 2005?

    - by Jeff Jones
    For several years I have been using a testing tool called qmTest that allows me to do test-driven database development for some Firebird databases. I write a test for a new feature (table, trigger, stored procedure, etc.) until it fails, then modify the database until the test passes. If necessary, I do more work on the test until it fails again, then modify the database until the test passes. Once the test for the feature is complete and passes 100% of the time, I save it in a suite of other tests for the database. Before moving on to another test or a deployment, I run all the tests as a suite to make sure nothing is broken. Tests can have dependencies on other tests, and the results are recorded and displayed in a browser. Nothing new here, I am sure.

    Our shop is aiming to standardize on MS SQL Server, and I want to use the same procedure for developing our databases. Does anyone know of tools that allow or encourage this kind of development? I believe Team System does, but we do not own that at this point, and probably will not for some time. I am not opposed to scripting, but would welcome a more graphical environment. Any suggestions?
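
    As a sketch of how little machinery the red/green loop described above actually needs against SQL Server, a schema check can be expressed with an ordinary unit-test runner and a database driver (Python's unittest and pyodbc here; the server, database and table names are placeholders):

      import unittest
      import pyodbc

      class SchemaTests(unittest.TestCase):
          def setUp(self):
              self.conn = pyodbc.connect(
                  "DRIVER={SQL Server};SERVER=localhost;DATABASE=test_db;"
                  "Trusted_Connection=yes")

          def test_customer_table_exists(self):
              # Red/green: fails until the Customer table is added.
              count = self.conn.cursor().execute(
                  "SELECT COUNT(*) FROM information_schema.tables "
                  "WHERE table_name = 'Customer'").fetchone()[0]
              self.assertEqual(count, 1)

          def tearDown(self):
              self.conn.close()

      if __name__ == "__main__":
          unittest.main()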


  • Is it possible to unstick a remote IIS ASP server after an exception hangs the session?

    - by user89691
    I have been coding an app in classic ASP that accesses 2 Access databases. I had a page I was working on throw an exception, which is normal during development and causes no lasting problems. This time, however, after the exception any attempt to open either of the databases would freeze the session with an infinite script timeout. If I delete the session cookie I am able to access ASP pages again, until I try to open the database again.

    The database that was open when the exception was thrown is left open. There is an LDB lock file, and I can't rename or delete either the LDB or MDB file, though I can download the MDB file with FTP. The second Access database is not open, but any attempt to read it also hangs the session. Accessing HTML pages is fine. The site is hosted with Hostway and they are not interested ("Coding problem = your problem"), even though it leaves my site dead in the water, I suspect until the next reboot, whenever that might be.

    Here is the dump from the relevant ASP page that threw the exception:

      Active Server Pages error 'ASP 0115'
      Unexpected error
      /translatestats.asp
      A trappable error (C0000005) occurred in an external object. The script cannot continue running.

      Active Server Pages error 'ASP 0240'
      Script Engine Exception
      /translatestats.asp
      A ScriptEngine threw exception 'C0000005' in 'IActiveScript::Close()' from 'CActiveScriptEngine::FinalRelease()'.

    Is there any way I can unstick the site / force close the database remotely?


  • MongoDB - proper use of collections?

    - by zmg
    In Mongo, my understanding is that you can have databases and collections. I'm working on a social-type app that will have blogs and comments (among other things) and had previously been using MySQL with pretty heavy partitioning in an attempt to limit possible concurrency issues. With MySQL I've stuffed all my user data into a _user database with several tables to further partition the data (blogs, pages, etc).

    My immediate reaction with Mongo would be to create a 'users' database with one collection per user. In this way, user 'zach' blog entries would go into the 'zach' collection, with associated comments and such becoming sub-objects in the same collection. Basically like dynamically creating one table per user in MySQL, but apparently without the complexity and limitations that might impose. Of course, since I haven't really used Mongo before, I'm having trouble gauging the (ahem..) quality of this idea and the potential problems it might cause down the road.

    I'd like user data to be treated a lot like a users directory in a *nix environment, where user-created/non-shared (mostly) data gets put into one place (currently with MySQL that would be the appname_users database as mentioned above). Most of the user data will be specific to the user's page(s). Some of the user data which is queried across all site users (searchable user profiles) is currently kept in a separate database/table, and I expect things like this could be put into an appname_system database and be broken up into collections and/or application-specific databases (appname_profiles).

    Anyway, since the available documentation on this is currently a little thin and my experience is extremely limited, I thought I might find a little guidance from someone with a better working understanding of the system. On the plus side, I'd really already been attempting to treat MySQL as a schema-less document store, and doing this with Mongo seems much more intuitive/sane/rational, so I'm really looking forward to getting started. Thanks, Zach
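
    For what it's worth, the embedded-comments part of the idea looks like this in PyMongo (a sketch; the database, collection and field names are illustrative, and the commented-out line shows the common one-shared-collection alternative):

      from pymongo import MongoClient

      db = MongoClient()["users"]
      # One document per blog entry, comments embedded as sub-objects.
      db["zach"].insert_one({
          "type": "blog",
          "title": "First post",
          "comments": [
              {"author": "anna", "text": "Nice!"},
          ],
      })
      # The usual alternative: one shared collection, user as a field.
      # db["blogs"].insert_one({"user": "zach", "title": "First post"})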


  • Is there an XML XQuery interface to existing XML files?

    - by xsaero00
    My company is in the education industry and we use XML to store course content. We also store some course-related information (mostly meta-info) in a relational database. Right now we are in the process of switching from our proprietary XML schema to DocBook 5. Along with the switch, we want to move course-related information from the database into the XML files. The reason for this is to have all course data in one place and to put it under Subversion. However, we would like to keep the flexibility of the relational database and be able to easily extract specific information about a course from an XML document.

    XQuery seems to be up to the task, so I was researching databases that support it, but so far I could not find what I needed. What I basically want is to have my XML files in a certain directory structure, and then on top of this I would like to have a system that would index my files and let me select anything out of any file using XQuery. This way I can have my cake and eat it too: I will have an XQuery interface and still keep my files in plain text and versioned.

    Is there anything out there at least remotely resembling what I want? If you think that what I am asking for is nonsense, please make an alternative suggestion. On a related note: what XML databases (preferably native and open source) do you have experience with, and what would you recommend?
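
    Not XQuery, but as a rough sketch of the query-the-checkout idea, plain XPath over the working copy already covers simple extractions (assuming Python with lxml; the paths and element names are illustrative):

      import glob
      from lxml import etree

      # Walk every course file under the Subversion working copy and
      # pull one piece of data out of each: DocBook chapter titles here.
      for path in glob.glob("courses/**/*.xml", recursive=True):
          doc = etree.parse(path)
          for title in doc.xpath(
                  "//*[local-name()='chapter']/*[local-name()='title']"):
              print(path, title.text)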


  • MS SQL: Can you help me with this query?

    - by rlb.usa
    I want to run a diagnostic report on our MS SQL 2008 database server. I am looping through all of the databases, and then for each database I want to look at each table. But when I go to look at each table (with tbl_cursor), it always picks up the tables in the database 'master'. I think it's because of my tbl_cursor selection:

      SELECT table_name FROM information_schema.tables WHERE table_type = 'base table'

    How do I fix this? Here's the entire code:

      SET NOCOUNT ON
      DECLARE @table_count INT
      DECLARE @db_cursor VARCHAR(100)
      DECLARE database_cursor CURSOR FOR
          SELECT name FROM sys.databases WHERE name <> N'master'
      OPEN database_cursor
      FETCH NEXT FROM database_cursor INTO @db_cursor
      WHILE @@Fetch_status = 0
      BEGIN
          PRINT @db_cursor
          SET @table_count = 0
          DECLARE @table_cursor VARCHAR(100)
          DECLARE tbl_cursor CURSOR FOR
              SELECT table_name FROM information_schema.tables WHERE table_type = 'base table'
          OPEN tbl_cursor
          FETCH NEXT FROM tbl_cursor INTO @table_cursor
          WHILE @@Fetch_status = 0
          BEGIN
              DECLARE @table_cmd NVARCHAR(255)
              SET @table_cmd = N'IF NOT EXISTS( SELECT TOP(1) * FROM ' + @table_cursor + ') PRINT N'' Table ''''' + @table_cursor + ''''' is empty'' '
              --PRINT @table_cmd --debug
              EXEC sp_executesql @table_cmd
              SET @table_count = @table_count + 1
              FETCH NEXT FROM tbl_cursor INTO @table_cursor
          END
          CLOSE tbl_cursor
          DEALLOCATE tbl_cursor
          PRINT @db_cursor + N' Total Tables : ' + CAST( @table_count as varchar(2) )
          PRINT N'' -- print another blank line
          SET @table_count = 0
          FETCH NEXT FROM database_cursor INTO @db_cursor
      END
      CLOSE database_cursor
      DEALLOCATE database_cursor
      SET NOCOUNT OFF
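
    The inner SELECT always runs in the connection's current database, which is why the cursor never leaves master's context; qualifying information_schema with the looped database name fixes that (in pure T-SQL, the same three-part qualification inside dynamic SQL does the trick). A sketch of the same loop under that fix, assuming pyodbc and a trusted connection:

      import pyodbc  # connection details are placeholders

      conn = pyodbc.connect(
          "DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes")
      cur = conn.cursor()
      dbs = [row[0] for row in cur.execute(
          "SELECT name FROM sys.databases WHERE name <> 'master'").fetchall()]
      for db in dbs:
          # The three-part name [db].information_schema.tables makes the
          # metadata query run against the looped database, not 'master'.
          tables = cur.execute(
              "SELECT table_name FROM [" + db + "].information_schema.tables "
              "WHERE table_type = 'BASE TABLE'").fetchall()
          print(db, "tables:", len(tables))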


  • SQL Server database change workflow best practices

    - by kubi
    The Background

    My group has 4 SQL Server databases:

      - Production
      - UAT
      - Test
      - Dev

    I work in the Dev environment. When the time comes to promote the objects I've been working on (tables, views, functions, stored procs), I make a request of my manager, who promotes to Test. After testing, she submits a request to an Admin, who promotes to UAT. After successful user testing, the same Admin promotes to Production.

    The Problem

    The entire process is awkward, for a few reasons. Each person must manually track their changes. If I update, add, or remove any objects, I need to track them so that my promotion request contains everything I've done. In theory, if I miss something, testing or UAT should catch it, but this isn't certain, and it's a waste of the tester's time anyway. Lots of changes I make are iterative and done in a GUI, which means there's no record of what changes I made, only the end result (at least as far as I know). We're in the fairly early stages of building out a data mart, so the majority of the changes made, at least count-wise, are minor things: changing the data type for a column, altering the names of tables as we crystallize what they'll be used for, tweaking functions and stored procs, etc.

    The Question

    People have been doing this kind of work for decades, so I imagine there has got to be a much better way to manage the process. What I would love is if I could run a diff between two databases to see how the structure differs, use that diff to generate a change script, and use that change script as my promotion request. Is this possible? If not, are there any other ways to organize this process? For the record, we're a 100% Microsoft shop, just now updating everything to SQL Server 2008, so any tools available in that package would be fair game.


  • PHP Database connection practice

    - by Phill Pafford
    I have a script that connects to multiple databases (Oracle, MySQL and MSSQL); each database connection might not be used each time the script runs, but all could be used in a single script execution. My question is: is it better to connect to all the databases once at the beginning of the script, even though all the connections might not be used? Or is it better to connect to them as needed? The only catch is that I would need to have the connection call in a loop (so the database connection would be made anew X times in the loop).

    Example Code #1:

      // Connections at the beginning of the script
      $dbh_oracle = connect2db();
      $dbh_mysql  = connect2db();
      $dbh_mssql  = connect2db();

      for ($i=1; $i<=5; $i++) {
          // NOTE: might not use all the connections
          $rs = queryDb($query,$dbh_*); // $dbh can be any of the 3 connections
      }

    Example Code #2:

      // Connections in the loop
      for ($i=1; $i<=5; $i++) {
          // NOTE: would use all the connections, but connecting multiple times
          $dbh_oracle = connect2db();
          $dbh_mysql  = connect2db();
          $dbh_mssql  = connect2db();

          $rs_oracle = queryDb($query,$dbh_oracle);
          $rs_mysql  = queryDb($query,$dbh_mysql);
          $rs_mssql  = queryDb($query,$dbh_mssql);
      }

    Now, I know you could use a persistent connection, but would that be one connection open for each database in the loop as well? Like mysql_pconnect(), mssql_pconnect() and adodb for Oracle's persistent connection method. I know that persistent connections can also be resource hogs, and I'm looking for best performance/practice.
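
    A third option, in between the two snippets above, is lazy connection caching: open each handle the first time it is needed and reuse it afterwards, so unused databases never cost a connection and the loop never reconnects. A minimal sketch of the pattern (Python, with sqlite3 standing in for the three real drivers):

      # sqlite3 stands in for the Oracle/MySQL/MSSQL drivers; the factory
      # dict is where the real connect calls and credentials would go.
      import sqlite3

      factories = {
          "oracle": lambda: sqlite3.connect(":memory:"),
          "mysql":  lambda: sqlite3.connect(":memory:"),
          "mssql":  lambda: sqlite3.connect(":memory:"),
      }
      open_handles = {}

      def get_conn(name):
          if name not in open_handles:        # connect only on first use
              open_handles[name] = factories[name]()
          return open_handles[name]           # reuse thereafter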


  • Why isn't the Cache invalidated after table update using the SqlCacheDependency?

    - by Jason
    I have been trying to get SqlCacheDependency working. I think I have everything set up correctly, but when I update the table, the item in the Cache isn't invalidated. Can you look at my code and see if I am missing anything?

    I enabled the Service Broker for the Sandbox database. I have placed the following code in the Global.asax file. I also restart IIS to make sure it is called.

      void Application_Start(object sender, EventArgs e)
      {
          SqlDependency.Start(ConfigurationManager.ConnectionStrings["SandboxConnectionString"].ConnectionString);
      }

    I have placed this entry in the web.config file:

      <system.web>
        <caching>
          <sqlCacheDependency enabled="true" pollTime="10000">
            <databases>
              <add name="Sandbox" connectionStringName="SandboxConnectionString"/>
            </databases>
          </sqlCacheDependency>
        </caching>
      </system.web>

    I call this code to put the item into the cache:

      protected void CacheDataSetButton_Click(object sender, EventArgs e)
      {
          using (SqlConnection sqlConnection = new SqlConnection(ConfigurationManager.ConnectionStrings["SandboxConnectionString"].ConnectionString))
          {
              using (SqlCommand sqlCommand = new SqlCommand("SELECT PetID, Name, Breed, Age, Sex, Fixed, Microchipped FROM dbo.Pets", sqlConnection))
              {
                  using (SqlDataAdapter sqlDataAdapter = new SqlDataAdapter(sqlCommand))
                  {
                      DataSet petsDataSet = new DataSet();
                      sqlDataAdapter.Fill(petsDataSet, "Pets");

                      SqlCacheDependency petsSqlCacheDependency = new SqlCacheDependency(sqlCommand);
                      Cache.Insert("Pets", petsDataSet, petsSqlCacheDependency, DateTime.Now.AddSeconds(10), Cache.NoSlidingExpiration);
                  }
              }
          }
      }

    Then I bind the GridView with this code:

      protected void BindGridViewButton_Click(object sender, EventArgs e)
      {
          if (Cache["Pets"] != null)
          {
              GridView1.DataSource = Cache["Pets"] as DataSet;
              GridView1.DataBind();
          }
      }

    Between attempts to DataBind the GridView, I change the table's values, expecting it to invalidate the Cache["Pets"] item, but it seems to stay in the Cache indefinitely.


  • Auto increment with a Unit Of Work

    - by Derick
    Context: I'm building a persistence layer to abstract the different types of databases that I'll be needing. On the relational side I have MySQL, Oracle and PostgreSQL. Let's take the following simplified MySQL tables:

      CREATE TABLE Contact (
          ID varchar(15),
          NAME varchar(30)
      );

      CREATE TABLE Address (
          ID varchar(15),
          CONTACT_ID varchar(15),
          NAME varchar(50)
      );

    I use code to generate system-specific alphanumeric unique IDs, fitting 15 chars in this case. Thus, if I insert a Contact record with its Addresses, I have my generated Contact.ID and Address.CONTACT_IDs before committing. I've created a Unit of Work (amongst others) as per Martin Fowler's patterns to add transaction support. I'm using a key-based Identity Map in the UoW to track the changed records in memory. It works like a charm for the scenario above; all pretty standard stuff so far.

    The question scenario comes in when I have a database that is not under my control and the ID fields are auto-increment (or, in Oracle, sequences). In this case I do not have the DB-generated Contact.ID beforehand, so when I create my Address I do not have a value for Address.CONTACT_ID. The transaction has not been started on the DB session, since everything is kept in the Identity Map in memory.

    Question: What is a good approach to address this, avoiding unnecessary DB round trips?

    One idea is to retrieve the last ID: I can do a call to the database to retrieve the last ID, like:

      SELECT Auto_increment FROM information_schema.tables WHERE table_name='Contact';

    But this is MySQL-specific, and probably something similar can be done for the other databases. If I do this, then I would need to do the first insert, get the ID, and then update the children (Address.CONTACT_IDs), all in the current transaction context.
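
    A race-safer variant of that idea is to ask the session for the key it just generated rather than reading information_schema (LAST_INSERT_ID() in MySQL, a sequence or RETURNING clause in Oracle/PostgreSQL), which most drivers surface directly. A minimal sketch of the insert-parent/fix-children flow in one transaction, with Python's sqlite3 standing in for the real drivers:

      import sqlite3  # lastrowid here ~ MySQL's LAST_INSERT_ID()

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE Contact (ID INTEGER PRIMARY KEY AUTOINCREMENT, NAME TEXT);
      CREATE TABLE Address (ID INTEGER PRIMARY KEY AUTOINCREMENT,
                            CONTACT_ID INTEGER, NAME TEXT);
      """)
      with conn:  # one transaction: insert parent, read its key, insert child
          cur = conn.execute("INSERT INTO Contact (NAME) VALUES (?)", ("Derick",))
          contact_id = cur.lastrowid   # session-safe generated key, no race
          conn.execute("INSERT INTO Address (CONTACT_ID, NAME) VALUES (?, ?)",
                       (contact_id, "Home"))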


  • audio onprogress in chrome not working

    - by user351709
    Hi, I am having a problem getting the onprogress event for the audio tag working in Chrome. It seems to work in Firefox. http://www.scottandrew.com/pub/html5audioplayer/ works in Chrome, but there is no progress bar update. When I copy the code, change the src to a .wav file and run it in Firefox, it works perfectly.

      <style type="text/css">
        #content { clear:both; width:60%; }
        .player_control { float:left; margin-right:5px; height: 20px; }
        #player { height:22px; }
        #duration { width:400px; height:15px; border: 2px solid #50b; }
        #duration_background { width:400px; height:15px; background-color:#ddd; }
        #duration_bar { width:0px; height:13px; background-color:#bbd; }
        #loader { width:0px; height:2px; }
        .style1 { height: 35px; }
      </style>

      <script type="text/javascript">
        var audio_duration;
        var audio_player;

        function pageLoaded() {
            audio_player = $("#aplayer").get(0);
            //get the duration
            audio_duration = audio_player.duration;
            $('#totalTime').text(formatTimeSeconds(audio_player.duration));
            //set the volume
        }

        function update() {
            //get the duration of the player
            dur = audio_player.duration;
            time = audio_player.currentTime;
            fraction = time / dur;
            percent = (fraction * 100);
            wrapper = document.getElementById("duration_background");
            new_width = wrapper.offsetWidth * fraction;
            document.getElementById("duration_bar").style.width = new_width + "px";
            $('#currentTime').text(formatTimeSeconds(audio_player.currentTime));
            $('#totalTime').text(formatTimeSeconds(audio_player.duration));
        }

        function formatTimeSeconds(time) {
            var minutes = Math.floor(time / 60);
            var seconds = "0" + (Math.floor(time) - (minutes * 60)).toString();
            if (isNaN(minutes) || isNaN(seconds)) {
                return "0:00";
            }
            var Strseconds = seconds.substr(seconds.length - 2);
            return minutes + ":" + Strseconds;
        }

        function playClicked(element) {
            //get the state of the player
            if (audio_player.paused) {
                audio_player.play();
                newdisplay = "||";
            } else {
                audio_player.pause();
                newdisplay = ">";
            }
            $('#totalTime').text(formatTimeSeconds(audio_player.duration));
            element.value = newdisplay;
        }

        function trackEnded() {
            //reset the playControl to 'play'
            document.getElementById("playControl").value = ">";
        }

        function durationClicked(event) {
            //get the position of the event
            clientX = event.clientX;
            left = event.currentTarget.offsetLeft;
            clickoffset = clientX - left;
            percent = clickoffset / event.currentTarget.offsetWidth;
            duration_seek = percent * audio_duration;
            document.getElementById("aplayer").currentTime = duration_seek;
        }

        function Progress(evt) {
            $('#progress').val(Math.round(evt.loaded / evt.total * 100));
            var width = $('#duration_background').css('width');
            $('#loader').css('width', evt.loaded / evt.total * width.replace("px", ""));
        }

        function getPosition(name) {
            var obj = document.getElementById(name);
            var topValue = 0, leftValue = 0;
            while (obj) {
                leftValue += obj.offsetLeft;
                obj = obj.offsetParent;
            }
            finalvalue = leftValue;
            return finalvalue;
        }

        function SetValues() {
            var xPos = xMousePos;
            var divPos = getPosition("duration_background");
            var divWidth = xPos - divPos;
            var Totalwidth = $('#duration_background').css('width').replace("px", "");
            audio_player.currentTime = divWidth / Totalwidth * audio_duration;
            $('#duration_bar').css('width', divWidth);
        }
      </script>
      </head>

      <script type="text/javascript" src="js/MousePosition.js"></script>
      <body onLoad="pageLoaded();">
        <table>
          <tr>
            <td valign="bottom"><input id="playButton" type="button" onClick="playClicked(this);" value=">"/></td>
            <td colspan="2" class="style1" valign="bottom">
              <div id='player'>
                <div id="duration" class='player_control'>&nbsp;
                  <div id="duration_background" onClick="SetValues();">
                    <div id="loader" style="background-color: #00FF00; width: 0px;"></div>
                    <div id="duration_bar" class="duration_bar"></div>
                  </div>
                </div>
              </div>
            </td>
          </tr>
          <tr>
            <td> </td>
            <td><span id="currentTime">0:00</span></td>
            <td align="right"><span id="totalTime">0:00</span></td>
          </tr>
        </table>
        <audio id='aplayer' src='<%=getDownloadLink() %>' type="audio/ogg; codecs=vorbis"
               onProgress="Progress(event);" onTimeUpdate="update();" onEnded="trackEnded();">
          <b>Your browser does not support the <code>audio</code> element.</b>
        </audio>
      </body>


  • Database structure for ecommerce site

    - by imanc
    Hey Guys, I have been tasked with designing an ecommerce solution. The aspect that is causing me the most problems is the database. Currently the site consists of 10+ country-based shops, each with its own database (all residing on the same MySQL instance). For the new site I'd rather all these shop databases be merged into one database, so that all tables (products, orders, customers etc.) have a shop_id field. From a programming perspective this seems to make the most sense, as we won't have to manage data across multiple databases.

    Currently the entire site generates about 120k orders a year, but it is experiencing fairly heavy growth and we need to design a solution that will scale. In 5 years there may be more than a million orders per year, and a database that contains 5 years of order history (archiving may be a solution here). The question is: do we use a single database, or do we keep the database-per-shop structure?

    I am currently trying to find supporting evidence for either avenue. The company I am designing the solution for prefers the per-shop database structure, because they believe it will allow the sites to scale. But my argument is that the shops' databases probably won't get so busy over the next few years that they exceed the capacity of a MySQL database on a "no expenses spared" hardware set-up.

    I am wondering if anyone has any advice either way? Does anyone have experience with websites / ecommerce sites that have tables containing millions of records? I know there is probably not a clear answer here, but at what stage do we have too many records or too-large table files to have a fast-loading site? Also, if anyone has any advice on sources of information - books, websites, etc. where I can do further research - it would be highly appreciated! Cheers, imanc
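
    For concreteness, a minimal sketch of the merged, single-database shape being proposed: every shop-scoped table carries a shop_id, and a composite index keeps per-shop queries fast as the order history grows (Python/sqlite3 for illustration; the schema is hypothetical):

      import sqlite3  # throwaway illustration of the single-database shape

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE orders (
          id        INTEGER PRIMARY KEY,
          shop_id   INTEGER NOT NULL,   -- which country shop owns the row
          placed_at TEXT
      );
      -- Composite index so per-shop queries stay fast as history grows.
      CREATE INDEX idx_orders_shop ON orders (shop_id, placed_at);
      """)
      conn.execute("INSERT INTO orders VALUES (1, 7, '2010-05-01')")
      recent = conn.execute(
          "SELECT id, placed_at FROM orders WHERE shop_id = ? "
          "ORDER BY placed_at DESC LIMIT 50", (7,)).fetchall()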


  • Performance of Multiple Joins

    - by geeko
    Greetings Overflowers, I need to query against objects with many/complex spatial conditions. In relational databases that translates to many joins (possibly 10+). I'm new to this business and am wondering whether to go with MS SQL Server 2008 R2 or Oracle 11g, or document-based solutions such as RavenDB, or simply go with a spatial database (GIS)... Any thoughts? Regards

    UPDATE: Thank you all for your answers. Would anybody opt for document/spatial databases? My database would consist of tens of millions to a few billion records, mostly read-only. Almost no updates, except in case of mistakes in input. Overnight inserts, and not that frequent. The join tables are predicted beforehand, but the number of self joins (tables joining themselves multiple times) is not. Small pages of results from such queries are going to be viewed on a highly interactive website, so response time is critical. Any predictions on how this can perform on MS SQL Server 2008 R2 or Oracle 11g? I'm also concerned about boosting performance by adding more servers; which one scales better? How about PostgreSQL?


  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application, running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GB of RAM. The database is multi-tenant; each customer has his own file, around 5 to 10 GB each. I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight, and the database is read-only. C# is my favourite language and I want to use ASP.NET MVC.

    I thought about the following options:

      - Use two big SQL servers running SQL Server 2012 to serve the 32 servers with data, with the 32 servers running IIS and providing REST services.
      - Denormalize the database and use Redis on each web server, with Booksleeve as a Redis client.
      - Use a combination of SQL Server and Redis.
      - Use SQL Server 2012 together with Hadoop.
      - Use Hadoop without SQL Server.

    What is the best way for a read-only database to get the best performance without losing maintainability? Does MapReduce make sense at all in such a scenario?

    The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and the interfaces are old-fashioned and not nice to use from a website, especially when using Ajax. The app uses a relational data model with many tables, but it is possible to write one accelerator table that all queries can be performed on, with all other information from the other tables reachable by a simple key lookup.
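
    The accelerator-table idea maps naturally onto the Redis options above: flatten each record into a hash during the overnight FTP load, then serve each web request with a single key lookup. A rough sketch with the redis-py client (redis-py 3.5+ for the mapping= form; the key names are illustrative, and Booksleeve issues the same HSET/HGETALL commands from C#):

      import redis

      r = redis.Redis(host="localhost", port=6379)

      # Overnight load: flatten each accelerator-table row into a hash.
      r.hset("tenant:42:item:1001", mapping={"name": "Widget", "price": "9.90"})

      # Web request: one O(1) lookup replaces the relational joins.
      item = r.hgetall("tenant:42:item:1001")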


  • Fastest way to become a MySQL expert?

    - by Kerry
    I have been using MySQL for years, mainly on smaller projects, until the last year or so. I'm not sure if it's the nature of the language or my lack of real tutorials that gives me the feeling of being unsure whether what I'm writing is proper for optimization and scaling purposes. While self-taught in PHP, I'm very sure of myself and the code I write, and can easily compare it to others' work, and so on.

    With MySQL, I'm not sure whether (and in what cases) an INNER JOIN or LEFT JOIN should be used, nor am I aware of the large amount of functionality that it has. While I've written code for databases that handled tens of millions of records, I don't know if it's optimal. I often find that a small tweak will make a query take less than 1/10 of the original time... but how do I know that my current query isn't also slow?

    I would like to become completely confident in this field, in the ability to optimize databases and make them scalable. Use is not a problem - I use MySQL on a daily basis in a number of different ways. So, the question is: what's the path? Reading a book? Websites/tutorials? Recommendations?
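
    On the INNER vs. LEFT JOIN point specifically, the difference fits in a few lines (a self-contained sketch with throwaway tables, using Python's sqlite3):

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
      CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER);
      INSERT INTO users  VALUES (1, 'Ann'), (2, 'Bob');
      INSERT INTO orders VALUES (10, 1);
      """)
      # INNER JOIN keeps only users with a matching order: [('Ann', 10)]
      print(conn.execute(
          "SELECT name, orders.id FROM users "
          "JOIN orders ON orders.user_id = users.id").fetchall())
      # LEFT JOIN keeps every user, padding misses with NULL:
      # [('Ann', 10), ('Bob', None)]
      print(conn.execute(
          "SELECT name, orders.id FROM users "
          "LEFT JOIN orders ON orders.user_id = users.id").fetchall())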


  • WiX major upgrade! Need different behaviors for different components...

    - by Joshua
    Okay! I have finally more closely identified the problem I'm having. In my installer, I was attempting to get a settings file to REMAIN INTACT on a major upgrade. I finally got this to work with the suggestion to set:

      <InstallExecuteSequence>
        <RemoveExistingProducts After="InstallFinalize" />
      </InstallExecuteSequence>

    This is successful in forcing this component to leave the original alone, not replacing it if it exists:

      <Component Id="Settings" Guid="A3513208-4F12-4496-B609-197812B4A953" NeverOverwrite="yes">
        <File Id="settingsXml" KeyPath="yes" ShortName="SETTINGS.XML" Name="Settings.xml" DiskId="1"
              Source="\\fileserver\Release\Pathways\Dependencies\Settings\settings.xml" Vital="yes" />
      </Component>

    HOWEVER! This is a problem! I have another component listed here:

      <Component Id="Database" Guid="1D8756EF-FD6C-49BC-8400-299492E8C65D" KeyPath="yes">
        <File Id="pathwaysMdf" Name="Pathways.mdf" DiskId="1"
              Source="\\fileserver\Shared\Databases\Pathways\SystemDBs\Pathways.mdf" Vital="yes"/>
        <File Id="pathwaysLdf" Name="Pathways_log.ldf" DiskId="1"
              Source="\\fileserver\Shared\Databases\Pathways\SystemDBs\Pathways.ldf" Vital="yes"/>
      </Component>

    And this component MUST BE REPLACED on a major upgrade. I can only accomplish this so far by setting:

      <RemoveExistingProducts After="InstallInitialize" />

    THIS BREAKS THE FIRST FUNCTIONALITY I NEED WITH THE SETTINGS FILE. HOW CAN I DO BOTH?!


  • Set service dependencies after install

    - by Dennis
    I have an application that runs as a Windows service. It stores various settings in a database that are looked up when the service starts. I built the service to support various types of databases (SQL Server, Oracle, MySQL, etc). Often, end users choose to configure the software to use SQL Server (they can simply modify a config file with the connection string and restart the service). The problem is that when their machine boots up, SQL Server is often started after my service, so my service errors out on startup because it can't connect to the database.

    I know that I can specify dependencies for my service to help guide the Windows service manager to start the appropriate services before mine. However, I don't know what services to depend upon at install time (when my service is registered), since the user can change databases later on.

    So my question is: is there a way for the user to manually indicate the service dependencies based on the database that they are using? If not, what is the proper design approach that I should be taking? I've thought about trying to do something like waiting 30 seconds after my service starts before connecting to the database, but this seems really flaky for various reasons. I've also considered trying to "lazily" connect to the database; the problem is that I need a connection immediately upon startup, since the database contains various pieces of vital information that my service needs when it first starts. Any ideas?
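
    For what it's worth, service dependencies are just configuration, so a settings screen or post-configuration script could rewrite them with the built-in sc.exe tool whenever the user switches databases (a sketch; both service names are placeholders):

      import subprocess

      # 'depend=' takes a '/'-separated list of service names; sc.exe
      # expects the value as a separate token after 'depend='.
      subprocess.run(
          ["sc", "config", "MyAppService", "depend=", "MSSQLSERVER"],
          check=True)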


  • MySQL 5.5.8 Gets Periodic Lag

    - by CYREX
    I am using MySQL 5.5.8 on an Ubuntu system, and every X amount of time it hits a huge lag that lasts a couple of seconds; then everything goes back to normal until the next lag. The time period varies, but it looks like it happens periodically. I am using InnoDB. It is like hiccups in MySQL. What could be creating this sort of periodic problem? I do not have any cron jobs or processes running every time the X period happens. The X period could be between 30 minutes and 2 hours. So, for example, it could happen every 30 minutes for the next 12 hours, or it could happen every 2 hours for the next 8 hours.

      key_buffer_size = 256M
      max_allowed_packet = 1M
      table_cache = 1024
      table_open_cache = 1024
      sort_buffer_size = 2M
      read_buffer_size = 2M
      read_rnd_buffer_size = 4M
      myisam_sort_buffer_size = 32M
      thread_cache_size = 128
      query_cache_size = 128M
      log-slow-queries = slow.log
      long_query_time = 5
      log-queries-not-using-indexes
      # Try number of CPU's*2 for thread_concurrency
      thread_concurrency = 4
      max_connections = 512
      #innodb_data_file_path = ibdata1:10M:autoextend
      #innodb_log_group_home_dir = /usr/local/mysql/data
      # You can set .._buffer_pool_size up to 50 - 80 %
      # of RAM but beware of setting memory usage too high
      innodb_buffer_pool_size = 1G
      #innodb_additional_mem_pool_size = 20M
      # Set .._log_file_size to 25 % of buffer pool size
      #innodb_log_file_size = 64M
      #innodb_log_buffer_size = 8M
      #innodb_flush_log_at_trx_commit = 0
      #innodb_lock_wait_timeout = 50

      [mysqldump]
      quick
      max_allowed_packet = 16M

      [myisamchk]
      key_buffer_size = 64M
      sort_buffer_size = 64M
      read_buffer = 2M
      write_buffer = 2M

    There are about 200+ tables divided across 3 databases. The database most written to is InnoDB; the other ones are mostly read. Several of the tables in the InnoDB database have more than 2 million records. The other databases top out at about 400 thousand records and do not change very often. The PC is a Core 2 Duo 8400 with 4 GB RAM, running 32-bit Ubuntu.


  • Integration transport choice (Oracle + SQL Server)

    - by lak-b
    We have several systems with Oracle (A) and SQL Server (B) databases on the back end. I have to consolidate data from those systems into a new SQL Server database. Something like this:

      (A) =>|---------------|
            | some software | => SQL Server
      (B) =>|---------------|

    where "some software" handles:

      - transport (the A and B systems are located on the network)
      - processing business logic (custom .NET code)

    Due to the first point, I need some queue software or something similar (like MSMQ or Service Broker). On the other hand, I could implement a web service instead of a queue:

      (A) =>|---------------|-------------|
            | queue/service | custom code | => SQL Server
      (B) =>|---------------|-------------|

    The questions are:

      - Which queue/transport framework should I use with Oracle and SQL Server databases?
      - It would be nice if I could post messages to MSMQ from both Oracle and SQL Server stored procedures (can I?)
      - It would be nice if I could call a web service from both Oracle and SQL Server stored procedures (can I?)
      - It would be nice if I could use something similar in both Oracle and SQL Server stored procedures (what exactly?)

    What software should I prefer given my requirements?


  • Convert charset in mysql query

    - by Yousf
    Hi, I have a question about converting charsets from inside a MySQL query. I have 2 databases: one for the website (Joomla), the other for the forum (IPB). I am querying from inside Joomla, which by default does "SET NAMES UTF8". I want to query a table inside the forum database, a table called "ibf_topics". This table has latin1 encoding. I do the following to select anything from the non-UTF-8 table:

      //convert connection to handle latin1.
      $query = "SET NAMES latin1";
      $db->setQuery($query);
      $db->query();

      $query = "select id, title from other_database.ibf_topics";
      $db->setQuery($query);
      $db->query();
      //read result into an array.

      //return connection to handle UTF8.
      $query = "SET NAMES UTF8";
      $db->setQuery($query);
      $db->query();

    After that, when I want to use the selected title, I use the following:

      echo iconv("CP1256", "UTF-8", $topic['title']);

    The question is: is there any way to avoid all this hassle? For now, I can't change the forum database to UTF-8 and I can't change the Joomla database to latin1 :S
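
    One common way to avoid the SET NAMES juggling is to re-label the bytes inside the query itself: CAST the column to BINARY to strip the (wrong) latin1 label, then CONVERT ... USING the charset the bytes really are (cp1256 here, judging by the iconv call), letting the UTF-8 connection transcode on the way out. A sketch of the single resulting query (shown with a Python driver for illustration; credentials are placeholders):

      import pymysql  # driver choice is illustrative; the SQL is what matters

      conn = pymysql.connect(host="localhost", user="joomla",
                             password="secret", charset="utf8")
      with conn.cursor() as cur:
          # CAST ... AS BINARY drops the latin1 label from the bytes;
          # CONVERT ... USING cp1256 declares what they really are, and
          # the utf8 connection delivers them as UTF-8 -- no iconv().
          cur.execute(
              "SELECT id, CONVERT(CAST(title AS BINARY) USING cp1256) AS title "
              "FROM other_database.ibf_topics")
          rows = cur.fetchall()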


  • Database choices

    - by flobadob
    I have a prickly design issue regarding the choice of database technologies to use for a group of new applications. The final suite of applications would have the following database requirements:

      - Central databases (more than one database) using MySQL (it must be MySQL due to justhost.com).
      - An application to be written which accesses the multiple MySQL databases on the web host. This application will also write to a local serverless database (SQLite/Firebird/VistaDB/whatever).
      - Different flavors of this application will be created for Windows (.NET), Windows Mobile, Android if possible, and iPhone if possible.

    So, the design task is to minimise the quantity of code needed to achieve this. This is going to be tricky, since the languages used are already C# / Java (Android) and Objective-C (iPhone). Not too worried about that, but can the work required to implement the various database access layers be minimised? The serverless database will hold similar data to the MySQL server, so some kind of inheritance in the DAL would be useful. I've been looking at Hibernate/NHibernate, and there is LINQ to whatever. So many choices!



  • Are there any NOSQL-compatible CMS projects?

    - by Michael
    This question is partially related to an older question (Any CMS is Google App Engine compatible?), but is slightly more general. It seems that in most CMS systems, the most fragile failure point is the database. Traditional database implementations scale poorly and will never be able to handle unforeseen spikes of traffic. Since Google App Engine was designed to help even small businesses overcome that problem, I had the same question that was asked earlier this year, with less than satisfactory answers.

    But more generally: where are the CMS projects that support NoSQL databases? Looking over Wikipedia's list of CMS platforms, I see without much effort that only traditional RDBMSes are supported by every single vendor on the list. I would have expected to see at least one or two projects handling CouchDB or similar engines. I understand the complexities of implementing a NoSQL solution to a problem that is typically solved using the relations cleanly expressed in any RDBMS, but there seems to be a rather wide market gap. Since databases are, today, easily outsourced to Google, Amazon, and others which use NoSQL models, I am amazed that there are not more projects actively pursuing this path.

    Am I simply not aware? Can someone please point me to projects with real momentum that are developing along this path? I'm looking for two things:

      - a CMS that has as its backbone a NoSQL database, enabling easy database outsourcing (hosted MySQL clusters and similar solutions are not what I'm looking for)
      - a project that is built to run on either a PaaS architecture like Google App Engine or an IaaS architecture like Amazon EC2

    Any pointers in that direction would be most welcome.

