Search Results

Search found 5749 results on 230 pages for 'rdiff backup'.

Page 210/230 | < Previous Page | 206 207 208 209 210 211 212 213 214 215 216 217  | Next Page >

  • What Counts For a DBA – Depth

    - by Louis Davidson
    SQL Server offers very simple interfaces to many of its features. Most people could open up SSMS, connect to a server, write a simple query and see the results. Even several of the core DBA tasks are deceptively straightforward. It doesn’t take a rocket scientist to perform a basic database backup or run a trace (even using the newfangled Extended Events!). However, appearances can be deceptive, and oftentimes it is really important that a DBA understands not just the basics of how to perform a task, but why we do a task, and how that task works. As an analogy, consider a child walking into a darkened room. Most would know that they need to turn on the light, and how to do it, so they flick the switch. But what happens if the light fails to shine forth? Most would immediately tell you that you need to consider changing the light bulb. So you hop in the car and take them to the local home store and instruct them to buy a replacement. Confronted with a 40 foot display of light bulbs, how will they decide which of the hundreds of types of bulbs, of different fittings, shapes, colors, power and efficiency ratings, is the right choice? Obviously the main lesson the child is going to learn this day is how to use their cell phone as a flashlight so they don’t have to ask for help the next time. Likewise, when the metaphorical toddlers who use your database server have issues, they will instinctively know something is wrong, and may even have some idea what caused it, but will have no depth of knowledge to figure out the right solution. That is where the DBA comes in and attempts to save the day. However, when one looks beneath the shiny UI, SQL Server has its own “40 foot display of light bulbs”, in the form of the tremendous number of tools and the often-bewildering amount of information they can present to the DBA, to help us find issues. Unfortunately, many of us resort to guesswork, trying different “bulbs” over and over, hoping to stumble on the answer. This is where the right depth of knowledge goes a long way. If we need to write a SELECT statement, then knowing the syntax and where to find the data is not enough. Knowledge of indexes and query plans is essential. Without it, we might hit on a query that “works”, but we are basically still a user, not a programmer, because we have no real control over our platform. Is that level of knowledge deep enough? Probably not, since knowledge of the underlying metadata and structures would be very useful in helping us make sense of any query plan. Understanding the structure of an index makes the “key lookup” operator not sound like what you do when someone tapes your car key to the ceiling. So is even this level of understanding deep enough? Do we need to understand the memory architecture used to process the query? It might be a comforting level of knowledge, and will doubtless come in handy at some point, but is not strictly necessary in most cases. Beyond that lies (more or less) full knowledge of the SQL language and the intricacies of every step the SQL Server engine takes to process our query. My personal theory is that, as a professional, our knowledge of a given task should extend, at a minimum, one level deeper than is strictly necessary to perform the task. Anything deeper can be left to the ridiculously smart, or obsessive, or both. As an example: tasked with storing an integer value between 0 and 99999999, it’s essential that I know that choosing an Integer over Decimal(8,0) will likely offer performance benefits. It is then useful that I also understand the value of adding a CHECK constraint, to make sure the values are valid for the desired range; and comforting that I know a little about the underlying processors, registers and computer math. Anything further, I leave to the likes of Joe Chang, whose recent blog post on the topic offers depth by the bucketful!

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra. Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have the confidence in it. Data should be a foundation upon which a business is built. In the past, data had been stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues, it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, then the problems are larger still. Then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets. You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data, those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there’s a problem, are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad – this is simply a concern that some people may have. In new functionality that’s on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example, backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who don’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley

    Read the article

  • SSMS Tools Pack 2.7 is released. New website, improved licensing and features.

    - by Mladen Prajdic
    New website Nice, isn't it? Cleaner, simpler, better looking and more modern. If you have any suggestions for further improvements I'd be glad to hear them. Simpler licensing With SSMS Tools Pack 2.7 the licensing is finally where it should be. It is now based on the activate/deactivate model. This way you can move a license from machine to machine with simple deactivation on one and reactivation on another machine. Much better, no? Because of very good feedback I have added an option for 6 machines and lowered the 4 machines option to 3 machines. This should make it much simpler for you to choose the right option for yourself. Improved features Version 2.5.3 was already extremely stable and 2.7 continues with that tradition. Because of that I could fully focus on features and why 3.0 will rock even more than 2.7! ;) In version 2.7 I have addressed quite a few improvements you were requesting for a while now. SQL History This is probably the biggest time saver out there, therefore it's only fair it gets a few important updates. If you have an existing .sql file opened, the Window Content History now saves your code to that existing file and also makes a backup in the SQL History log default location. Search is still done through the SQL History log but the Tab Sessions Restore opens your existing .sql file. This way you don't have to remember to save your existing files by yourself anymore. A bug where you couldn't search properly if you copied the log files to a new location was fixed. Unfortunately this removed the option to filter a search with the time component. The smallest search interval is now one day. The SSMS Tools Pack now remembers the visibility of the Current Window History window when you exit SSMS. SQL Snippets You can now set the position of the cursor in your snippets by placing {C} somewhere in your snippet. It's a small improvement but can be a huge time saver since you don't have to move through the snippet to the desired location anymore. Run script on multiple databases Database choices can now be saved with a name and then loaded again next time. You can also choose to run the script in a new window for each chosen database. Search through grid results You can now go to the previous/next search result with the Prev/Next control inside the search window. This is extremely useful if you have a large resultset. It saves you the scrolling. CRUD generator Four new variables have been added: |CurrentDate| writes the current date in format yyyy-MM-dd to your script |CurrentTime| writes the current time in 24h format HH:mm:ss to your script |CurrentWinUser| writes the current Windows logged on user to your script |CurrentSqlUser| writes the current SQL logged on login to your script This was actually quite a requested feature so if you have any other ideas for extra variables, do let me know. That's about it. I hope you're going to enjoy this version as much as the previous ones. Have fun!

    Read the article

  • Creating a SharePoint Development Environment from an Existing Production Environment

    - by Starky
    I have very little experience using SharePoint but a good amount using Visual Studio 2008, SQL Server 2005, Windows Server 2003 and IIS6. I need to create a development environment for a SharePoint 2007 system that will be used internally. The system is already deployed over two servers - one of the servers simply holds the database and everything else is on the other server. We are also using WSS 3.0. I have created a Virtual Machine with all the required software including a clean installation of SharePoint Server 2007 and I wish to use this single Virtual Machine as the development environment. Right now there are no custom assemblies being used on the production server as far as I am aware. There are 3 websites, one over port 80 for user access, one over a custom port for central administration, and one over another custom port. Not sure what the last one is for, but my blank instance of SharePoint on my Virtual Machine also has something similar. I attempted to use the STSADM tool to backup and restore these 3 sites from my production environment to my development environment and while the operations completed successfully, the central administration site in my development environment acted strangely and I could not access port 80 - I did not seem to have correct credentials for it. I suspected that it would not have been so simple, so could I please have advice on how to create my development environment so that I can use it to deploy updates to the production one.

    Read the article

  • Selenium screenshots using rspec

    - by Thomas Albright
    I am trying to capture screenshots on test failure using selenium-client and rspec. I run this command: $ spec my_spec.rb \ --require 'rubygems,selenium/rspec/reporting/selenium_test_report_formatter' \ --format=Selenium::RSpec::SeleniumTestReportFormatter:./report.html It creates the report correctly when everything passes, since no screenshots are required. However, when the test fails, I get this message, and the report has blank screenshots: WARNING: Could not capture HTML snapshot: execution expired WARNING: Could not capture page screenshot: execution expired WARNING: Could not capture system screenshot: execution expired Problem while capturing system state: execution expired What is causing this 'execution expired' error? Am I missing something important in my spec? Here is the code for my_spec.rb: require 'rubygems' gem "rspec", "=1.2.8" gem "selenium-client" require "selenium/client" require "selenium/rspec/spec_helper" describe "Databases" do attr_reader :selenium_driver alias :page :selenium_driver before(:all) do @selenium_driver = Selenium::Client::Driver.new \ :host => "192.168.0.10", :port => 4444, :browser => "*firefox", :url => "http://192.168.0.11/", :timeout_in_seconds => 10 end before(:each) do @selenium_driver.start_new_browser_session end # The system capture needs to happen BEFORE closing the Selenium session append_after(:each) do @selenium_driver.close_current_browser_session end it "backed up" do page.open "/SQLDBDetails.aspx" page.click "btnBackup", :wait_for => :page page.text?("Pending Backup").should be_true end end

    Read the article

  • Ext3 fs: Block bitmap for group 1 not in group (block 0). Is the fs dead?

    - by ip
    Hi, My company has a server with one big partition with a MySQL database and PHP files. Now this partition seems to be corrupted, as reported from kernel messages when I tried to mount it manually: [329862.817837] EXT3-fs error (device loop1): ext3_check_descriptors: Block bitmap for group 1 not in group (block 0)! [329862.817846] EXT3-fs: group descriptors corrupted! I've tried to recover it by running tools from a PLD livecd. These are the tools I have tested: - e2retrieve - testdisk - photorec - dd_rescue/dd_rhelp - ddrescue - fsck.ext2 - e2salvage without any success. dumpe2fs 1.41.3 (12-Oct-2008) Filesystem volume name: /dev/sda3 Last mounted on: <not available> Filesystem UUID: dd51610b-6de0-4392-a6f3-67160dbc0343 Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal filetype sparse_super Default mount options: (none) Filesystem state: not clean with errors Errors behavior: Continue Filesystem OS type: Linux Inode count: 9502720 Block count: 18987570 Reserved block count: 949378 Free blocks: 11555345 Free inodes: 11858398 First block: 0 Block size: 4096 Fragment size: 4096 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 16384 Inode blocks per group: 512 Last mount time: Wed Mar 24 09:31:03 2010 Last write time: Mon Apr 12 11:46:32 2010 Mount count: 10 Maximum mount count: 30 Last checked: Thu Jan 1 01:00:00 1970 Check interval: 0 (<none>) Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 128 Journal inode: 8 Journal backup: inode blocks dumpe2fs: A block group is missing an inode table while reading journal inode Are there any other tools I should test before considering this disk definitely unrecoverable? Many thanks, ip

    Read the article

  • How to break WinDbg in an anonymous method?

    - by Richard Berg
    Title kinda says it all. The usual SOS command !bpmd doesn't do a lot of good without a name. Some ideas I had: dump every method, then use !bpmd -md when you find the corresponding MethodDesc not practical in real world usage, from what I can tell. Even if I wrote a macro to limit the dump to anonymous types/methods, there's no obvious way to tell them apart. use Reflector to dump the MSIL name doesn't help when dealing with dynamic assemblies and/or Reflection.Emit. Visual Studio's inability to read local vars inside such scenarios is the whole reason I turned to Windbg in the first place... set the breakpoint in VS, wait for it to hit, then change to Windbg using the noninvasive trick attempting to detach from VS causes it to hang (along with the app). I think this is due to the fact that the managed debugger is a "soft" debugger via thread injection instead of a standard "hard" debugger. Or maybe it's just a VS bug specific to Silverlight (would hardly be the first I've encountered). set a breakpoint on some other location known to call into the anonymous method, then single-step your way in my backup plan, though I'd rather not resort to it if this Q&A reveals a better way
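
    For reference, here is a minimal C# sketch of the kind of anonymous method in question (the class and values are hypothetical). The compiler hoists the lambda into a generated method with a name along the lines of <Main>b__0, typically nested in a <>c__DisplayClass type when locals are captured, and it is that generated name, not anything visible in the source, that !bpmd ultimately needs:

        // Hypothetical example: the lambda below is compiled into a generated method
        // (named roughly "<Main>b__0" inside a "<>c__DisplayClass..." type, because it
        // captures a local), and SOS breakpoints have to target that generated name.
        using System;

        class Program
        {
            static void Main()
            {
                int captured = 42;                        // captured local forces a display class
                Func<int, int> anon = x => x + captured;  // the "anonymous method" under test
                Console.WriteLine(anon(1));
            }
        }

    The exact generated names vary by compiler version, so dumping the enclosing type's methods (for example with !name2ee or !dumpclass on the display class) is usually how you discover the MethodDesc to pass to !bpmd -md.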

    Read the article

  • Version control of software refactoring

    - by Muhammad Alkarouri
    What is the best way of doing version control of large scale refactoring? My typical style of programming (actually of writing documents as well) is getting something out as quickly as possible and then refactoring it. Typically, refactoring takes place at the same time as adding other functionality. In addition to standard refactoring of classes and functions, functions may move from one file to another, files get split and merged or just reordered. For the time being, I am using version control as a lone user, so there is no issue of interaction with other developers at this stage. Still, version control gives me two aspects: Backup and ability to revert to a good version "in case". Looking at the history tells me how the project progressed and the flow of ideas. I am using mercurial on windows using TortoiseHg which enables selections of hunks to commit. The reason I mention this is that I would like advice on the granularity of a commit in refactoring. Should I split refactoring from functionality added always in committing? I have looked at the answers of http://stackoverflow.com/questions/68459/refactoring-and-source-control-how-to but it doesn't answer my question. That question focuses on collaboration with a team. This one concentrates on having a history that is understandable in future (assuming I don't rewrite history as some VCS seem to allow).

    Read the article

  • Remove file from git repository (history)

    - by Devenv
    (solved, see bottom of the question body) Looking for this for a long time now, what I have till now is: http://dound.com/2009/04/git-forever-remove-files-or-folders-from-history/ and http://progit.org/book/ch9-7.html Pretty much the same method, but both of them leave objects in pack files... Stuck. What I tried: git filter-branch --index-filter 'git rm --cached --ignore-unmatch file_name' rm -Rf .git/refs/original rm -Rf .git/logs/ git gc Still have files in the pack, and this is how I know it: git verify-pack -v .git/objects/pack/pack-3f8c0...bb.idx | sort -k 3 -n | tail -3 And this: git filter-branch --index-filter "git rm -rf --cached --ignore-unmatch file_name" HEAD rm -rf .git/refs/original/ && git reflog expire --all && git gc --aggressive --prune The same... Tried git clone trick, it removed some of the files (~3000 of them) but the largest files are still there... I have some large legacy files in the repository, ~200M, and I really don't want them there... And I don't want to reset the repository to 0 :( SOLUTION: This is the shortest way to get rid of the files: check .git/packed-refs - my problem was that I had there a refs/remotes/origin/master line for a remote repository, delete it, otherwise git won't remove those files (optional) git verify-pack -v .git/objects/pack/#{pack-name}.idx | sort -k 3 -n | tail -5 - to check for the largest files (optional) git rev-list --objects --all | grep a0d770a97ff0fac0be1d777b32cc67fe69eb9a98 - to check what files those are git filter-branch --index-filter 'git rm --cached --ignore-unmatch file_names' - to remove the file from all revisions rm -rf .git/refs/original/ - to remove git's backup git reflog expire --all --expire='0 days' - to expire all the loose objects (optional) git fsck --full --unreachable - to check if there are any loose objects git repack -A -d - repacking the pack git prune - to finally remove those objects

    Read the article

  • Best practice: git, github, lighthouse and 2 developers

    - by Alxandr
    I'm setting up a new project and plan on using git and github for source control and repo hosting, and lighthouse for bugtracking. I've been working with git for some while now, but have only been using it as more of a backup solution than a collaborative coding solution. Also, I've noticed that in github you can setup a servicehook to lighthouse so that whenever you push to github it notifies lighthouse of the changes. This uses a token for user-authentication and has the ability to change tickets to resolved etc. However, I believe this token functions in such a way that whenever a user pushes to the repo (doesn't matter who), it's the owner of the repo that "updates" to lighthouse. This is a problem. So, I believe it is necessary to have 2 separate repos at github (one for each dev), and I'm wondering about the workflow that should be used. Anyone care to shed some light on this matter? Like when to pull and push (and where), and how to keep the two github repos in sync or something like that? Or another solution to the problem altogether.

    Read the article

  • powershell errors on remove-item command

    - by Lachlan
    Hi, I've written a powershell script which removes folders more than 7 days old. It seems to be removing the folders, but for some reason it's giving me errors. I was wondering if anyone might be able to tell me why? The script is... $rootBackupFolder = "\\server\share" get-childitem $rootBackupFolder | where {$_.PSIsContainer -AND $_.Name -match ("^CVSRepository_(19|20)[0-9][0-9](0[1-9]|1[012])(0[1-9]|[12][0-9]|3[01])$") -AND $_.LastWriteTime -le (Get-Date).AddDays(-7)} | remove-item -recurse -force There are 2 errors I get; the first is this one... (Get-Date).AddDays(-7)} | remove-item <<<< -recurse -force Remove-Item : Cannot remove item \\server\share\CVSRepository_20100305\somesubfolder\sumotherfolder: Could not find a part of the path '\\server\share\CVSRepository_20100305\somesubfolder\sumotherfolder'. The other error is this... (Get-Date).AddDays(-7)} | remove-item <<<< -recurse -force Remove-Item : Cannot remove item \\server\share\CVSRepository_20100312\somesubfolder\: Access to the path 'Rdl' is denied. At C:\CVSRepository_BackupScript\Backup.ps1:43 char:38 + (Get-Date).AddDays(-7)} | remove-item <<<< -recurse -force Any ideas? The script is running under a domain account which has privs to delete the remote folders. And in fact it is indeed deleting the folders! But giving me errors... Thanks, Lachlan

    Read the article

  • Cast exception when trying to create a new Task Scheduler task.

    - by seaneshbaugh
    I'm attempting to create a new task in the Windows Task Scheduler in C#. What I've got so far is pretty much a copy and paste of http://bartdesmet.net/blogs/bart/archive/2008/02/23/calling-the-task-scheduler-in-windows-vista-and-windows-server-2008-from-managed-code.aspx Everything compiles just fine but come run time I get the following exception: Unable to cast COM object of type 'System.__ComObject' to interface type 'TaskScheduler.ITimeTrigger'. This operation failed because the QueryInterface call on the COM component for the interface with IID '{B45747E0-EBA7-4276-9F29-85C5BB300006}' failed due to the following error: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE)). Here's all the code so you can see what I'm doing here without following the above link. TaskSchedulerClass Scheduler = new TaskSchedulerClass(); Scheduler.Connect(null, null, null, null); ITaskDefinition Task = Scheduler.NewTask(0); Task.RegistrationInfo.Author = "Test Task"; Task.RegistrationInfo.Description = "Just testing this out."; ITimeTrigger Trigger = (ITimeTrigger)Task.Triggers.Create(_TASK_TRIGGER_TYPE2.TASK_TRIGGER_DAILY); Trigger.Id = "TestTrigger"; Trigger.StartBoundary = "2010-05-12T06:15:00"; IShowMessageAction Action = (IShowMessageAction)Task.Actions.Create(_TASK_ACTION_TYPE.TASK_ACTION_SHOW_MESSAGE); Action.Id = "TestAction"; Action.Title = "Test Task"; Action.MessageBody = "This is a test."; ITaskFolder Root = Scheduler.GetFolder("\\"); IRegisteredTask RegisteredTask = Root.RegisterTaskDefinition("Background Backup", Task, (int)_TASK_CREATION.TASK_CREATE_OR_UPDATE, null, null, _TASK_LOGON_TYPE.TASK_LOGON_INTERACTIVE_TOKEN, ""); The line that is throwing the exception is this one ITimeTrigger Trigger = (ITimeTrigger)Task.Triggers.Create(_TASK_TRIGGER_TYPE2.TASK_TRIGGER_DAILY); The exception message kinda makes sense to me, but I'm afraid I don't know enough about COM to really know where to begin with this. Also, I should add that I'm using VS 2010 and I had to set the project to either be for x86 or x64 CPU's instead of the usual "Any CPU" because it kept giving me a BadImageFormatException. I doubt that's related to my current problem, but just in case I thought I might as well mention it.
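
    For context, the Task Scheduler 2.0 COM API pairs each trigger type with its own interface (TASK_TRIGGER_TIME with ITimeTrigger, TASK_TRIGGER_DAILY with IDailyTrigger, and so on), so the cast generally has to match the type passed to Triggers.Create(). A minimal sketch reusing the interop types from the question above, not a verified fix for this exact project:

        // Sketch only: either create a time trigger and keep the ITimeTrigger cast...
        ITimeTrigger TimeTrigger = (ITimeTrigger)Task.Triggers.Create(_TASK_TRIGGER_TYPE2.TASK_TRIGGER_TIME);
        TimeTrigger.Id = "TestTrigger";
        TimeTrigger.StartBoundary = "2010-05-12T06:15:00";

        // ...or keep the daily trigger type and cast to its matching interface.
        IDailyTrigger DailyTrigger = (IDailyTrigger)Task.Triggers.Create(_TASK_TRIGGER_TYPE2.TASK_TRIGGER_DAILY);
        DailyTrigger.Id = "TestDailyTrigger";
        DailyTrigger.StartBoundary = "2010-05-12T06:15:00";
        DailyTrigger.DaysInterval = 1; // run every day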

    Read the article

  • Images missing after moving Django to new server

    - by miszczu
    I'm moving Django project to new server. I'm newbie in Django, and I don't know where should be upload folder. There are all images which should be displayed on website. In config file I haven't seen upload folder I could specify, so I'm guessing it always should be the same location for django projects or I just can't find it. Locations are saved in database. When I've put uploaded files into media folder, so url was like domain.co.uk/media/upload/media/images/year/month/day/image_name.ext and the same is on the old website, images on website ware still missing. All images are visible if I put url by hand, but django doesn't seems to see files. Also I check django log file: 2012-05-30 09:13:33,393 ERROR render: Thumbnail tag failed: [in /usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/templatetags/thumbnail.py (line 49)] Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/templatetags/thumbnail.py", line 45, in render return self._render(context) File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/templatetags/thumbnail.py", line 97, in _render file_, geometry, **options File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/base.py", line 50, in get_thumbnail cached = default.kvstore.get(thumbnail) File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/kvstores/base.py", line 25, in get return self._get(image_file.key) File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/kvstores/base.py", line 123, in _get value = self._get_raw(add_prefix(key, identity)) File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/kvstores/cached_db_kvstore.py", line 26, in _get_raw value = KVStoreModel.objects.get(key=key).value File "/usr/lib/python2.6/site-packages/django/db/models/manager.py", line 132, in get return self.get_query_set().get(*args, **kwargs) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 344, in get num = len(clone) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 82, in __len__ self._result_cache = list(self.iterator()) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 273, in iterator for row in compiler.results_iter(): File "/usr/lib/python2.6/site-packages/django/db/models/sql/compiler.py", line 680, in results_iter for rows in self.execute_sql(MULTI): File "/usr/lib/python2.6/site-packages/django/db/models/sql/compiler.py", line 735, in execute_sql cursor.execute(sql, params) File "/usr/lib/python2.6/site-packages/django/db/backends/util.py", line 34, in execute return self.cursor.execute(sql, params) File "/usr/lib/python2.6/site-packages/django/db/backends/mysql/base.py", line 86, in execute return self.cursor.execute(query, args) File "/usr/lib64/python2.6/site-packages/MySQLdb/cursors.py", line 174, in execute self.errorhandler(self, exc, value) File "/usr/lib64/python2.6/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler raise errorclass, errorvalue DatabaseError: (1146, "Table 'thumbnail_kvstore' doesn't exist") 2012-05-30 09:13:33,396 DEBUG execute: (0.000) SELECT `freetext_freetext`.`id`, `freetext_freetext`.`key`, `freetext_freetext`.`content`, `freetext_freetext`.`active` FROM `freetext_freetext` WHERE (`freetext_freetext`.`active` = True AND `freetext_freetext`.`key` = office-closed-message ); args=(True, u'office-closed-message') 
[in /usr/lib/python2.6/site-packages/django/db/backends/util.py (line 44)] 2012-05-30 09:13:33,399 DEBUG execute: (0.000) SELECT `menus_menu`.`id`, `menus_menu`.`name`, `menus_menu`.`slug`, `menus_menu`.`base_url`, `menus_menu`.`description`, `menus_menu`.`enabled` FROM `menus_menu` WHERE (`menus_menu`.`enabled` = True AND `menus_menu`.`slug` = about ); args=(True, u'about') [in /usr/lib/python2.6/site-packages/django/db/backends/util.py (line 44)] 2012-05-30 09:13:33,401 DEBUG execute: (0.000) SELECT `menus_menuitem`.`id`, `menus_menuitem`.`menu_id`, `menus_menuitem`.`title`, `menus_menuitem`.`url`, `menus_menuitem`.`order` FROM `menus_menuitem` INNER JOIN `menus_menu` ON (`menus_menuitem`.`menu_id` = `menus_menu`.`id`) WHERE `menus_menu`.`slug` = about ORDER BY `menus_menuitem`.`order` ASC; args=(u'about',) [in /usr/lib/python2.6/site-packages/django/db/backends/util.py (line 44)] 2012-05-30 09:13:33,404 DEBUG execute: (0.000) SELECT `freetext_freetext`.`id`, `freetext_freetext`.`key`, `freetext_freetext`.`content`, `freetext_freetext`.`active` FROM `freetext_freetext` WHERE (`freetext_freetext`.`active` = True AND `freetext_freetext`.`key` = contactdetails-footer ); args=(True, u'contactdetails-footer') [in /usr/lib/python2.6/site-packages/django/db/backends/util.py (line 44)] I checked database and there is no table calls thumbnail_kvstore, but I have database backup, and in backup files this table doesn't exist. All uploaded files I get are in media/uploads/media/. Also I'm getting errors on some pages: Syntax error. Expected: ``thumbnail source geometry [key1=val1 key2=val2...] as var`` /usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/templatetags/thumbnail.py in __init__, line 72 In template /var/www/vhosts/domain.co.uk/sites/apps/shop/products/templates/products/product_detail.html, error at line 34 {% thumbnail image.file "800x700" detail as zoom %} Maybe some modules I install are not in the right version. Dont know how to fix it. Im using, CentOS 6, mod_wsgi, apache, python 2.6. Update 1.0: On the old server was Django 1.3, on the new one is Django 1.3.1 Update 1.1: I this i know where is the problem. I tried python manage.py syncdb and this is output: Syncing... Creating tables ... The following content types are stale and need to be deleted: orders | ordercontact Any objects related to these content types by a foreign key will also be deleted. Are you sure you want to delete these content types? If you're unsure, answer 'no'. Type 'yes' to continue, or 'no' to cancel: no Installing custom SQL ... Installing indexes ... No fixtures found. Synced: > django.contrib.auth > django.contrib.contenttypes > django.contrib.sessions > django.contrib.sites > django.contrib.messages > django.contrib.admin > django.contrib.admindocs > django.contrib.markup > django.contrib.sitemaps > django.contrib.redirects > django_filters > freetext > sorl.thumbnail > django_extensions > south > currencies > pagination > tagging > honeypot > core > faq > logentry > menus > news > shop > shop.cart > shop.orders Not synced (use migrations): - dbtemplates - contactform - links - media - pages - popularity - testimonials - shop.brands - shop.collections - shop.discount - shop.pricing - shop.product_types - shop.products - shop.shipping - shop.tax (use ./manage.py migrate to migrate these) Next I run python manage.py migrate, and thats what i get: Running migrations for dbtemplates: - Migrating forwards to 0002_auto__del_unique_template_name. > dbtemplates:0001_initial ! 
Error found during real run of migration! Aborting. ! Since you have a database that does not support running ! schema-altering statements in transactions, we have had ! to leave it in an interim state between migrations. ! You *might* be able to recover with: = DROP TABLE `django_template` CASCADE; [] = DROP TABLE `django_template_sites` CASCADE; [] ! The South developers regret this has happened, and would ! like to gently persuade you to consider a slightly ! easier-to-deal-with DBMS. ! NOTE: The error which caused the migration to fail is further up. Traceback (most recent call last): File "manage.py", line 13, in <module> execute_manager(settings) File "/usr/lib/python2.6/site-packages/django/core/management/__init__.py", line 438, in execute_manager utility.execute() File "/usr/lib/python2.6/site-packages/django/core/management/__init__.py", line 379, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/lib/python2.6/site-packages/django/core/management/base.py", line 191, in run_from_argv self.execute(*args, **options.__dict__) File "/usr/lib/python2.6/site-packages/django/core/management/base.py", line 220, in execute output = self.handle(*args, **options) File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/management/commands/migrate.py", line 105, in handle ignore_ghosts = ignore_ghosts, File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/__init__.py", line 191, in migrate_app success = migrator.migrate_many(target, workplan, database) File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 221, in migrate_many result = migrator.__class__.migrate_many(migrator, target, migrations, database) File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 292, in migrate_many result = self.migrate(migration, database) File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 125, in migrate result = self.run(migration) File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 99, in run return self.run_migration(migration) File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 81, in run_migration migration_function() File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 57, in <lambda> return (lambda: direction(orm)) File "/usr/lib/python2.6/site-packages/django_dbtemplates-1.3-py2.6.egg/dbtemplates/migrations/0001_initial.py", line 18, in forwards ('last_changed', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now)), File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/db/generic.py", line 226, in create_table ', '.join([col for col in columns if col]), File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/db/generic.py", line 150, in execute cursor.execute(sql, params) File "/usr/lib/python2.6/site-packages/django/db/backends/util.py", line 34, in execute return self.cursor.execute(sql, params) File "/usr/lib/python2.6/site-packages/django/db/backends/mysql/base.py", line 86, in execute return self.cursor.execute(query, args) File "/usr/lib64/python2.6/site-packages/MySQLdb/cursors.py", line 174, in execute self.errorhandler(self, exc, value) File "/usr/lib64/python2.6/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler raise errorclass, errorvalue _mysql_exceptions.OperationalError: (1050, "Table 
'django_template' already exists") Also i run python manage.py migrate --list, and uotput is: dbtemplates (*) 0001_initial (*) 0002_auto__del_unique_template_name contactform (*) 0001_initial (*) 0002_auto__add_callback (*) 0003_auto__add_field_callback_notes (*) 0004_auto__add_field_callback_is_closed__add_field_callback_closed (*) 0005_auto__add_field_callback_url (*) 0006_auto__add_contact (*) 0007_auto__add_field_contact_category (*) 0008_auto__add_field_contact_url links (*) 0001_initial (*) 0002_auto__add_field_category_enabled__add_field_category_order media (*) 0001_initial (*) 0002_auto__del_field_image_external_url__add_field_image_link_url__del_fiel (*) 0003_add_model_FileAttachment (*) 0004_auto__chg_field_file_slug__chg_field_image_slug (*) 0005_auto__chg_field_image_file (*) 0006_auto__chg_field_file_file pages (*) 0001_initial (*) 0002_auto__chg_field_page_meta_description__chg_field_page_meta_title__chg_ (*) 0003_auto__add_field_page_show_in_sitemap (*) 0004_auto__add_field_page_changefreq__add_field_page_priority popularity (*) 0001_initial testimonials (*) 0001_initial (*) 0002_auto__add_field_testimonial_is_featured brands (*) 0001_initial (*) 0002_auto__add_field_brand_template (*) 0003_auto__chg_field_brand_meta_description__chg_field_brand_meta_title__ch (*) 0004_auto__add_field_brand_url (*) 0005_auto__del_field_brand_image__add_field_brand_logo collections (*) 0001_initial (*) 0002_auto__add_field_collection_discount (*) 0003_auto__chg_field_collection_meta_description__chg_field_collection_meta (*) 0004_auto__add_field_collection_is_featured (*) 0005_auto__add_field_collection_order discount (*) 0001_initial (*) 0002_added_field_discount_description (*) 0003_auto__add_field_discountvoucher_automatic (*) 0004_auto__add_field_discountvoucher_collection (*) 0005_auto__del_field_discountvoucher_collection (*) 0006_auto__chg_field_discountvoucher_expiry_date pricing (*) 0001_initial (*) 0002_auto__add_pricingrule product_types (*) 0001_initial (*) 0002_auto__add_field_producttype_meta_title__add_field_producttype_meta_des (*) 0003_auto__add_field_producttype_summary__add_field_producttype_description products (*) 0001_initial (*) 0002_auto__del_field_product_is_featured (*) 0003_auto__chg_field_product_meta_keywords__chg_field_product_meta_descript (*) 0004_auto shipping (*) 0001_initial (*) 0002_auto__add_field_shippingmethod_includes_tax__add_field_shippingmethod_ (*) 0003_auto__add_field_shippingmethod_order (*) 0004_auto__del_field_shippingmethod_tax_rate__del_field_shippingmethod_incl (*) 0005_auto__del_field_shippingrule_enabled tax (*) 0001_initial (*) 0002_auto__add_field_taxrate_internal_name (*) 0003_initial_internal_names (*) 0004_auto__add_unique_taxrate_internal_name (*) 0005_force_unique_taxrate_name (*) 0006_auto__add_unique_taxrate_name After that some images source were something like this: src="cache/1e/bd/1ebd719910aa843238028edd5fe49e71.jpg" Is any1 could help me with syncdb pledase?

    Read the article

  • Incoming connections from 2 different Internet connections

    - by RJClinton
    I am hosting a Gameserver on my Windows 2003 Server 32 Bit Server. Due to some limitations of my ISP, I have to take 2 Internet connections in order to satisfy my bandwidth requirement. I am facing serious problems here. The Internet is supplied using Wimax. I am given a POE and the RJ45 from both the POE (2 connections) are plugged into a Netgear 100mbps Switch and another RJ45 connects the server to the switch. I have tried to configure the IP addresses and gateway of both connections to the single NIC in the server. The IP series and the gateway of each Internet connection are very much different. I know that this is not the proper approach, because alternate gateways are meant to be only for backup in case of main gateway failure and the usage of the gateways depends upon the metric (correct me if I am wrong). I am planning to get a second PCI NIC so that I can connect each of the Internet connections into its respective NIC. If I use this approach, will I be able to accept connections on both NICs? Also, please suggest any other alternatives that I might use.

    Read the article

  • What is the most important thing you weren't taught in school?

    - by Alexandre Brisebois
    What is the most important thing you weren't taught in school? What topics are missing from the CS/IS education? Posted so far How to sell an idea Principles: Often, good enough is better than perfect. Making mistakes is actually a Good Thing™ -- as long as they're new mistakes. If a user can break your code they will. In the Real World™ they're all open-book exams Self confidence is way more important in getting ahead than intelligence. Always prefer simplicity over complexity. The best code is the code that you don't write. You never know when you'll meet someone again ... or where. It's always worthwhile to treat people with respect and kindness. Be aware of what you don't know and don't be afraid to ask questions when you need to Missing knowledge: How to communicate effectively. Lack of source control Lack of Softskills experience How to productize code How to write secure code How to formulate problems How to self-measurement. To evaluate ones true competences and market worth. How to debug code How important is backup How to read code on a large scale (being able to adapt and build upon existing projects) Good Regular expressions comprehension How to teach others effectively TDD/Unit testing Critical thinking How to integrate different skills and languages in a single project

    Read the article

  • WSE 3.0 crashes when ClearHeaders is called

    - by Daniel Enetoft
    Hi! I'm developing a client-server application in c# using WSE web-service. One of the things that the user can do is send jpg images to the server for backup via the web-service. Recently strange errors have occurred. This does not happen for all users, just a few. On the client side the exception is a System.Net.WebException Exception message: The operation has timed out and on the server the following warning is found in the event viewer: Exception information: Exception type: HttpException Exception message: Server cannot clear headers after HTTP headers have been sent. Request information Request URL: MyUrl/Service.asmx Request path: /MyWebService/Service.asmx User host address: ------- User: Is authenticated: False Authentication Type: Thread account name: NT AUTHORITY\NETWORK SERVICE Thread information: Thread ID: 7 Thread account name: NT AUTHORITY\NETWORK SERVICE Is impersonating: False Stack trace: at System.Web.HttpResponse.ClearHeaders() at System.Web.Services.Protocols.SoapServerProtocol.WriteException(Exception e, Stream outputStream) at System.Web.Services.Protocols.ServerProtocolFactory.Create(Type type, HttpContext context, HttpRequest request, HttpResponse response, Boolean& abortProcessing) at System.Web.Services.Protocols.WebServiceHandlerFactory.CoreGetHandler(Type type, HttpContext context, HttpRequest request, HttpResponse response) Does anyone have an idea where this error can come from? I have already tried to raise the "maxRequestLength" in web.config to 16Mb but this doesn't fix it. Regards /Daniel
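
    One client-side setting worth checking alongside maxRequestLength: WSE 3.0 proxies ultimately derive from SoapHttpClientProtocol, whose Timeout property defaults to 100 seconds, which a large jpg upload over a slow link can exceed. A small hedged sketch (the proxy class name here is hypothetical):

        // Hypothetical proxy name; Timeout is inherited from SoapHttpClientProtocol
        // and is expressed in milliseconds (the default is 100000, i.e. 100 seconds).
        BackupService service = new BackupService();
        service.Timeout = 10 * 60 * 1000; // allow up to 10 minutes for large image uploads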

    Read the article

  • Android: ListActivity design - changing the content of the List Adapter

    - by Rob
    Hi, I would like to write a rather simple content application which displays a list of textual items (along with a small pic). I have a standard menu in which each menu item represents a different category of textual items (news, sports, leisure etc.). Pressing a menu item will display a list of textual items of this category. Now, having a separate ListActivity for each category seems like an overkill (or does it?..) Naturally, it makes much more sense to use one ListActivity and replace the data of its adapter when each category is loaded. My concern is when "back" is pressed. The adapter is loaded with items of the current category and now I need to display list of the previous category (and enable clicking on list items too...). Since I have only one activity - I thought of backup and load mechanism in onPause() and onResume() functions as well as making some distinction whether these function are invoked as a result of a "new" event (menu item selected) or by a "back" press. This seems very cumbersome for such a trivial usage... Am I missing something here? Thanks, Rob

    Read the article

  • Verifying regular expression for malware removal

    - by Legend
    Unfortunately, one of my web servers was compromised recently. I have two questions. Is there a way I can scan the downloaded directory for backdoors? Is there anything I can do to ensure that at least known vulnerabilities do not exist anymore? Secondly, the malware put up the following in all index.* files on my webserver: <script>/*GNU GPL*/ try{window.onload = function(){var Hva23p3hnyirlpv7 = document.createElement('script');Hva23p3hnyirlpv7.setAttribute('type', 'text/javascript');Hva23p3hnyirlpv7.setAttribute('id', 'myscript1');Hva23p3hnyirlpv7.setAttribute('src',.... CODE DELETED FOR SAFETY.... );}} catch(e) {}</script> Obviously, this snippet seems to download some rogue file onto the user's machine. I downloaded an entire backup of the web server and am currently trying to remove this snippet from all files. For this I am doing: find ./ -name "index.*" -exec sed -i 's/<script>\/\*GNU GPL\*.*Hva23p3hnyirlpv7.*<\/script>//g' {} \; Just wanted to verify if this does the trick. I verified it with a few files but I want to be sure that this doesn't delete some valid code. Can anyone suggest any other modifications?

    Read the article

  • Increase Query Speed in PostgreSQL

    - by Anthoni Gardner
    Hello, First time posting here, but an avid reader. I am experiencing slow query times on my database (all tested locally thus far) and I am not sure how to go about it. The database itself has 44 tables and some of those tables have over 1 Million records (mainly the movies, actresses and actors tables). The database is made via JMDB using the flat files from IMDB. Also the SQL query that I am about to show is from that said program (that too experiences very slow search times). I have tried to include as much information as I can, such as the explain plan etc. "QUERY PLAN" "HashAggregate (cost=46492.52..46493.50 rows=98 width=46)" " Output: public.movies.title, public.movies.movieid, public.movies.year" " - Append (cost=39094.17..46491.79 rows=98 width=46)" " - HashAggregate (cost=39094.17..39094.87 rows=70 width=46)" " Output: public.movies.title, public.movies.movieid, public.movies.year" " - Seq Scan on movies (cost=0.00..39093.65 rows=70 width=46)" " Output: public.movies.title, public.movies.movieid, public.movies.year" " Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '""%}'::text))" " - Nested Loop (cost=0.00..7395.94 rows=28 width=46)" " Output: public.movies.title, public.movies.movieid, public.movies.year" " - Seq Scan on akatitles (cost=0.00..7159.24 rows=28 width=4)" " Output: akatitles.movieid, akatitles.language, akatitles.title, " Filter: (((title)::text ~~* '%Babe%'::text) AND ((title)::text !~~* '""%}'::text))" " - Index Scan using movies_pkey on movies (cost=0.00..8.44 rows=1 width=46)" " Output: public.movies.movieid, public.movies.title, public.movies.year, public.movies.imdbid" " Index Cond: (public.movies.movieid = akatitles.movieid)" SELECT * FROM ((SELECT DISTINCT title, movieid, year FROM movies WHERE title ILIKE '%Babe%' AND NOT (title ILIKE '"%}')) UNION (SELECT movies.title, movies.movieid, movies.year FROM movies INNER JOIN akatitles ON movies.movieid=akatitles.movieid WHERE akatitles.title ILIKE '%Babe%' AND NOT (akatitles.title ILIKE '"%}'))) AS union_tmp2; Returns 612 Rows in 9078ms Database backup (plain text) is 1.61GB It's a really complex query and I am not fully cognizant of it; like I said, it was spat out by JMDB. Do you have any suggestions on how I can increase the speed? Regards Anthoni

    Read the article

  • Production settings file for log4j?

    - by James
    Here is my current log4j settings file. Are these settings ideal for production use or is there something I should remove/tweak or change? I ask because all my threads were being hung due to log4j blocking. I checked my open file descriptors; I was only using 113. # ***** Set root logger level to WARN and its two appenders to stdout and R. log4j.rootLogger=warn, stdout, R # ***** stdout is set to be a ConsoleAppender. log4j.appender.stdout=org.apache.log4j.ConsoleAppender # ***** stdout uses PatternLayout. log4j.appender.stdout.layout=org.apache.log4j.PatternLayout # ***** Pattern to output the caller's file name and line number. log4j.appender.stdout.layout.ConversionPattern=%5p [%t] (%F:%L) - %m%n # ***** R is set to be a RollingFileAppender. log4j.appender.R=org.apache.log4j.RollingFileAppender log4j.appender.R.File=logs/myapp.log # ***** Max file size is set to 100KB log4j.appender.R.MaxFileSize=102400KB # ***** Keep one backup file log4j.appender.R.MaxBackupIndex=5 # ***** R uses PatternLayout. log4j.appender.R.layout=org.apache.log4j.PatternLayout log4j.appender.R.layout.ConversionPattern=%p %t %d %c - %m%n #set httpclient debug levels log4j.logger.org.apache.component=ERROR,stdout log4j.logger.httpclient.wire=ERROR,stdout log4j.logger.org.apache.commons.httpclient=ERROR,stdout log4j.logger.org.apache.http.client.protocol=ERROR,stdout UPDATE*** Adding thread dump sample from all my threads (100) "pool-1-thread-5" - Thread t@25 java.lang.Thread.State: BLOCKED on org.apache.log4j.spi.RootLogger@1d45a585 owned by: pool-1-thread-35 at org.apache.log4j.Category.callAppenders(Category.java:201) at org.apache.log4j.Category.forcedLog(Category.java:388) at org.apache.log4j.Category.error(Category.java:302)

    Read the article

  • Strange VS2005 compile errors: unable to locate resource file (because the compiler keeps deleting i

    - by Velika
    I AM GETTING THE FOLLOWING ERROR IN A VERY SIMPLE CLASS LIBRARY: Error 1 Unable to copy file "obj\Debug\SMIT.SysAdmin.BusinessLayer.Resources.resources" to "obj\Debug\SMIT.SysAdmin.BusinessLayer.SMIT.SysAdmin.BusinessLayer.Resources.resources". Could not find file 'obj\Debug\SMIT.SysAdmin.BusinessLayer.Resources.resources'. SMIT.SysAdmin.BusinessLayer Going to the Project Properties-Resource tab, I see that I defined no resources. Still, I tried to delete the resource file and recreate it by going to the resource tab. When I recompile, I still get the same error. Why is it even looking for a resource file? I define no resources on the project properties tab and added no new resource file items. Any suggestions of things to try? Update: I found the missing file in an old backup. I copied it to the location where the compiler expected it, and then successfully recompiled the project that previously had compile time errors. However, when I rebuild the entire solution, it deletes the file that I previously restored and I'm back to where I started. My solution contains several projects (maybe 10 or so). Could VS 2005 be having a problem determining dependencies and the proper order to compile these projects?

    Read the article

  • How can you change Network settings (IP Address, DNS, WINS, Host Name) with code in C#

    - by rathkopf
    I am developing a wizard for a machine that is to be used as a backup of other machines. When it replaces an existing machine, it needs to set its IP address, DNS, WINS, and host name to match the machine being replaced. Is there a library in .NET (C#) which allows me to do this programmatically? There are multiple NICs, each of which needs to be set individually. EDIT Thank you TimothyP for your example. It got me moving on the right track and the quick reply was awesome. Thanks balexandre. Your code is perfect. I was in a rush and had already adapted the example TimothyP linked to, but I would have loved to have had your code sooner. I've also developed a routine using similar techniques for changing the computer name. I'll post it in the future so subscribe to this question's RSS feed if you want to be informed of the update. I may get it up later today or on Monday after a bit of cleanup.
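
    For anyone landing here later, a minimal sketch of the WMI-based approach the linked examples take (System.Management against Win32_NetworkAdapterConfiguration). The class and method names are standard WMI, but NIC selection, WINS, the host name change and error handling are left out and would need to be added per adapter:

        // Sketch under assumptions: runs with admin rights, requires a reference to
        // System.Management.dll, and applies the settings to every IP-enabled adapter;
        // a real wizard would match one specific NIC (e.g. by Description or Index).
        using System.Management;

        static class NetworkConfigurator
        {
            public static void SetStaticIp(string ip, string subnetMask, string gateway, string[] dnsServers)
            {
                ManagementClass mc = new ManagementClass("Win32_NetworkAdapterConfiguration");
                foreach (ManagementObject nic in mc.GetInstances())
                {
                    if (!(bool)nic["IPEnabled"]) continue;                 // skip disabled/virtual adapters

                    ManagementBaseObject newIp = nic.GetMethodParameters("EnableStatic");
                    newIp["IPAddress"] = new string[] { ip };
                    newIp["SubnetMask"] = new string[] { subnetMask };
                    nic.InvokeMethod("EnableStatic", newIp, null);         // address + mask

                    ManagementBaseObject newGw = nic.GetMethodParameters("SetGateways");
                    newGw["DefaultIPGateway"] = new string[] { gateway };
                    nic.InvokeMethod("SetGateways", newGw, null);          // default gateway

                    ManagementBaseObject newDns = nic.GetMethodParameters("SetDNSServerSearchOrder");
                    newDns["DNSServerSearchOrder"] = dnsServers;
                    nic.InvokeMethod("SetDNSServerSearchOrder", newDns, null); // DNS servers
                }
            }
        }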

    Read the article

  • Access denied error on select into outfile using Zend

    - by Peter
    Hi, I'm trying to make a dump of a MySQL table on the server and I'm trying to do this in Zend. I have a model/mapper/dbtable structure for all my connections to my tables and I'm adding the following code to the mappers: public function dumpTable() { $db = $this->getDbTable()->getAdapter(); $name = $this->getDbTable()->info('name'); $backupFile = APPLICATION_PATH . '/backup/' . date('U') . '_' . $name . '.sql'; $query = "SELECT * INTO OUTFILE '$backupFile' FROM $name"; $db->query( $query ); } This should work peachy, I thought, but Message: Mysqli prepare error: Access denied for user 'someUser'@'localhost' (using password: YES) is what this results in. I checked the user rights for someUser and he has all the rights to the database and table in question. I've been looking around here and on the net in general and usually turning on "all" the rights for the user seems to be the solution, but not in my case (unless I'm overlooking something right now with my tired eyes + I don't want to turn on "all" on my production server). What am I doing wrong here? Or, does anybody know a more elegant way to get this done in Zend?

    Read the article

  • How to rename Java packages without breaking Subversion history?

    - by M. Joanis
    Hello everyone, The company I'm working for is starting up and they changed their name in the process. So we still use the package name com.oldname because we are afraid of breaking the file change history, or the ancestry links between versions, or whatever we could break (I don't think I use the right terms, but you get the concept). We use: Eclipse, TortoiseSVN, Subversion I found somewhere that I should do it in many steps to prevent incoherence between content of .svn folders and package names in java files: First use TortoiseSVN to rename the directory, updating the .svn directories. Then, manually rename the directory back to the original name. To finally use Eclipse to rename the packages (refactor) back to the new name, updating the java files. That seems good to me, but I need to know if the ancestry and history and everything else will still be coherent and working well. I don't have the keys to that server, that's why I don't hastily backup things and try one or two things. I would like to come up with a good reason not to do it, or a way of doing it which works. Thank you for your help, M. Joanis

    Read the article

  • Full text index requires dropping and recreating - why?

    - by Amjid Qureshi
    Hi all, So I've got a web app running on .net 3.5 connected to a SQL 2005 box. We do scheduled releases every 2 weeks. About 14 tables out of 250 are full text indexed. After not every release, but a few too many, the indexes crap out. They seem to have data in there, but when we try to search them from the front end or SQL enterprise we get timeouts/hangs. We have a script that disables the indexes, drops them, deletes the catalog and then re creates the indexes. This fixes the problem 99 times out of 100. and the one other time, we run the script again and it all works We have tried just rebuilding the fulltext index but that doesn't fix the issue. My question is why do we have to do this ? what can we do to sort the index out? Here is a bit of the script, IF EXISTS (SELECT * FROM sys.fulltext_indexes fti WHERE fti.object_id = OBJECT_ID(N'[dbo].[Address]')) ALTER FULLTEXT INDEX ON [dbo].[Address] DISABLE GO IF EXISTS (SELECT * FROM sys.fulltext_indexes fti WHERE fti.object_id = OBJECT_ID(N'[dbo].[Address]')) DROP FULLTEXT INDEX ON [dbo].[Address] GO IF EXISTS (SELECT * FROM sysfulltextcatalogs ftc WHERE ftc.name = N'DbName.FullTextCatalog') DROP FULLTEXT CATALOG [DbName.FullTextCatalog] GO -- may need this line if we get an error BACKUP LOG SMS2 WITH TRUNCATE_ONLY CREATE FULLTEXT CATALOG [DbName.FullTextCatalog] ON FILEGROUP [FullTextCatalogs] IN PATH N'F:\Data' AS DEFAULT AUTHORIZATION [dbo] CREATE FULLTEXT INDEX ON [Address](CommonPlace LANGUAGE 'ENGLISH') KEY INDEX PK_Address ON [DbName.FullTextCatalog] WITH CHANGE_TRACKING AUTO go

    Read the article
